
digitalmars.D.announce - DMD 1.005 release

reply Walter Bright <newshound digitalmars.com> writes:
Fixes many bugs, some serious.

Some new goodies.

http://www.digitalmars.com/d/changelog.html

http://ftp.digitalmars.com/dmd.1.005.zip
Feb 05 2007
next sibling parent John <kmk200us yahoo.com> writes:
Walter Bright Wrote:

 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

Good stuff.
Feb 05 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

The new mixin stuff with the quotes seems a bit out of left field. However, I'd imagine you could write some very reusable code with the string concatenation stuff, although I'll bet it'll make things very hard to debug. Maybe even use the strings for some kinda meta/reflection coding. I can't wait to see some real-world examples.

-Joel
Feb 05 2007
prev sibling next sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

Mwahaha! This program, when run, prints out a copy of its own source. It also has some completely gratuitous new-style mixins.

// file test.d
mixin(`import std.stdio : writefln;`);
mixin(`void main() { mixin("writefln(import(\"test.d\"));"); }`);

-- 
Kirk McDonald
Pyd: Wrapping Python with D
http://pyd.dsource.org
Feb 05 2007
parent reply BCS <BCS pathlink.com> writes:
Kirk McDonald wrote:
 Walter Bright wrote:
 
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

Mwahaha! This program, when run, prints out a copy of its own source. It also has some completely gratuitous new-style mixins.

// file test.d
mixin(`import std.stdio : writefln;`);
mixin(`void main() { mixin("writefln(import(\"test.d\"));"); }`);

Without the gratuitous stuff that has to be the cleanest quine outside of bash (in bash an empty file prints nothing):

import std.stdio;
void main(){writef(import(__FILE__));}
Feb 06 2007
next sibling parent reply =?ISO-8859-1?Q?Jari-Matti_M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
BCS kirjoitti:
 Without the gratuitous stuff that has to be the cleanest quine outside
 of bash (in bash an empty file prints nothing)
 
 import std.stdio;
 void main(){writef(import(__FILE__));}

And if the strings don't mess up printf, it can be made even shorter:

void main(){printf(import(__FILE__));}
Feb 06 2007
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Jari-Matti Mäkelä" <jmjmak utu.fi.invalid> wrote in message 
news:eqaban$2l90$1 digitaldaemon.com...
 BCS kirjoitti:
 Without the gratuitous stuff that has to be the cleanest quine outside
 of bash (in bash an empty file prints nothing)

 import std.stdio;
 void main(){writef(import(__FILE__));}

And if the strings don't mess up printf, it can be made even shorter:

void main(){printf(import(__FILE__));}

"writef".length == "printf".length
Feb 06 2007
parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Jarrett Billingsley wrote:
 "Jari-Matti Mäkelä" <jmjmak utu.fi.invalid> wrote in message 
 news:eqaban$2l90$1 digitaldaemon.com...
 BCS kirjoitti:
 Without the gratuitous stuff that has to be the cleanest quine outside
 of bash (in bash an empty file prints nothing)

 import std.stdio;
 void main(){writef(import(__FILE__));}

void main(){printf(import(__FILE__));}

"writef".length == "printf".length

But the "printf" version is -= "import std.stdio;".length + 1; That said, for examplar|demonstrative D code I'd just assume avoid printf regardless. It just isn't "the D way." Personal preferance, though, would be: import tango.io.Stdout; void main(){Stdout(import(__FILE__));} -- Chris Nicholson-Sauls
Feb 06 2007
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Chris Nicholson-Sauls wrote:

 "writef".length == "printf".length

But the "printf" version is -= "import std.stdio;".length + 1;

If you use printf, you should use "import std.c.stdio;". That it works without it is a long-standing bug, IMHO.
 That said, for examplar|demonstrative D code I'd just assume
 avoid printf regardless. It just isn't "the D way."

I don't think there is anything inherently wrong with using printf or the rest of the C standard library, as long as it is explicitly imported by the D code ? Having "printf" defined in Object is evil, though... --anders
Feb 07 2007
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
BCS Wrote:
<snip>
 Without the gratuitous stuff that has to be the cleanest quine 
 outside of bash (in bash an empty file prints nothing)
 
 import std.stdio;
 void main(){writef(import(__FILE__));}

What is your definition of "clean"? Moreover, there are many languages in which an empty source file is a null program - BASIC, Perl and probably most shell scripting languages (indeed, probably most scripting languages) have this characteristic. In the course of history there have even been one or two C compilers that did this.

Stewart.
Feb 07 2007
prev sibling next sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

You fixed all the bugs I've added in recent memory. Plus, if I understand correctly, the implications of some of these features are staggering...

It looks like one could write a few hundred line module that can pull in and do compile-time interpreting of a language of the complexity of, say, Scheme. And the code in the module could be both readable and straightforward... And the results would be absorbed into the calling code as normal optimizable statements...

"I warn you, Doctor -- man was not meant to have this kind of power!"

Kevin
Feb 05 2007
next sibling parent Don Clugston <dac nospam.com.au> writes:
Kevin Bealer wrote:
 Walter Bright wrote:
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

You fixed all the bugs I've added in recent memory. Plus, if I understand correctly, the implications of some of these features are staggering...

It looks like one could write a few hundred line module that can pull in and do compile-time interpreting of a language of the complexity of, say, Scheme. And the code in the module could be both readable and straightforward... And the results would be absorbed into the calling code as normal optimizable statements...

"I warn you, Doctor -- man was not meant to have this kind of power!"

Kevin

My thoughts exactly. And it only warrants a version number increase of 0.001 ??? <g>
Feb 05 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Kevin Bealer wrote:
 You fixed all the bugs I've added in recent memory.  Plus, if I 
 understand correctly, the implications of some of these features is
 staggering...
 
 It looks like one could write a few hundred line module that can pull in 
 and do compile-time interpreting of a language of the complexity of say, 
 Scheme.  And the code in the module could be both readable and 
 straightforward...  And the results would be absorbed into the calling 
 code as normal optimizable statements...

The irony is that it only took 3 hours to implement, which shows the power of having the lexing, parsing, and semantic passes be logically distinct. The idea is to enable the creation of DSLs (Domain Specific Languages) that don't have the crippling problem C++ expression templates have - that of being stuck with C++ operators and precedence. To make this work, however, one must be able to manipulate strings at compile time. I've made a start on a library to do this, std.metastrings, based on earlier work by Don Clugston and Eric Anderton. This is just the start of what's going to happen with D 2.0.
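[In miniature, the compile-time string manipulation Walter describes looks like this. A hedged sketch in D1 syntax; `Decl` is an illustrative name, not part of std.metastrings.]

```d
// A sketch of compile-time string manipulation feeding a mixin.
// `Decl` is an illustrative name, not the std.metastrings API.
template Decl(char[] type, char[] name)
{
    // Concatenation of template string parameters happens entirely at
    // compile time; the result is an ordinary constant string.
    const char[] Decl = type ~ " " ~ name ~ ";";
}

// Mixing the generated string back in declares a real variable:
mixin(Decl!("int", "counter"));

void main()
{
    counter = 42;  // `counter` exists as a normal module-level int
}
```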
Feb 06 2007
next sibling parent "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
 The idea is to enable the creation of DSLs (Domain Specific Languages)
 that don't have the crippling problem C++ expression templates have -
 that of being stuck with C++ operators and precedence.

Not only that. This opens a clean and simple way to use pre-compile-time code generation. For example:

// greeting.d:
import std.stdio;

class Greeting
{
    mixin( import( "greeting.impl.d" ) );
}

void main()
{
    auto g = new Greeting;
    g.hello();
    g.bye();
}

// greeting.impl.d:
void hello() { writefln( "Hello!" ); }
void bye() { writefln( "Bye!" ); }

Where the content of greeting.impl.d can be generated by some domain-specific tool (such as ASN.1 serializer/deserializer generators). It's very good news! Thanks!

-- 
Regards,
Yauheni Akhotnikau
Feb 06 2007
prev sibling next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Kevin Bealer wrote:
 You fixed all the bugs I've added in recent memory.  Plus, if I 
 understand correctly, the implications of some of these features is
 staggering...

 It looks like one could write a few hundred line module that can pull 
 in and do compile-time interpreting of a language of the complexity of 
 say, Scheme.  And the code in the module could be both readable and 
 straightforward...  And the results would be absorbed into the calling 
 code as normal optimizable statements...

The irony is that it only took 3 hours to implement, which shows the power of having the lexing, parsing, and semantic passes be logically distinct. The idea is to enable the creation of DSLs (Domain Specific Languages) that don't have the crippling problem C++ expression templates have - that of being stuck with C++ operators and precedence. To make this work, however, one must be able to manipulate strings at compile time. I've made a start on a library to do this, std.metastrings, based on earlier work by Don Clugston and Eric Anderton. This is just the start of what's going to happen with D 2.0.

For those who haven't seen it, Walter updated http://www.digitalmars.com/d/mixin.html with a simple example of what you can do with the mixin string mojo. Phobos also has new documentation for metastrings: http://www.digitalmars.com/d/phobos/std_metastrings.html It looks like this feature is calling out for some sort of new 'here document' syntax, so that the code in strings can look and read more like code. It certainly seems to solve the long standing 'can't generate identifiers at compile time' feature request. And then some. :-) It'll be very exciting to see what people come up with! --bb
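[The 'generate identifiers at compile time' case Bill mentions might look like this. A hedged sketch in D1 syntax; `Property` is an illustrative name, not a library template.]

```d
import std.stdio;

// Illustrative sketch: generate a private field plus getter/setter
// whose identifiers are built from a compile-time string.
template Property(char[] type, char[] name)
{
    const char[] Property =
        "private " ~ type ~ " _" ~ name ~ "; "
        ~ type ~ " " ~ name ~ "() { return _" ~ name ~ "; } "
        ~ "void " ~ name ~ "(" ~ type ~ " v) { _" ~ name ~ " = v; }";
}

class Point
{
    mixin(Property!("int", "x"));  // generates _x, x(), x(int)
    mixin(Property!("int", "y"));
}

void main()
{
    auto p = new Point;
    p.x = 3;        // calls the generated setter
    writefln(p.x);  // reads back through the generated getter
}
```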
Feb 06 2007
prev sibling next sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Walter Bright wrote:
 Kevin Bealer wrote:
 You fixed all the bugs I've added in recent memory.  Plus, if I 
 understand correctly, the implications of some of these features is
 staggering...

 It looks like one could write a few hundred line module that can pull 
 in and do compile-time interpreting of a language of the complexity of 
 say, Scheme.  And the code in the module could be both readable and 
 straightforward...  And the results would be absorbed into the calling 
 code as normal optimizable statements...

The irony is that it only took 3 hours to implement, which shows the power of having the lexing, parsing, and semantic passes be logically distinct. The idea is to enable the creation of DSLs (Domain Specific Languages) that don't have the crippling problem C++ expression templates have - that of being stuck with C++ operators and precedence.

It's funny you should say that. I was kidding with Kris in IRC last week about how you could just slap a copy of DMDScript in the compiler and let us talk to it directly from within templates. While this isn't letting us muck about with the AST to create specialized grammars, this is certainly a more elegant solution. ... and it doesn't even require a separate syntax.
 
 To make this work, however, one must be able to manipulate strings at 
 compile time. I've made a start on a library to do this, 
 std.metastrings, based on earlier work by Don Clugston and Eric Anderton.
 
 This is just the start of what's going to happen with D 2.0.

-- - EricAnderton at yahoo
Feb 06 2007
parent Walter Bright <newshound digitalmars.com> writes:
Pragma wrote:
 Walter Bright wrote:
 The idea is to enable the creation of DSLs (Domain Specific Languages) 
 that don't have the crippling problem C++ expression templates have - 
 that of being stuck with C++ operators and precedence.

It's funny you should say that. I was kidding with Kris in IRC last week about how you could just slap a copy of DMDScript in the compiler and let us talk to it directly from within templates. While this isn't letting us muck about with the AST to create specialized grammars, this is certainly a more elegant solution. ... and it doesn't even require a separate syntax.

Andrei and I toyed with that exact idea for a while. It got shot down after it became clear that since DMDScript has a "same-only-different" syntax from D, it would be terribly confusing.
Feb 06 2007
prev sibling next sibling parent reply Lionello Lunesu <lio lunesu.remove.com> writes:
Walter Bright wrote:
 Kevin Bealer wrote:
 You fixed all the bugs I've added in recent memory.  Plus, if I 
 understand correctly, the implications of some of these features is
 staggering...

 It looks like one could write a few hundred line module that can pull 
 in and do compile-time interpreting of a language of the complexity of 
 say, Scheme.  And the code in the module could be both readable and 
 straightforward...  And the results would be absorbed into the calling 
 code as normal optimizable statements...

The irony is that it only took 3 hours to implement, which shows the power of having the lexing, parsing, and semantic passes be logically distinct. The idea is to enable the creation of DSLs (Domain Specific Languages) that don't have the crippling problem C++ expression templates have - that of being stuck with C++ operators and precedence. To make this work, however, one must be able to manipulate strings at compile time. I've made a start on a library to do this, std.metastrings, based on earlier work by Don Clugston and Eric Anderton.

You know the next step, right? A template version of htod!

include!("gl.h");

:D

L.
Feb 06 2007
parent Walter Bright <newshound digitalmars.com> writes:
Lionello Lunesu wrote:
 You know the next step, right? A template version of htod!
 
 include!("gl.h");

I did shy away from the "execute this shell command and insert its output into a string literal" because that would turn a D compiler into a huge security risk.
Feb 06 2007
prev sibling parent BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 To make this work, however, one must be able to manipulate strings at 
 compile time. I've made a start on a library to do this, 
 std.metastrings, based on earlier work by Don Clugston and Eric Anderton.
 
 This is just the start of what's going to happen with D 2.0.

It needs some string manipulation stuff. I'd be more than happy to let you put the string templates from dparse in. It has templates to:

- discard leading white space
- return a slice up to the first white space char
- return a slice starting with the first white space char
- return a slice up-to but not including the first instance of t
- return a slice starting after the first instance of t and containing the rest of the string
- discard [ ]* then return [a-zA-Z_][a-zA-Z0-9_]*
- non-string type template
- return a tuple with the string broken up by d
- return a tuple with the string broken up by white space
- tuple cdr (think lisp)
- check if anything in a tuple begins with a given prefix

source at: http://www.dsource.org/projects/scrapple/browser/trunk/dparser/dparse.d
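[For illustration, one such compile-time string template might be written recursively like this. A hedged sketch in D1 style, not the actual dparse source; `StripLeft` is an invented name.]

```d
// Discard leading white space at compile time via template recursion.
// Each instantiation peels off one leading space or tab.
template StripLeft(char[] s)
{
    static if (s.length > 0 && (s[0] == ' ' || s[0] == '\t'))
        const char[] StripLeft = StripLeft!(s[1 .. $]);
    else
        const char[] StripLeft = s;
}

// The result is available to static assert, mixins, etc.
static assert(StripLeft!("  \tfoo bar") == "foo bar");
```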
Feb 06 2007
prev sibling parent reply BLS <Killing_Zoe web.de> writes:
Walter Bright schrieb:

 The idea is to enable the creation of DSLs (Domain Specific Languages)

Kevin Bealer schrieb :
 It looks like one could write a few hundred line module that can pull 
 in  and do compile-time interpreting of a language of the complexity 
 of say Scheme.


To figure it out / This means:

procedure foo( num1, num2 )
  return aValue

or

PROCEDURE fo
  IN PARAMETERS num1, num2
  OUT PARAMETERS aValue

Is this something I can establish in D since 1.005?

Bjoern
Feb 06 2007
parent reply BLS <Killing_Zoe web.de> writes:
I guess there is a need for further explanation.

Either I am a complete idiot (not completely unrealistic) and 
misunderstood something, or a new, quite radical, programming paradigm 
change is on its way.  I mean it is difficult to realize the implications.
Bjoern


BLS schrieb:
 Walter Bright schrieb:
 
  > The idea is to enable the creation of DSLs (Domain Specific Languages)
 
 Kevin Bealer schrieb :
  > It looks like one could write a few hundred line module that can pull 
  > in  and do compile-time interpreting of a language of the complexity 
  > of say, Scheme.
 
 to figure it out / This means /
 
 procedure foo( num1, num2 )
   return aValue
 
 or
 
 PROCEDURE fo
   IN PARAMETERS num1, num2
   OUT PARAMETERS aValue
 
 
 Is this something I can establish in D since 1.005 ?
 ?
 Bjoern
 
 
 Kevin Bealer schrieb:
 


Feb 06 2007
next sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
BLS wrote:
 I guess there is a need for further explanation.
 
 Either I am a complete idiot (not completely unrealistic) and 
 misunderstood something, or a new, quite radical, programming paradigm 
 change is on its way.  I mean it is difficult to realize the implications.
 Bjoern

Just try to wrap your head around this:
http://www.digitalmars.com/d/mixin.html

template GenStruct(char[] Name, char[] M1)
{
    const char[] GenStruct = "struct " ~ Name ~ "{ int " ~ M1 ~ "; }";
}

mixin(GenStruct!("Foo", "bar"));

//which generates:

struct Foo { int bar; }

In short this means that we can have *100%* arbitrary code generation at compile time, w/o need of a new grammar to support the capability.

-- 
- EricAnderton at yahoo
Feb 06 2007
next sibling parent reply BLS <Killing_Zoe web.de> writes:
Pragma schrieb:
 BLS wrote:
 
 I guess there is a need for further explanation.

 Either I am a complete idiot (not completely unrealistic) and 
 misunderstood something, or a new, quite radical, programming 
 paradigm change is on its way.  I mean it is difficult to realize 
 the implications.
 Bjoern

Just try to wrap your head around this:
http://www.digitalmars.com/d/mixin.html

template GenStruct(char[] Name, char[] M1)
{
    const char[] GenStruct = "struct " ~ Name ~ "{ int " ~ M1 ~ "; }";
}

mixin(GenStruct!("Foo", "bar"));

//which generates:

struct Foo { int bar; }

In short this means that we can have *100%* arbitrary code generation at compile time, w/o need of a new grammar to support the capability.

Hi Eric,

I am able to read and understand the code (not necessarily the far-reaching implications). But the generated code is still D. So what does it mean:

Walter Bright schrieb:
 The idea is to enable the creation of DSLs (Domain Specific Languages)

Bjoern

Post scriptum: I can imagine the following scenario: the D compiler calls a translator, a modified Enki f.i., to translate a Domain Specific Language into D ... strange
Feb 06 2007
parent reply Pragma <ericanderton yahoo.removeme.com> writes:
BLS wrote:
 Pragma schrieb:
 BLS wrote:

 I guess there is a need for further explanation.

 Either I am a complete idiot (not completely unrealistic) and 
 misunderstood something, or a new, quite radical, programming 
 paradigm change is on its way.  I mean it is difficult to realize 
 the implications.
 Bjoern

Just try to wrap your head around this:
http://www.digitalmars.com/d/mixin.html

template GenStruct(char[] Name, char[] M1)
{
    const char[] GenStruct = "struct " ~ Name ~ "{ int " ~ M1 ~ "; }";
}

mixin(GenStruct!("Foo", "bar"));

//which generates:

struct Foo { int bar; }

In short this means that we can have *100%* arbitrary code generation at compile time, w/o need of a new grammar to support the capability.

Hi Eric,

I am able to read and understand the code (not necessarily the far-reaching implications). But the generated code is still D. So what does it mean:

Walter Bright schrieb:
 > The idea is to enable the creation of DSLs (Domain Specific Languages)

How? Bjoern

I think you answered your own question. :) Take the compile-time regexp lib that Don and I wrote a while back. Technically, regular expressions are a DSL of sorts. This feature just makes the implementation of stuff like that easier. The end result will still be D code.

auto widget = CreateNewWidget!("Some DSL Code");

I was confused too, since the wording could be interpreted as allowing you to just code in some other language, wherever you want. This is not the case. Ultimately, any DSL implemented in this fashion is going to have to operate on static strings.
 I can imagine the following scenario : D Compiler is calling a
 Translator, a modified Enki f.i. to translate a Domain Specific Language
 into D ... strange

I've thought about that too, much like BCS's work. The only thing keeping me from doing this *was* that the code generated would be largely inferior to that created by an external program. Thanks to the new syntax of mixin(), this is no longer the case.

-- 
- EricAnderton at yahoo
Feb 06 2007
next sibling parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Pragma wrote:
 BLS wrote:
 Pragma schrieb:
 BLS wrote:

  I guess there is a need for further explanation.

  Either I am a complete idiot (not completely unrealistic) and 
  misunderstood something, or a new, quite radical, programming 
  paradigm change is on its way.  I mean it is difficult to realize 
  the implications.
 Bjoern

Just try to wrap your head around this:
http://www.digitalmars.com/d/mixin.html

template GenStruct(char[] Name, char[] M1)
{
    const char[] GenStruct = "struct " ~ Name ~ "{ int " ~ M1 ~ "; }";
}

mixin(GenStruct!("Foo", "bar"));

//which generates:

struct Foo { int bar; }

In short this means that we can have *100%* arbitrary code generation at compile time, w/o need of a new grammar to support the capability.

Hi Eric,

I am able to read and understand the code (not necessarily the far-reaching implications). But the generated code is still D. So what does it mean:

Walter Bright schrieb:
 > The idea is to enable the creation of DSLs (Domain Specific Languages)

How? Bjoern

I think you answered your own question. :) Take the compile-time regexp lib that Don and I wrote a while back. Technically, regular expressions are a DSL of sorts. This feature just makes the implementation of stuff like that easier. The end result will still be D code.

auto widget = CreateNewWidget!("Some DSL Code");

You just showed something that I've been pondering attempting with this. Namely a GUI library that builds all forms/controls/etc from some sort of markup (probably modified XML or JSON), either at runtime /or/ compile-time. -- Chris Nicholson-Sauls
Feb 06 2007
prev sibling parent reply renoX <renosky free.fr> writes:
Pragma a écrit :
 BLS wrote:
 Pragma schrieb:
 BLS wrote:

Translator, a modified Enki f.i. to translate a Domain Specific Language into D ... strange

I've thought about that too- much like BCS's work. The only thing

Enki? BCS? Could you avoid mysterious references?

Regards,
renoX
Feb 07 2007
next sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
renoX wrote:
 Pragma a écrit :
 BLS wrote:
 Pragma schrieb:
 BLS wrote:

Translator, a modified Enki f.i. to translate a Domain Specific Language into D ... strange

I've thought about that too- much like BCS's work. The only thing

Enki? BCS? Could you avoid mysterious references?

Enki: http://www.dsource.org/projects/ddl/wiki/Enki

BCS is a poster in these newsgroups. He's been mentioning a project of his called dparser lately:
http://www.dsource.org/projects/scrapple/browser/trunk/dparser/dparse.d
Feb 07 2007
prev sibling parent reply BCS <BCS pathlink.com> writes:
renoX wrote:
 Pragma a écrit :
 
 BLS wrote:

 Pragma schrieb:

 BLS wrote:

I can imagine the following scenario : D Compiler is calling a Translator, a modified Enki f.i. to translate a Domain Specific Language into D ... strange

I've thought about that too- much like BCS's work. The only thing

Enki?

http://www.dsource.org/projects/ddl/browser/trunk/enki written by Pragma
 BCS?

http://www.dsource.org/projects/scrapple/browser/trunk/dparser/dparse.d written by me (BCS)
 
 Could you avoid mysterious references?
 
 Regards,
 renoX

Feb 07 2007
parent renoX <renosky free.fr> writes:
BCS a écrit :
 renoX wrote:
 Pragma a écrit :

 BLS wrote:

 Pragma schrieb:

 BLS wrote:

I can imagine the following scenario : D Compiler is calling a Translator, a modified Enki f.i. to translate a Domain Specific Language into D ... strange

I've thought about that too- much like BCS's work. The only thing

Enki?

http://www.dsource.org/projects/ddl/browser/trunk/enki written by Pragma
 BCS?

http://www.dsource.org/projects/scrapple/browser/trunk/dparser/dparse.d written by me (BCS)

Thanks, I was a bit lost. renoX
 
 Could you avoid mysterious references?

 Regards,
 renoX


Feb 07 2007
prev sibling parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Pragma wrote:
 BLS wrote:
  I guess there is a need for further explanation.

  Either I am a complete idiot (not completely unrealistic) and 
  misunderstood something, or a new, quite radical, programming 
  paradigm change is on its way.  I mean it is difficult to realize 
  the implications.
 Bjoern

Just try to wrap your head around this:
http://www.digitalmars.com/d/mixin.html

template GenStruct(char[] Name, char[] M1)
{
    const char[] GenStruct = "struct " ~ Name ~ "{ int " ~ M1 ~ "; }";
}

mixin(GenStruct!("Foo", "bar"));

//which generates:

struct Foo { int bar; }

In short this means that we can have *100%* arbitrary code generation at compile time, w/o need of a new grammar to support the capability.

Combine that with the ever increasing power of Tuples, some of the slowly improving type info, and compile-time string manipulation.... and you've got programs that write programs for writing programs that generate the original program automatically. o_O I like it. Might even scrap some old code in favor of it. -- Chris Nicholson-Sauls
Feb 06 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
BLS wrote:
  I guess there is a need for further explanation.
  
  Either I am a complete idiot (not completely unrealistic) and 
  misunderstood something, or a new, quite radical, programming paradigm 
  change is on its way.  I mean it is difficult to realize the implications.

I think you're right. The only thing that makes me uneasy is the "preprocessor abuse" that comes up in C++. We should be careful in how we use this, lest the cuticle side of the thumb take over.
Feb 06 2007
parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 BLS wrote:
  I guess there is a need for further explanation.

  Either I am a complete idiot (not completely unrealistic) and 
  misunderstood something, or a new, quite radical, programming 
  paradigm change is on its way.  I mean it is difficult to realize 
  the implications.

I think you're right. The only thing that makes me uneasy is the "preprocessor abuse" that comes up in C++. We should be careful in how we use this, lest the cuticle side of the thumb take over.

The most obvious danger is simply being able to eyeball what the source code for a module actually is, but that's been an issue for any sufficiently complex template code anyway. What I like about this feature is that it improves upon the power of macros but does so without providing a method for changing the meaning of existing symbols (the "#define if while" problem). It also requires almost no new language features, so it shouldn't have a tremendous learning curve. Finally, since all this works via strings, it should be easy to determine what's actually going on simply by tossing in a few pragma(msg) statements. If there were a way to emit the "expanded" source we could even use this as a "standalone" code generation tool of sorts. Nice work! Sean
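[Sean's pragma(msg) suggestion in miniature, as a hedged sketch; `GenCode` is an illustrative generator template, not a real library name.]

```d
// Generate a declaration as a compile-time string...
template GenCode(char[] name)
{
    const char[] GenCode = "int " ~ name ~ " = 0;";
}

// ...print it during compilation to see exactly what will be mixed in...
pragma(msg, GenCode!("total"));

// ...then actually compile it:
mixin(GenCode!("total"));

void main()
{
    total++;  // `total` is now a normal module-level variable
}
```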
Feb 06 2007
next sibling parent reply BCS <BCS pathlink.com> writes:
Sean Kelly wrote:
 
 
 The most obvious danger is simply being able to eyeball what the source 
 code for a module actually is, but that's been an issue for any 
 sufficiently complex template code anyway.  

How are #line directives handled? Is there any way to tell the debugger to look at another file?

mixin(MixInThisFile("foo"));
// results in this
// stuff
#line foo, 127
// stuff from foo:127
#line ...
// revert back to original file:line

Then, in the debugger, it would start stepping you through foo in the correct place.
 If 
 there were a way to emit the "expanded" source we could even use this as 
 a "standalone" code generation tool of sorts.  Nice work!

Put in a pragma(msg) in place of the mixin and you get the code. Maybe a mixin(string, filename) form would be nice. It would dump to the given file as well as generate code.
 
 
 Sean

Feb 06 2007
parent Sean Kelly <sean f4.ca> writes:
BCS wrote:
 Sean Kelly wrote:
 The most obvious danger is simply being able to eyeball what the 
 source code for a module actually is, but that's been an issue for any 
 sufficiently complex template code anyway.  

How are #line directives handled? Is there any way to tell the debugger to look at another file?

mixin(MixInThisFile("foo"));
// results in this
// stuff
#line foo, 127
// stuff from foo:127
#line ...
// revert back to original file:line

Then, in the debugger, it would start stepping you through foo in the correct place.

I suspect that generating debug info will require the mixed-in code to be expanded in place with the proper #line directives, etc, in the object file.
 If there were a way to emit the "expanded" source we could even use 
 this as a "standalone" code generation tool of sorts.  Nice work!

Put in a pragma msg in place of the mixin and you get the code.

Yup. I think for a standalone code generator, it would probably be better to generate the output file completely through pragma(msg) so as to omit the template code used for processing the mixins. For example, I figure it shouldn't be terribly difficult to do D codegen from a UML file in template code, etc. Sean
Feb 06 2007
prev sibling parent Pragma <ericanderton yahoo.removeme.com> writes:
Sean Kelly wrote:
 If there were a way to emit the "expanded" source we could even use this as 
 a "standalone" code generation tool of sorts.

Now there's something that's missing. At least with C compilers, they can usually be asked to spit out what things look like after the pre-processor is done with it. Getting the same kind of results from DMD, after all templates are evaluated would be great for diagnostics and debugging. -- - EricAnderton at yahoo
Feb 06 2007
prev sibling next sibling parent reply BCS <BCS pathlink.com> writes:
BLS wrote:
  I guess there is a need for further explanation.
  
  Either I am a complete idiot (not completely unrealistic) and 
  misunderstood something, or a new, quite radical, programming paradigm 
  change is on its way.  I mean it is difficult to realize the implications.
 Bjoern
 
 

This could do things like this:

BuildParserFromFileSpec!("foo.bnf")

That would import "foo.bnf", parse a BNF grammar, and build a parser from it. It even avoids the need to use functions for callbacks.

p.s. I'm going to have to try and re-implement dparse using this. Darn you Walter!!! I don't have time for this (cool stuff)!!! <G>
Feb 06 2007
parent reply Pragma <ericanderton yahoo.removeme.com> writes:
BCS wrote:
 BLS wrote:
 I guess there is a need for further explanation.

 Either I am a complete idiot (not completely unrealistic) and
 misunderstood something, or a new, quite radical, programming
 paradigm change is on its way.  I mean it is difficult to realize
 the implications.
 Bjoern

This could do things like this: BuildParserFromFileSpec!("foo.bnf"), which would import "foo.bnf", parse the BNF grammar, and build a parser from it. It even avoids the need to use functions for callbacks.

p.s. I'm going to have to try and re-implement dparse using this. Darn you Walter!!! I don't have time for this (cool stuff)!!! <G>

BCS: I may be tempted to Enki-ize your work once you're done with that. I think a compile-time rendition is due. ;) -- - EricAnderton at yahoo
Feb 06 2007
parent reply BCS <BCS pathlink.com> writes:
Pragma wrote:
 BCS wrote:
 p.s.
 I'm going to have to try and re-implement dparse using this.

BCS: I may be tempted to Enki-ize your work once you're done with that. I think a compile-time rendition is due. ;)

Done? What is that? I haven't heard the term before. <g>
Feb 06 2007
parent reply Pragma <ericanderton yahoo.removeme.com> writes:
BCS wrote:
 Pragma wrote:
 BCS wrote:
 p.s.
 I'm going to have to try and re-implement dparse using this.

BCS: I may be tempted to Enki-ize your work once you're done with that. I think a compile-time rendition is due. ;)

Done? What is that? I haven't heard the term before. <g>

BCS, you're not working at 3D-realms by any chance, are you? <eg> -- - EricAnderton at yahoo
Feb 06 2007
parent BCS <BCS pathlink.com> writes:
Pragma wrote:
 BCS wrote:
 
 Pragma wrote:

 BCS wrote:

 p.s.
 I'm going to have to try and re-implement dparse using this.

BCS: I may be tempted to Enki-ize your work once you're done with that. I think a compile-time rendition is due. ;)

Done? What is that? I haven't heard the term before. <g>

BCS, you're not working at 3D-realms by any chance, are you? <eg>

No, I'm a mechanical engineering undergraduate student working in declarative code/system generation (professionally) and secure software systems (academically)
Feb 06 2007
prev sibling parent reply Andreas Kochenburger <akk nospam.org> writes:
BLS wrote:
 I guess there is a need for further explanation.
 
 Either I am a complete idiot (not completely unrealistic) and
 misunderstood something, or a new, quite radical, programming paradigm
 change is on its way.  I mean it is difficult to realize the implications.
 Bjoern

I am not a D programmer (yet), only observing what is happening. I compare the new "code generation at compile-time" stuff in D with Forth. Forth also has a built-in interpreter & compiler which extends the language and can also execute macros at compile-time through EVALUATE. Of course Forth is much more low-level than D. But IMO the new mixins are not a "radical programming paradigm change". Perhaps I just misunderstood something?

Andreas
Feb 06 2007
parent reply janderson <askme me.com> writes:
Andreas Kochenburger wrote:
 BLS wrote:
 I guess there is a need for further explanation.

 Either I am a complete idiot (not completely unrealistic) and
 misunderstood something, or a new, quite radical, programming
 paradigm change is on its way.  I mean it is difficult to realize
 the implications.
 Bjoern

I am not a D programmer (yet), only observing what is happening. I compare the new "code generation at compile-time" stuff in D with Forth. Forth also has a built-in interpreter & compiler which extends the language and can also execute macros at compile-time through EVALUATE. Of course Forth is much more low-level than D. But IMO the new mixins are not a "radical programming paradigm change". Perhaps I just misunderstood something?

Andreas

I'm not familiar with Forth. Can you provide some examples? Does it allow partial macro definitions? Can you apply string operations on them at compile time?

-Joel
Feb 06 2007
next sibling parent Andreas Kochenburger <akk nospam.org> writes:
janderson wrote:
 I'm not familiar with Forth.  Can you provide some examples?  Does it
 allow partial macro definitions?  Can you apply string operations on
 them at compile time?

Forth is not a traditional compiler that generates executables from source files. A Forth system includes a built-in interpreter and compiler (and most systems have an assembler too). Source definitions are compiled and linked to the internal dictionary, i.e. you extend the system itself. You can not only create application programs, but you can also easily add new features to the compiler, e.g. new compiler commands.

Macro functions at compile-time are only a small exercise for a Forth programmer. You mark the last defined function IMMEDIATE, and the next time the function is used it is executed _at_compile-time_!

Please note, Forth and D are playing in different leagues. But you don't always have large object-oriented applications. For the more "bare metal" stuff Forth is more flexible than D.

If you never had contact with Forth you will probably find it rather strange: it is a stack-based language and uses post-fix notation like HP calculators.

Andreas
Feb 07 2007
prev sibling parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
janderson wrote:
 Andreas Kochenburger wrote:
 BLS wrote:
 I guess there is a need for further explanation.

 Either I am a complete idiot (not completely unrealistic) and
 misunderstood something, or a new, quite radical, programming
 paradigm change is on its way.  I mean it is difficult to realize
 the implications.
 Bjoern

I am not a D programmer (yet), only observing what is happening. I compare the new "code generation at compile-time" stuff in D with Forth. Forth also has a built-in interpreter & compiler which extends the language and can also execute macros at compile-time through EVALUATE. Of course Forth is much more low-level than D. But IMO the new mixins are not a "radical programming paradigm change". Perhaps I just misunderstood something?

Andreas

I'm not familiar with Forth. Can you provide some examples? Does it allow partial macro definitions? Can you apply string operations on them at compile time?

-Joel

Forth is a reverse Polish (postfix) language. Every word understood by the system is a command. Forth doesn't exactly HAVE a grammar. What it has is two stacks. (Sometimes more, but the additional stacks are optional, and their usage is inconsistent at best.) Forth commands generally operate on the stack. There are exceptional commands, those marked IMMEDIATE, which operate on the input stream. E.g.:

1 means push a 1 to the top of the stack. (This is typical of all integer literals. Floats are, or were, not standardized.)

+ means add together the top two items on the stack (removing them from the stack) and push their sum onto the top of the stack.

dup means take the value (number?) at the top of the stack and, without removing it, push it to the top of the stack.

if means examine the top of the stack. If it's true, continue execution; if not, skip down to the word following the end marker.

It's been too long or I'd give a few more examples. The important thing to note is that words are executed without respect to grammar, but with the ability to determine their context. Forth is very similar to LISP, only with a simpler grammar. I.e., the grammar is simple serial execution, with certain words (those marked immediate) able to manipulate the input stream to determine what will be the next in order.

N.B.: I'm discussing a basic Forth system, approximating FIG-FORTH, which is as close to standardized as Forth gets. There have been considerable variations. My favorite was Neon, an object-oriented Forth for the Mac from Kyria Systems (now long defunct). If they hadn't died while attempting to transition to MSWind95 I might have ended up as a Forth programmer.

But don't confuse Forth, in any of its variations, with D. Forth didn't really create compiled code...and it also wasn't an interpreter in any normal sense of the term. (I'd say isn't, but I'm not really familiar with current Forths.) Forth was what was called a "Threaded Interpretive Language".
It might be best to say that you didn't program Forth, you built a software machine that, when run, did what you wanted the program to accomplish. OTOH, looked at from a different angle, the closest analog to Forth is Smalltalk. But Smalltalk has too much grammar. Still, they both have that system library that is grown by users...and which makes the environment both richer and more complicated to use than languages like D, where the libraries are more distinct from the language.
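The stack discipline described above is easy to mimic. A naive postfix evaluator in D, purely as illustration (the helper is invented, and real Forth compiles words rather than re-scanning text):

```d
import std.stdio;
import std.string;  // split
import std.conv;    // toInt

// Evaluate a whitespace-separated postfix expression the way the words
// above behave: literals push themselves, "+" pops two values and pushes
// the sum, "dup" re-pushes the top of the stack.
int eval(char[] source)
{
    int[] stack;
    foreach (word; split(source))
    {
        if (word == "+")
        {
            stack[$ - 2] += stack[$ - 1];
            stack.length = stack.length - 1;  // pop
        }
        else if (word == "dup")
            stack ~= stack[$ - 1];
        else
            stack ~= toInt(word);             // a literal pushes itself
    }
    return stack[$ - 1];
}

void main()
{
    // 1 and 2 pushed, added, then doubled by dup +
    writefln(eval("1 2 + dup +"));
}
```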
Feb 07 2007
parent reply Kevin Bealer <kevinbealer gmail.com> writes:
Charles D Hixson wrote:
...
 Forth is very similar to LISP, only with a simpler grammar. I.e., the 
 grammar is simple serial execution, with  certain words (those marked 
 immediate) able to manipulate the input stream to determine what will be 
 the next in order.

I think the comparison to LISP is a good way to think about forth:

1. Write a lisp program, but put the function name after its arguments for every expression. This will be hard to read, but by following the parentheses you can just barely tell what is happening.

2. Remove all the parentheses.

At least 90% of the time, if you remove a symbol from a valid forth program, you get a valid (but incorrect) forth program -- there is almost zero redundancy in the language, so errors are almost never detectable at compile time, which means you should write short, clear functions.

But the central feature of FORTH is that the compiler and runtime can be made mind-bogglingly small. I think the run time speed for a naive interpretation is probably somewhere between C and interpreted bytecode. From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th

"The run-time engine takes up less than 1K of code space and the p-codes are so dense that you can get a lot of robot functionality in just 2K."

Of course, that's a compiler; an interactive language environment can be used for prototyping (like with lisp) and that will run a bit bigger.

Kevin
Feb 08 2007
parent reply Andreas Kochenburger <akk nospam.org> writes:
Kevin Bealer wrote:
 Charles D Hixson wrote:
 But the central feature of FORTH is that the compiler and runtime can be 
 made mind-bogglingly small.  I think the run time speed for a naive 
 interpretation is probably somewhere between C and interpreted bytecode.
 
  From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th
 
 "The run-time engine takes up less than 1K of code space and the p-codes 
 are so dense that you can get a lot of robot functionality in just 2K."

Before someone thinks, Forth is only a play-thing, see http://www.forth.com/

There are also excellent freeware versions around, f.ex. http://win32forth.sourceforge.net/

There is even an ANS/ISO standard for the language.

Andreas
Feb 08 2007
next sibling parent kris <foo bar.com> writes:
Andreas Kochenburger wrote:
 Kevin Bealer wrote:
 
 Charles D Hixson wrote:
 But the central feature of FORTH is that the compiler and runtime can 
 be made mind-bogglingly small.  I think the run time speed for a naive 
 interpretation is probably somewhere between C and interpreted bytecode.

  From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th

 "The run-time engine takes up less than 1K of code space and the 
 p-codes are so dense that you can get a lot of robot functionality in 
 just 2K."

Before someone thinks, Forth is only a play-thing, see http://www.forth.com/

There are also excellent freeware versions around, f.ex. http://win32forth.sourceforge.net/

There is even an ANS/ISO standard for the language.

Andreas

Yeah, Forth is an incredibly powerful language
Feb 08 2007
prev sibling next sibling parent Derek Parnell <derek nomail.afraid.org> writes:
On Thu, 08 Feb 2007 18:57:24 +0100, Andreas Kochenburger wrote:

 Kevin Bealer wrote:

 Before someone thinks, Forth is only a play-thing, see http://www.forth.com/
 
 There are also excellent freeware versions around, f.ex.
 http://win32forth.sourceforge.net/
 
 There is even an ANS/ISO standard for the language.

Forth was the first programming language that I felt a mystical connection to. I still love its simplicity/complexity dichotomy. I haven't really touched it in years though, so I may just go and spend some time with that old friend again.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
9/02/2007 10:31:43 AM
Feb 08 2007
prev sibling next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Andreas Kochenburger wrote:
 Kevin Bealer wrote:
 Charles D Hixson wrote:
 But the central feature of FORTH is that the compiler and runtime can 
 be made mind-bogglingly small.  I think the run time speed for a naive 
 interpretation is probably somewhere between C and interpreted bytecode.

  From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th

 "The run-time engine takes up less than 1K of code space and the 
 p-codes are so dense that you can get a lot of robot functionality in 
 just 2K."

Before someone thinks, Forth is only a play-thing, see http://www.forth.com/

There's an extra comma in there that pretty much changes the meaning of the sentence :o). Andrei
Feb 09 2007
prev sibling parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
Andreas Kochenburger wrote:
 Kevin Bealer wrote:
 Charles D Hixson wrote:
 But the central feature of FORTH is that the compiler and runtime can 
 be made mind-bogglingly small.  I think the run time speed for a naive 
 interpretation is probably somewhere between C and interpreted bytecode.

  From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th

 "The run-time engine takes up less than 1K of code space and the 
 p-codes are so dense that you can get a lot of robot functionality in 
 just 2K."

Before someone thinks, Forth is only a play-thing, see http://www.forth.com/ There are also excellent freeware versions around, f.ex. http://win32forth.sourceforge.net/ There is even ans ANS / ISO standard for the language. Andreas

I wouldn't have minded writing that, but there was a mistake in editing.
 But the central feature of FORTH is that the compiler and 
 runtime can be made mind-bogglingly small.  I think the run
 time speed for a naive interpretation is probably somewhere
 between C and interpreted bytecode.


Also, I'm not that impressed by SwiftForth (except the price they charge, THAT'S impressive). OTOH, I'm running Linux, so I can only judge them by their web pages. I tend to think of them as being rather like Allegro Lisp: enormously more expensive than the competition, and only marginally better. That said, I don't really have any evidence.

Were I to select a Forth to use, I'd probably pick gforth or bigforth (+ Minos?). Every once in a while I think of going back to it...but I've lost the books I once had on it. I've lost, sold, given away, or discarded my copies of Forth Dimensions, and it would really be a great deal of effort to get as familiar with it as I once was. So I don't.

Neon, though, was impressive. I think there's a version called MOPS or mops or some such still extant, but I don't know how complete it is, and last I checked it only ran on Mac pre-OSX. (I trust that's no longer true...if not, it's history.)
Feb 09 2007
parent reply Andreas Kochenburger <akk nospam.org> writes:
Charles D Hixson wrote:
 Were I to select a Forth to use I'm probably pick gforth or bigforth (+ 
 Minos?).

IMHO the tool (=programming language) should be chosen according to the work at hand, and to the mastership one can achieve as an individual with a certain tool. For some people FICL is a good tool: http://ficl.sourceforge.net/

It should not be too difficult to incorporate it into D applications as a _resident_ interactive debugging aid _during_runtime_.
 Neon, though, was impressive.  I think there's a version calls MOPS or 
 mops or some such still extant, but I don't know how complete it is, and 
 last I checked it only ran on Mac pre-OSX.  (I trust that's no longer 
 true...if not, it's history.)

See http://powermops.sourceforge.net/index.php/Main_Page
Feb 11 2007
parent Charles D Hixson <charleshixsn earthlink.net> writes:
Andreas Kochenburger wrote:
 Charles D Hixson wrote:
 Were I to select a Forth to use I'm probably pick gforth or bigforth 
 (+ Minos?).

IMHO the tool (=programming language) should be chosen according to the work at hand, and to the mastership one can achieve as an individual with a certain tool. For some people FICL is a good tool http://ficl.sourceforge.net/ It should not be too difficult to incorporate it into D applications as _resident_ interactive debugging aid _during_runtime_.
 Neon, though, was impressive.  I think there's a version calls MOPS or 
 mops or some such still extant, but I don't know how complete it is, 
 and last I checked it only ran on Mac pre-OSX.  (I trust that's no 
 longer true...if not, it's history.)

See http://powermops.sourceforge.net/index.php/Main_Page

Glad to hear it's still going... sorry to hear it's soon to be gone. (The author announces no plans to port to the Intel Macs...well, he's a bit stronger about it than that.)
Feb 11 2007
prev sibling next sibling parent Lionello Lunesu <lio lunesu.remove.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

!!!! This is just what I needed for a compile-time .rc compiler! L.
Feb 06 2007
prev sibling next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:

 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

Sounds like some nice new features, but even though the compiler seems to know that these new features are D 2.0, the spec doesn't show it. I'd suggest branching the specification now; after all, 1.0 shouldn't see any new feature changes. Without this, there is no point in the 1.0 marker whatsoever.

-- 
Lars Ivar Igesund
blog at http://larsivi.net
DSource & #D: larsivi
Dancing the Tango
Feb 06 2007
next sibling parent "Frank Benoit (keinfarbton)" <benoit tionex.removethispart.de> writes:
 Sounds like some nice new features, but even though the compiler seems to
 know that these new features are D 2.0, the spec doesn't show it. I'd suggest
 branching the specification now; after all, 1.0 shouldn't see any new
 feature changes. Without this, there is no point in the 1.0 marker
 whatsoever.
 

I second that.
Feb 06 2007
prev sibling parent BCS <BCS pathlink.com> writes:
Lars Ivar Igesund wrote:
 Walter Bright wrote:
 
 
Fixes many bugs, some serious.

Some new goodies.

http://www.digitalmars.com/d/changelog.html

http://ftp.digitalmars.com/dmd.1.005.zip

Sounds like some nice new features, but even though the compiler seems to know that these new features are D 2.0, the spec doesn't show it. I'd suggest branching the specification now; after all, 1.0 shouldn't see any new feature changes. Without this, there is no point in the 1.0 marker whatsoever.

I also second that: branch the spec or annotate it *very* well. Either way, the changelog should say v2.0 as well.
Feb 06 2007
prev sibling next sibling parent mike <vertex gmx.at> writes:
Nice!

That new mixin stuff will be great for that stacktrace I've been planning to do for months now ... gotta try that out asap :)

-Mike

On 06.02.2007, 05:54, Walter Bright <newshound digitalmars.com> wrote:

 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

-- 
Created with Opera's revolutionary e-mail module: http://www.opera.com/mail/
Feb 06 2007
prev sibling next sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

"The AssignExpression must evaluate at compile time to a constant string. The text contents of the string must be compilable as a valid StatementList, and is compiled as such.:" Arbitrary code generation?! This ought to make for some really slick compile-time code generators - say goodbye to delegate calling overhead and static variable bloat. The import expression thing has me scratching my head though: what path does DMD use to determine where to find the imported file? (it's not clear in the documentation) Awesome update Walter - thanks again. :) -- - EricAnderton at yahoo
Feb 06 2007
parent Walter Bright <newshound digitalmars.com> writes:
Pragma wrote:
 The import expression thing has me scratching my head though: what path 
 does DMD use to determine where to find the imported file? (it's not 
 clear in the documentation)

It just looks in the default directory. I know this is inadequate for a long term solution, but I wanted to see what people thought of it before spending a lot of effort on the details.
Feb 06 2007
prev sibling next sibling parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

The import stuff has been part of C for a long time (in the form of #include); however, I've never seen it used. Maybe with string operations it will be useful, but otherwise I don't see the point.

-Joel
Feb 06 2007
next sibling parent reply BCS <BCS pathlink.com> writes:
janderson wrote:
 Walter Bright wrote:
 
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

The import stuff has been part of C for a long time (in the form of #include), however I've never seen it used. Maybe with string operations it will be useful, but otherwise I don't see the point. =Joel

Not quite, I don't think this works in C:

char string[] = "#import<bigstring.txt>";

string gets the value of "#import<bigstring.txt>", not the contents of "bigstring.txt".
Feb 06 2007
parent janderson <askme me.com> writes:
BCS wrote:
 janderson wrote:
 Walter Bright wrote:

 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

The import stuff has been part of C for a long time (in the form of #include), however I've never seen it used. Maybe with string operations it will be useful, but otherwise I don't see the point. =Joel

Not quite, I don't think this works in C:

char string[] = "#import<bigstring.txt>";

string gets the value of "#import<bigstring.txt>", not the contents of "bigstring.txt".

No, but you could do:

char* string =
#import "bigstring.txt"
;

and have quotes in the bigstring.txt. D's looks nicer, but it's been around for ages. I'm sure combining it with the mixins will make all the difference.

-Joel
Feb 06 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 The import stuff has been part of C for a long time (in the form of 
 #include), however I've never seen it used.  Maybe with string 
 operations it will be useful, but otherwise I don't see the point.

The fundamental difference is that #include inserts *program text*, while import inserts the contents as a *string literal*. Some things that you can do with import that you cannot do with #include:

1) You can have tech writers write "help text" files, which can then be imported by the programmers as string literals. This means the tech writers do not have to be concerned in the slightest with string syntax.

2) It's an easy way to bind binary data into a program. For example, let's say one wants to embed an icon (.ico) file into your program binary. In C, one would have to write:

    static unsigned char icon[] = { 0x00, 0x10, 0x53, 0x29, ... };

meaning one must translate the binary data in foo.ico to the hex notation.

In D, one can write:

    static ubyte[] icon = cast(ubyte[])import("foo.ico");
Feb 06 2007
parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 janderson wrote:
 The import stuff has been part of C for a long time (in the form of 
 #include), however I've never seen it used.  Maybe with string 
 operations it will be useful, but otherwise I don't see the point.

The fundamental difference is that #include inserts *program text*, while import inserts the contents as a *string literal*. Some things that you can do with import that you cannot do with #include: 1) You can have tech writers write "help text" files, which can then be imported by the programmers as string literals. This means the tech writers do not have to be concerned in the slightest with string syntax.

They could just add quotes around them in C. Slightly less convenient; I've never seen anyone use that feature.
 
 2) It's an easy way to bind binary data into a program. For example, 
 let's say one wants to embed an icon (.ico) file into your program 
 binary. In C, one would have to write:
 
     static unsigned char icon[] = { 0x00, 0x10, 0x53, 0x29, ... };
 
 meaning one must translate the binary data in foo.ico to the hex notation.
 
 In D, one can write:
 
     static ubyte[] icon = cast(ubyte[])import("foo.ico");

This is a good point. Maybe it should be on the webpage. -Joel
Feb 06 2007
parent Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 Walter Bright wrote:
 Some things that you can do with import that you cannot do with #include:

 1) You can have tech writers write "help text" files, which can then 
 be imported by the programmers as string literals. This means the tech 
 writers do not have to be concerned in the slightest with string syntax.

They could just add quotes around them in C.

Not exactly. You cannot have multiline string literals in C. You'd have to use \n\ line splicing, and escape any " and \ embedded in the text: "Your text\n\ would have to\n\ look like this\n\ and be careful to\n\ escape any \"s in\n\ the text, as well as\n\ any \\s." Furthermore, string literals are limited to 4095 characters (if one cares about portability). C99 5.2.4.1 These restrictions are annoying enough that I've preferred using runtime reading and loading of the message file instead.
Feb 06 2007
prev sibling next sibling parent reply mike <vertex gmx.at> writes:
Updated DMD to 1.005 from 1.0 today and now I get this error:

C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font unknown size
C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font no size yet for forward reference

Does anybody know how to fix that? I've already searched the NG, the derelict forum, upgraded to the latest trunk ... nothing helped so far.

-Mike

On 06.02.2007, 05:54, Walter Bright <newshound digitalmars.com> wrote:

 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

-- 
Created with Opera's revolutionary e-mail module: http://www.opera.com/mail/
Feb 06 2007
parent reply Thomas Brix Larsen <brix brix-verden.dk> writes:
mike wrote:

 Updated DMD to 1.005 from 1.0 today and now I get this error:
 
 C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font unknown size
 C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font no size yet for forward reference
 
 Does anybody know how to fix that? I've already searched the NG, the
 derelict forum, upgraded to the latest trunk ... nothing helped so far.
 
 -Mike
 

It should read:

struct _TTF_Font {}

http://dsource.org/projects/derelict/browser/trunk/DerelictSDLttf/derelict/sdl/ttf.d

- Brix
Feb 06 2007
parent mike <vertex gmx.at> writes:
Thanks! That worked :)

On 06.02.2007, 19:28, Thomas Brix Larsen <brix brix-verden.dk> wrote:

 mike wrote:

 Updated DMD to 1.005 from 1.0 today and now I get this error:

 C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font unknown size
 C:\dmd\src\ext\derelict\sdl\ttf.d(79): struct derelict.sdl.ttf._TTF_Font no size yet for forward reference

 Does anybody know how to fix that? I've already searched the NG, the derelict forum, upgraded to the latest trunk ... nothing helped so far.

 -Mike

 It should read:

 struct _TTF_Font {}

 http://dsource.org/projects/derelict/browser/trunk/DerelictSDLttf/derelict/sdl/ttf.d

 - Brix

-- 
Created with Opera's revolutionary e-mail module: http://www.opera.com/mail/
Feb 06 2007
prev sibling next sibling parent reply Hasan Aljudy <hasan.aljudy gmail.com> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

Wow, this is no small change .. this should've been dmd 1.2 or something.

Now, there's already been a lot of talk about what new doors this might open, so I'm not gonna talk about that. What concerns me is that this will make semantic analysis more difficult to implement. Just think about "build/bud" for example; now the author will have to worry about things like:

mixin("import x.y.z;");

or even worse:

mixin(templ1!(something, templ2!(somethingelse), "x.y"));

I don't see how it's possible to interpret that without implementing a full compiler.

P.S. I know that for "build" all we need is a list of import files, and dmd already has a switch to do that.
Feb 06 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Hasan Aljudy wrote:
 I don't see how it's possible to interpret that without implementing a 
 full compiler.

You're right, it isn't possible.
 P.S. I know that for "build" all we need is a list of import files, and 
 dmd already has a switch to do that.

DMD will also output a list of files that are textually imported, so bud can pick them up at least the second time around.
Feb 06 2007
parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Tue, 06 Feb 2007 11:55:18 -0800, Walter Bright wrote:

 Hasan Aljudy wrote:
 I don't see how it's possible to interpret that without implementing a 
 full compiler.

You're right, it isn't possible.
 P.S. I know that for "build" all we need is a list of import files, and 
 dmd already has a switch to do that.

DMD will also output a list of files that are textually imported, so bud can pick them up at least the second time around.

Thanks Walter! :-(

Bud is no longer a useful tool because it can no longer do what it was trying to do - namely, find out which files need recompiling and get only that done. Because in order to do that now, it first has to recursively compile each command-line file and imported file using the -c -v switches to get a list of the potential files needing to be checked for recompilation. But seeing I've just compiled them to get this list, there is not much point now in /recompiling/ them. Also, mixin-imported files are not necessarily modules but must be treated as code fragments, so they can't be compiled to see if they in turn effectively import other files!

My work here is (un)done.

It seems that DMD now needs to be enhanced to do what Rebuild and Bud were trying to do.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Down with mediocrity!"
7/02/2007 10:18:52 AM
Feb 06 2007
next sibling parent BCS <ao pathlink.com> writes:
Reply to Derek,

 Thanks Walter! :-(
 
 Bud is no longer a useful tool because it can no longer do what it was
 trying to do - namely find out which files needed recompiling and get
 only that done. Because in order to do that now, it first has to
 recursively compile each command line file and imported file using the
 -c -v switches to get a list of the potential files needing to be
 checked for recompilation. But seeing I've just compiled them to get
 this list, there is not much point now in /recompiling/ them. Also,
 mixin-imported files are not necessarily modules but must be treated
 as code fragments, so they can't be compiled to see if they in-turn
 effectively import other files!
 
 My work here is (un)done.
 
 It seems that DMD now needs to be enhanced to do what Rebuild and Bud
 were trying to do.
 

If bud keeps around metadata about what happened last time (building N required A, B, C), then if none of those changed, the set of files that can be used can't change. Having that kind of tree would let you do a minimal rebuild even with the new import stuff.

Once you go that direction (DMD would have to report the files used), why not have DMD report the public interface of each module? That would let bud notice when a change in a module doesn't demand a rebuild of the modules that import it. Some of this might be possible by watching for actual changes in the .di file, not just checking modification dates.
Feb 06 2007
prev sibling next sibling parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Derek Parnell wrote:
 On Tue, 06 Feb 2007 11:55:18 -0800, Walter Bright wrote:
 
 Hasan Aljudy wrote:
 I don't see how it's possible to interpret that without implementing a 
 full compiler.

 P.S. I know that for "build" all we need is a list of import files, and 
 dmd already has a switch to do that.

DMD will also output a list of files that are textually imported, so bud can pick them up at least the second time around.

Thanks Walter! :-(

Bud is no longer a useful tool because it can no longer do what it was trying to do - namely find out which files needed recompiling and get only that done. Because in order to do that now, it first has to recursively compile each command line file and imported file using the -c -v switches to get a list of the potential files needing to be checked for recompilation. But seeing I've just compiled them to get this list, there is not much point now in /recompiling/ them. Also, mixin-imported files are not necessarily modules but must be treated as code fragments, so they can't be compiled to see if they in-turn effectively import other files!

My work here is (un)done.

It seems that DMD now needs to be enhanced to do what Rebuild and Bud were trying to do.

Perhaps it would be feasible to turn bud (and perhaps rebuild) into Makefile generators. -- Kirk McDonald Pyd: Wrapping Python with D http://pyd.dsource.org
Feb 06 2007
prev sibling next sibling parent Walter Bright <newshound digitalmars.com> writes:
Derek Parnell wrote:
 Bud is no longer a useful tool because it can no longer do what it was
 trying to do - namely find out which files needed recompiling and get only
 that done. Because in order to do that now, it first has to recursively
 compile each command line file and imported file using the -c -v switches
 to get a list of the potential files needing to be checked for
 recompilation. But seeing I've just compiled them to get this list, there
 is not much point now in /recompiling/ them. Also, mixin-imported files are
 not necessarily modules but must be treated as code fragments, so they
 can't be compiled to see if they in-turn effectively import other files!
 
 My work here is (un)done.
 
 It seems that DMD now needs to be enhanced to do what Rebuild and Bud were
 trying to do. 

The compiler cannot tell what file it'll need to textually import without compiling either, so it cannot do a 'make' on textual imports. No tool is perfect; I recommend just ignoring the problem with textual imports. They shouldn't be used outside of specialized modules.
Feb 06 2007
prev sibling parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
Derek Parnell wrote:
 On Tue, 06 Feb 2007 11:55:18 -0800, Walter Bright wrote:
 
 Hasan Aljudy wrote:
 I don't see how it's possible to interpret that without implementing a 
 full compiler.

 P.S. I know that for "build" all we need is a list of import files, and 
 dmd already has a switch to do that.

DMD will also output a list of files that are textually imported, so bud can pick them up at least the second time around.

Thanks Walter! :-( Bud is no longer a useful tool because it can no longer do what it was trying to do - namely find out which files needed recompiling and get only that done.

Really? I always use the -full -clean switches. I use build/bud because dmd is not smart enough to compile all the files needed to make the program. dmd is fast enough not to require one to bother with "selective" compilation (if one can say so ..).
Feb 06 2007
prev sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
Hasan Aljudy wrote:
 
 
 Walter Bright wrote:
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

Wow, this is no small change .. this should've been dmd 1.2 or something.

Now, there's already been a lot of talk about what new doors this might open, so I'm not gonna talk about that. What concerns me is that this will make semantic analysis more difficult to implement. Just think about "build/bud" for example; now the author will have to worry about things like:

mixin("import x.y.z");

or even worse:

mixin(templ1!(something, templ2!(somethingelse), "x.y"));

I don't see how it's possible to interpret that without implementing a full compiler.

P.S. I know that for "build" all we need is a list of import files, and dmd already has a switch to do that.

I also like the new features and think the same as you.

First, I don't know what the real benefit of these features is. I mean, I want to see a real-world example using mixins before thinking they are great (BTW, the first time I saw them PHP immediately came into my head... even more after the post about """ msg """).

Second, this makes it even harder to get good IDE support for D. You can have syntax coloring, and that's it. Autocompletion is going to be a very tough part: the IDE must act as a compiler, as you say, to figure out what the program will look like so that it can know what declarations are available to the programmer.

Anyway, I'm still working on Descent, and when I get to that part (which I plan to implement because it's all there, in DMD... I guess?), I'll tell you. :-)
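[For illustration, a minimal sketch of the kind of real-world mixin use being asked about - all names here are invented, and the syntax is present-day D:]

```d
// Hypothetical sketch: build declarations as a string at compile
// time and mix them in, instead of writing the boilerplate by hand.
import std.stdio : writeln;

// Returns "int _x; int x() { return _x; }"-style accessor code.
string property(string type, string name)
{
    return type ~ " _" ~ name ~ "; "
         ~ type ~ " " ~ name ~ "() { return _" ~ name ~ "; }";
}

class Point
{
    mixin(property("int", "x"));
    mixin(property("int", "y"));
}

void main()
{
    auto p = new Point;
    p._x = 3;
    writeln(p.x()); // prints 3
}
```

The point of the sketch is that one helper replaces per-field boilerplate, which is the sort of reuse being discussed - at the cost of the debuggability concern raised above.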
Feb 06 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Ary Manzana wrote:
 Second, this makes even harder to get good IDE support for D. You can 
 have syntax coloring, and that's it. Autocompletion is going to be a 
 very though [tough] part: the IDE must act as a compiler, as you say it, to 
 figure out what the program will look like so that it can know what are 
 the declarations available to the programmer.

True, but on the other hand, specifically not supporting it in the IDE may act as a needed brake on evil uses of it.
Feb 06 2007
parent reply Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Ary Manzana wrote:
 Second, this makes even harder to get good IDE support for D. You can 
 have syntax coloring, and that's it. Autocompletion is going to be a 
 very though [tough] part: the IDE must act as a compiler, as you say it, to 
 figure out what the program will look like so that it can know what 
 are the declarations available to the programmer.

True, but on the other hand, specifically not supporting it in the IDE may act as a needed brake on evil uses of it.

I didn't say an IDE won't support it, I said it'll be very hard to get there :-)

But... I'm wondering what the evil uses of it are. For me it's now almost impossible not to program with an IDE (big projects, I mean). At least in Java. Maybe compile-time stuff will make it such that an IDE won't be needed anymore, but it's very hard for me to see that happening.
Feb 06 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Ary Manzana wrote:
 Walter Bright wrote:
 Ary Manzana wrote:
 Second, this makes even harder to get good IDE support for D. You can 
 have syntax coloring, and that's it. Autocompletion is going to be a 
 very though [tough] part: the IDE must act as a compiler, as you say it, to 
 figure out what the program will look like so that it can know what 
 are the declarations available to the programmer.

True, but on the other hand, specifically not supporting it in the IDE may act as a needed brake on evil uses of it.

I didn't say an IDE won't support it, I said it'll be very hard to get there :-) But... I'm wondering which are the evil uses of it. For me it's now almost impossible not to program with an IDE (big projects, I mean). At least in Java. Maybe compile time stuff will make it such that an IDE won't be needed anymore. But it's very hard for me to see that happening.

Oddly, I've found myself moving away from IDEs over the years, perhaps partially because the editors I like to use aren't IDEs. About the only time I use an IDE any more is for debugging... coding happens elsewhere. Sean
Feb 06 2007
parent Charlie <charlie.fats gmail.com> writes:
Sean Kelly wrote:
 Ary Manzana wrote:
 Walter Bright wrote:
 Ary Manzana wrote:
 Second, this makes even harder to get good IDE support for D. You 
 can have syntax coloring, and that's it. Autocompletion is going to 
 be a very though [tough] part: the IDE must act as a compiler, as you say 
 it, to figure out what the program will look like so that it can 
 know what are the declarations available to the programmer.

True, but on the other hand, specifically not supporting it in the IDE may act as a needed brake on evil uses of it.

I didn't say an IDE won't support it, I said it'll be very hard to get there :-) But... I'm wondering which are the evil uses of it. For me it's now almost impossible not to program with an IDE (big projects, I mean). At least in Java. Maybe compile time stuff will make it such that an IDE won't be needed anymore. But it's very hard for me to see that happening.

Oddly, I've found myself moving away from IDEs over the years, perhaps partially because the editors I like to use aren't IDEs. About the only time I use an IDE any more is for debugging... coding happens elsewhere. Sean

When you inherit code or start to code on an existing project, the ability to 'ctrl+click' to jump to that variable's definition is a huge time saver; otherwise you have to grep through tons of files, and in large libraries the classes may be nested very deep, so you might have to ctrl+click 5 times before you get to what you're looking for.

I've actually gone the opposite way, drifting towards IDEs over time, even though for 'one-modulers' it remains <insert favorite editor here>.

I am really looking forward to Descent; I hope you can devote enough time to it, Ary.

Charlie
Feb 06 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Ary Manzana wrote:
Walter Bright wrote:
 True, but on the other hand, specifically not supporting it in the IDE 
 may act as a needed brake on evil uses of it.


From my point of view, evil uses of it are things like version control:

    mixin(import("versions.txt"));

where versions.txt contains:

    version = FOO;
    version = BAR;

etc., or other uses that subvert the right way to do things. One should think long and hard about using textual import to import D code.

What it's for is to:

1) import data for a string constant
2) import code that's in DSL (Domain Specific Language), not D, form.
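[A sketch of what the sanctioned DSL use might look like. Everything here is invented for illustration; in real use the DSL text would come from import("config.dsl") with the -J switch rather than a literal:]

```d
import std.stdio : writeln;

// A toy DSL: each "name=value" line becomes a D constant.  In real
// use the source text would come from import("config.dsl") with the
// -J switch; a literal is used here so the sketch is self-contained.
enum dslSource = "width=640\nheight=480";

// CTFE-able translator from the toy DSL to D declarations.
string translate(string dsl)
{
    string code, line;
    foreach (ch; dsl ~ '\n')
    {
        if (ch != '\n') { line ~= ch; continue; }
        foreach (i, c; line)
            if (c == '=')
            {
                // "name=value" -> "enum int name = value;"
                code ~= "enum int " ~ line[0 .. i] ~ " = "
                      ~ line[i + 1 .. $] ~ ";\n";
                break;
            }
        line = null;
    }
    return code;
}

mixin(translate(dslSource));   // declares width and height

void main()
{
    writeln(width * height);   // prints 307200
}
```

The DSL stays in its own notation, and only a small translator touches D - which is use (2), as opposed to mixing in raw D source.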
Feb 06 2007
next sibling parent reply Hasan Aljudy <hasan.aljudy gmail.com> writes:
Walter Bright wrote:
 Ary Manzana wrote:
 Walter Bright wrote:
 True, but on the other hand, specifically not supporting it in the 
 IDE may act as a needed brake on evil uses of it.


From my point of view, evil uses of it are things like version control:

    mixin(import("versions.txt"));

where versions.txt contains:

    version = FOO;
    version = BAR;

etc., or other uses that subvert the right way to do things. One should think long and hard about using textual import to import D code.

Why is that evil? I think it's actually a great idea. "versions" are a sort of configuration that determines which code should be compiled and which code shouldn't. Storing this configuration in a separate file makes sense to me.
 
 What it's for is to:
 
 1) import data for a string constant
 2) import code that's in DSL (Domain Specific Language), not D, form.

Feb 06 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Hasan Aljudy wrote:
 Walter Bright wrote:
  From my point of view, evil uses of it are things like version control:

     mixin(import("versions.txt"));

 where versions.txt contains:

     version = FOO;
     version = BAR;

 etc., or other uses that subvert the right way to do things. One 
 should think long and hard about using textual import to import D code.

Why is that evil? I think it's actually a great idea. "versions" are a sort of configuration that determines which code should be compiled and which code shouldn't. Storing this configuration in a separate file makes sense to me.

The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules. This is a lot easier to manage when you're dealing with larger, more complex code, although it is more work up front.
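[A rough sketch of the "API instead of version blocks" idea - the module and function names are invented, and in the fuller form each version would live in its own file with the makefile picking one:]

```d
// threadapi.d -- the only place that mentions platforms; the rest
// of the program just does "import threadapi;" and calls yield().
module threadapi;

version (Windows)
{
    import core.sys.windows.winbase : Sleep;
    // Give up the remainder of the current time slice.
    void yield() { Sleep(0); }
}
else version (linux)
{
    import core.sys.posix.sched : sched_yield;
    void yield() { sched_yield(); }
}
else
    static assert(0, "port me: implement yield() for this platform");
```

Porting to a new platform then means adding one branch (or one file) behind a stable interface, instead of touching every call site.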
Feb 06 2007
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter Bright wrote:

 Why is that evil? I think it's actually a great idea. "versions" are a 
 sort of configuration that determines which code should be compiled 
 and which code shouldn't. Storing this configuration in a separate 
 file makes sense to me.

The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules. This is a lot easier to manage when you're dealing with larger, more complex code, although it is more work up front.

How would you use something like autoconf with this approach ? Would it need to generate different Makefiles / D file lists for different options/versions, instead of just -version statements ? Example of things I am thinking about are "__WXGTK__", "UNICODE", or "HAVE_OPENGL". With C/C++, they're usually in a config.h file. --anders
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Anders F Björklund wrote:
 Walter Bright wrote:
 
 Why is that evil? I think it's actually a great idea. "versions" are 
 a sort of configuration that determines which code should be compiled 
 and which code shouldn't. Storing this configuration in a separate 
 file makes sense to me.

The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules. This is a lot easier to manage when you're dealing with larger, more complex code, although it is more work up front.

How would you use something like autoconf with this approach ? Would it need to generate different Makefiles / D file lists for different options/versions, instead of just -version statements ? Example of things I am thinking about are "__WXGTK__", "UNICODE", or "HAVE_OPENGL". With C/C++, they're usually in a config.h file.

I've worked with the C approach for many years, and have gotten increasingly dissatisfied with it. Over time, it leads to conflicting, misused, overlapping version macros. I've also tried the "make an API for the version" method, and have been much more satisfied with it. You can see it at work in the gc implementation (see gclinux.d and win32.d).
Feb 07 2007
parent reply Anders F Björklund <afb algonet.se> writes:
Walter Bright wrote:

 Example of things I am thinking about are "__WXGTK__", "UNICODE",
 or "HAVE_OPENGL". With C/C++, they're usually in a config.h file.

I've worked with the C approach for many years, and have gotten increasingly dissatisfied with it. Over time, it leads to conflicting, misused, overlapping version macros. I've also tried the "make an API for the version" method, and have been much more satisfied with it. You can see it at work in the gc implementation (see gclinux.d and win32.d).

OK, see what you mean. And the Makefile would pick which implementation gets used, for the common interface chosen ?

Not sure there's a whole world of difference between the:

version(Win32) // or foowin32.d
void foo() { ...this... }
version(linux) // or foolinux.d
void foo() { ...that... }

and

void foo()
{
    version(Win32)
        ...this...
    version(linux)
        ...that...
}

But at any rate, it's preferred to handle it outside of D. OK. Either through rewriting the code, or in the Makefiles. OK.

With autoconf, I normally want the SAME piece of code to work on all platforms - so it takes a different approach. For D, I would instead write one piece of code for EACH platform and avoid versioning it (as much as possible) ?

I've worked with the "./configure && make" approach for many years and kinda like it, but will try out new ideas. Then again I kinda like the preprocessor too, and prefer writing C over C++, so maybe I'm just stuck on the old. :-)

Not changing anything for the "ported" projects (like wxD), but it will be something to keep in mind for future D ones. Will do some thinking on how it would apply to options... (choices, as opposed to just platform/portability macros)

--anders
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Anders F Björklund wrote:
 Walter Bright wrote:
 I've also tried the "make an API for the version" method, and have 
 been much more satisfied with it. You can see it at work in the gc 
 implementation (see gclinux.d and win32.d).

OK, see what you mean. And the Makefile would pick which implementation gets used, for the common interface chosen ?

Yes, that's a reasonable way to do it.
 Not sure there's a whole world of difference between the:
 
 version(Win32) // or foowin32.d
 void foo() { ...this... }
 version(linux) // or foolinux.d
 void foo() { ...that... }
 
 and
 
 void foo()
 {
 version(Win32)
 ...this...
 version(linux)
 ...that...
 }

There isn't if the functions are trivial. But when they get more involved, it becomes a mess. How many times have you had to compile with just the preprocessor, just to figure out which branch of the rat's nest of #if's was actually getting compiled?

Many years ago, when assembler coding was popular, Microsoft produced a macro library to make coding in assembler sort-of-but-not-quite-like coding in some pseudo-high level language. The layers of macros were so bad that a friend of mine, in order to work on such code written by others, resorted to assembling it, *disassembling* the result, pasting the disassembled source back into the source file, and starting over.
 But any rate, it's preferred to handle it outside of D. OK.
 Either through rewriting the code, or in the Makefiles. OK.
 
 With autoconf, I normally want the SAME piece of code to
 work on all platforms - so it does a different approach.
 
 For D, I would instead write one piece of code for EACH
 platform and avoid versioning it (as much as possible) ?

Yes, in the end, I think that's a more maintainable solution. You'll find your core modules will become much more portable, and you shouldn't need to edit (or even understand) them at all when porting to a new platform.

If your job is to port the gc to a new XYZ platform, would you find it easier to edit the long and complicated gcx.d, or just copy gclinux.d to gcXYZ.d and restrict your work to just figuring out how to port a few lines of code with (hopefully) well-defined behavior?
 I've worked with the "./configure && make" approach for
 many years and kinda like it, but will try out new ideas.
 Then again I kinda like the preprocessor too, and prefer
 writing C over C++, so maybe I'm just stuck on the old. :-)
 
 Not changing anything for the "ported" projects (like wxD),
 but it will be something to keep in mind for future D ones.
 Will do some thinking on how it would apply to options...
 (choices, as opposed to just platform/portability macros)
 
 --anders

Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 
 Yes, in the end, I think that's a more maintainable solution. You'll 
 find your core modules will become much more portable, and you shouldn't 
 need to edit (or even understand) them at all when porting to a new 
 platform.
 
 If your job is to port the gc to a new XYZ platform, would you find it 
 easier to edit the long and complicated gcx.d, or just copy gclinux.d to 
 gcXYZ.d and restrict your work to just figuring out how to port a few 
 lines of code with (hopefully) well-defined behavior?
 

And therein lies the primary issue I have with this approach. Say I do the above, and then a bug is found in the system-independent parts of the module; now I have to extract the fix from a fixed version and reapply it to my version. I am a strong proponent of the theory that you should never have two pieces of /identical/ code (as in, code that does the exact same thing). It's kinda sorta the Extreme Programming model[*], but that's not where I'm coming from.

An example: I am working on a D lexer. It needs to work on 6 types of files: ASCII and UTF-8/16BE/16LE/32BE/32LE. Also, to make things easier downstream, I have it convert all EOLs to \n. Well, rather than write 6 scanners, I wrote one that is templated on two types, which (I think) results in correct conversions. It is about 30 lines long, and the sum total difference between the functions is the types on about 3 lines. Now if at some point I find a bug, I fix it in one place and I'm done.

The same sort of thing can apply to other cases.

* I think that is the model but I could be remembering the wrong name
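[A toy version of the one-scanner-many-encodings idea - the details are invented, not BCS's actual lexer. The body is written once and instantiated per code-unit type:]

```d
import std.stdio : writeln;

// Counts logical lines while treating \r\n and bare \r as one \n --
// the same body would otherwise be written once per encoding.
size_t countLines(Char)(const(Char)[] src)
{
    size_t n;
    for (size_t i = 0; i < src.length; ++i)
    {
        if (src[i] == '\r')
        {
            if (i + 1 < src.length && src[i + 1] == '\n')
                ++i;            // \r\n collapses to a single EOL
            ++n;
        }
        else if (src[i] == '\n')
            ++n;
    }
    return n;
}

void main()
{
    writeln(countLines("a\r\nb\nc\r"));   // prints 3 (char instance)
    writeln(countLines("a\r\nb\nc\r"w));  // prints 3 (wchar instance)
}
```

One template, several instantiations: a bug fix lands in one place, which is the maintenance argument being made above.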
Feb 07 2007
parent Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Walter Bright wrote:
 Yes, in the end, I think that's a more maintainable solution. You'll 
 find your core modules will become much more portable, and you 
 shouldn't need to edit (or even understand) them at all when porting 
 to a new platform.

 If your job is to port the gc to a new XYZ platform, would you find it 
 easier to edit the long and complicated gcx.d, or just copy gclinux.d 
 to gcXYZ.d and restrict your work to just figuring out how to port a 
 few lines of code with (hopefully) well-defined behavior?

And there in lies the primary issue I have with this approach. Say I do the above and then a bug is found in the system independent parts of the module, now I have to extract the fix from a fixed version and reapply it to my version. I am a strong proponent of the theory that you should never have two peaces of /Identical/ code (as in does the exact same thing). It's kinda sorta the Extreme Programming model[*] but that's not where I'm coming from.

If there's a lot of identical code in this approach, then perhaps the abstraction layer is drawn in the wrong place. There isn't much in the gc version api implementations.
 An example, I am working on a D lexer, it needs to work on 6 type of 
 files ASCII and UTF-8/16BE/16LE/32BE/32LE. also to make things easier 
 downstream, I have it convert all EOLs to \n. Well rather than write 6 
 scanners, I wrote one that was templated on two types (I think) results 
 in correct conversions. It is about 30 lines long and the sum total 
 difference between the functions is the types on about about 3 lines. 
 Now if at some point I find a bug, I fix it in one place and I'm done.
 
 The same sort of thing can apply to other cases.

Using templates is a great way to write generic code, but it's a different approach than using versions.
Feb 07 2007
prev sibling parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 
 
 The right way to do versions that cut across multiple files is to 
 abstract the versioning into an API, and implement the different 
 versions in different modules.
 

What about cases where 90% of the code is identical but small bits and pieces are different? If I understand correctly, to do what you suggest would require that those bits be put in functions, with several versions of each function somewhere else. This could be a problem in several ways:

====Tiny bits of code would require tiny functions that would hide what is going on.

version(RowMajor)
    x = table[i][j];
else // RowMinor
    x = table[j][i];

====Empty else cases would result in do-nothing functions:

version(StrongChecks)
{
    if(foo) ...
    if(bar) ...
    ...
}
//empty else

====You can't break across function calls:

switch(i)
{
    case 1:
        version(Baz)
            if(baz) break;
        else
            break;
    case 2:

    ...// lots of un-versioned code
}

or

switch(i)
{
    version(Baz)
    {
        case 1:
            if(baz) break;
    }
    case 2:
    ...// lots of un-versioned code
    version(!Baz)
    {
        case 1:
    }
}

====Lots of version combinations:

version(Foo) i = foo(i);
version(Boo) i = boo(i);
version(Fig) i = fig(i);
version(Baz) i = baz(i);
version(Bar) i = bar(i);    //32 options???

Are these valid concerns? Am I misunderstanding what you said?
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Walter Bright wrote:
 The right way to do versions that cut across multiple files is to 
 abstract the versioning into an API, and implement the different 
 versions in different modules.

What about cases where 90% of the code is identical but small bits and pieces are different? If I understand correctly, to do what you suggest would require that those bits be put in functions, with several versions of each function somewhere else. This could be a problem in several ways:

====Tiny bits of code would require tiny functions that would hide what is going on.

Yes, it would require tiny functions, though I don't agree they hide what is going on. Presumably a descriptive name would be used for it.

One of the nice things about it is that porting the code requires generating a new module with the right implementations in it, which is a lot easier than going through checking all the #ifdef's (yes, I know one shouldn't have to do that, but in practice you ALWAYS have to because the macros always get misapplied, and often get forgotten to even apply). With an API, it is difficult to forget to use it, and difficult to use the wrong macro.

For example, when writing portable C code, one is often faced with the macros:

    __GNUC__ for the Gnu compiler
    linux for the host operating system
    TARGET_LINUX for the target operating system we're cross compiling for

I can't even count the number of times __GNUC__ was being used to decide whether we're compiling for windows or linux, or the host operating system was confused with the target:

    #if __GNUC__
    #include <pthreads.h>
    #else
    #include <windows.h>
    #endif

AAAARRRRRGGGGGHHHHH!!! That, my friends, is evil.

Now, if one abstracted away what one was *doing* with threads, then one just does:

    import mythreadapi;

and provides a different implementation of mythreadapi for windows or linux. It's a LOT harder to screw that up.
 version(RowMajor)
     x = table[i][j];
 else // RowMinor
     x = table[j][i];

int GetRow(i,j) { return table[i][j]; }

The function gets inlined.
 ====Empty else cases would result in do nothing functions:
 
 version(StrongChecks)
 {
     if(foo) ...
     if(bar) ...
     ...
 }
 //empty else
 
 ====You can't break across function calls
 
 switch(i)
 {
     case 1:
         version(Baz)
             if(baz) break;
         else
             break;
     case 2:
 
     ...// lots of un versioned code
 }

if (globalversion.baz && baz) break;
 ====lots of version combinations
 
 version(Foo) i = foo(i);
 version(Boo) i = boo(i);
 version(Fig) i = fig(i);
 version(Baz) i = baz(i);
 version(Bar) i = bar(i);    //32 options???

i = abc(i); // a different abc is implemented for each version.
 
 Are these valid concerns? Am I misunderstanding what you said?

They are valid concerns, you're just used to thinking in terms of the C preprocessor.
Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 
 #if __GNUC__
 #include <pthreads.h>
 #else
 #include <windows.h>
 #endif
 
 AAAARRRRRGGGGGHHHHH!!! That, my friends, is evil.

agreed.
 BCS wrote:
 version(RowMajor)
     x = table[i][j];
 else // RowMinor
     x = table[j][i];

int GetRow(i,j) { return table[i][j]; } The function gets inlined.

The equivalent of this doesn't:

version(RowMajor)
{
    x = 0;
    for(i=0;i<max;i++)
        x+=table[i][j];
}
else // RowMinor
{
    x = 0;
    for(i=0;i<max;i++)
        x+=table[j][i];
}

and IIRC this doesn't either:

version(X86) // avoid overflow in midpoint calculations
{
    asm
    {
        MOV low EAX;
        ADD EAX hi;
        RCR EAX 1;
        MOV EAX mid;
    }
}
else
{
    mid = (hi-low)/2 + low;
}
 ====You can't break across function calls

 switch(i)
 {
     case 1:
         version(Baz)
             if(baz) break;
         else
             break;
     case 2:

     ...// lots of un versioned code
 }

if (globalversion.baz && baz) break;

The point is to have all of the versioning done by the time you link; that leaves a runtime check for version info.
 
 ====lots of version combinations

 version(Foo) i = foo(i);
 version(Boo) i = boo(i);
 version(Fig) i = fig(i);
 version(Baz) i = baz(i);
 version(Bar) i = bar(i);    //32 options???

i = abc(i); // a different abc is implemented for each version.

All 32 possibilities??? What if there are 16 independent versions? That's 64K functions! And no, that is not an unlikely case; say "i" is a parse tree and we want to add different types of annotation depending on what features are enabled.
 
 Are these valid concerns? Am I misunderstanding what you said?

They are valid concerns, you're just used to thinking in terms of the C preprocessor.

I have barely ever used CPP for that type of thing so I wasn't ever used to thinking that way in the first place. <g>
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Walter Bright wrote:
 The function gets inlined.

The equivalent of this doesn't:

version(RowMajor)
{
    x = 0;
    for(i=0;i<max;i++)
        x+=table[i][j];
}
else // RowMinor
{
    x = 0;
    for(i=0;i<max;i++)
        x+=table[j][i];
}

and IIRC this doesn't either:

version(X86) // avoid overflow in midpoint calculations
{
    asm
    {
        MOV low EAX;
        ADD EAX hi;
        RCR EAX 1;
        MOV EAX mid;
    }
}
else
{
    mid = (hi-low)/2 + low;
}

I agree the inlining isn't perfect. But in the small number of cases where this matters, you can use the version=XXX command line switch.
 ====You can't break across function calls

 switch(i)
 {
     case 1:
         version(Baz)
             if(baz) break;
         else
             break;
     case 2:

     ...// lots of un versioned code
 }

if (globalversion.baz && baz) break;

The point is to have all of the versioning done by the time you link, that leaves a runtime check for version info.

Not if it's a const.
 ====lots of version combinations

 version(Foo) i = foo(i);
 version(Boo) i = boo(i);
 version(Fig) i = fig(i);
 version(Baz) i = baz(i);
 version(Bar) i = bar(i);    //32 options???

i = abc(i); // a different abc is implemented for each version.

All 32 possibilities??? What if there are 16 independent versions? that's 64K functions! And no that is not an unlikely case, say "i" is a parse tree and we want to add different types of annotation depending on what features are enabled.

I'd use bit flags instead of versions for such things. If I had a situation with 32*16 version combinations, I think I'd seriously consider reengineering what the program considers as a "version". After all, do you really want to generate 64,000 binaries? How are you going to test them <g>?
Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 BCS wrote:
 
 The point is to have all of the versioning done by the time you link, 
 that leaves a runtime check for version info.

Not if it's a const.

If it's a const, then it should be a static if:

static if(globalversion.baz)
    if(baz) break;
else
    break;

and that still doesn't cover the other case:

switch(i)
{
    version(foo)
        case 1:
            ...
    version(!foo)
        case 1:
}

or how about:

outer: while(...)
{
    for(...)
    {
        ....... // lots of nesting
        version(Foo)
            break outer;
        else
            continue outer;
    }
}
 All 32 possibilities??? What if there are 16 independent versions? 
 that's 64K functions! And no that is not an unlikely case, say "i" is 
 a parse tree and we want to add different types of annotation 
 depending on what features are enabled.

I'd use bit flags instead of versions for such things.

Runtime checks? That would require that the code to do the processing be compiled in for all cases: code bloat, etc. And structures would then need to have all the fields for all the features[*] even if they will never be used: data bloat, etc.

Or are you saying use "static if"? Then what is version for? In that case I can't think of any use AT ALL for version.

Strike that, versions can be specified on the command line so they could do this:

module globalversion;

version(Foo)
    const bool Foo = true;
else
    const bool Foo = false;

....

and then everything is done with static ifs.

[*] version isn't just for controlling code inclusion.

struct Part
{
    version(Foo) Foo foo;
    version(Boo) Boo boo;
    version(Fig) Fig fig;
    version(Baz) Baz baz;
    version(Bar) Bar bar;
}
 If I had a 
 situation with 32*16 version combinations, I think I'd seriously 
 consider reengineering what the program considers as a "version". After 
 all, do you really want to generate 64,000 binaries? How are you going 
 to test them <g>?

Most of the cases where I see version used I would expect to have several orders of magnitude more combinations possible than are ever actually built. What I would want versioning for would be to be able to arbitrarily select what I want from a set of functionalities. Then by specifying that on the command line, run a build (like with bud or a makefile that doesn't know jack about versions) and get what I want.

I'm at a loss as to what you envision for versioning.
Feb 07 2007
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
BCS wrote:
 Walter Bright wrote:
 BCS wrote:

 The point is to have all of the versioning done by the time you link, 
 that leaves a runtime check for version info.

Not if it's a const.

If it's a const then it should be a static if.

static if(globalversion.baz)
    if(baz) break;
else
    break;

and that still doesn't cover the other case

switch(i)
{
  version(foo)
    case 1:

  ...

  version(!foo)
    case 1:
}

or how about

outer: while(...)
{
 for(...)
 {
  ....... // lots of nesting
        version(Foo)
         break outer;
        else
         continue outer;
 }
}
 All 32 possibilities??? What if there are 16 independent versions? 
 that's 64K functions! And no that is not an unlikely case, say "i" is 
 a parse tree and we want to add different types of annotation 
 depending on what features are enabled.

I'd use bit flags instead of versions for such things.

Runtime checks? That would require that the code to do the processing be compiled in for all cases: code bloat, etc. And structures would then need to have all the fields for all the features[*] even if they will never be used: data bloat, etc.

Or are you saying use "static if"? Then what is version for? In that case I can't think of any use AT ALL for version.

Strike that, versions can be specified on the command line so they could do this:

module globalversion;

version(Foo)
    const bool Foo = true;
else
    const bool Foo = false;

....

and then everything is done with static ifs.

[*] version isn't just for controlling code inclusion.

struct Part
{
    version(Foo) Foo foo;
    version(Boo) Boo boo;
    version(Fig) Fig fig;
    version(Baz) Baz baz;
    version(Bar) Bar bar;
}
 If I had a situation with 32*16 version combinations, I think I'd 
 seriously consider reengineering what the program considers as a 
 "version". After all, do you really want to generate 64,000 binaries? 
 How are you going to test them <g>?

Most of the cases where I see version used I would expect to have several orders of magnitude more combinations possible than are ever actually built. What I would want versioning for would be to be able to arbitrarily select what I want from a set of functionalities. Then by specifying that on the command line, run a build (like with bud or a makefile that doesn't know jack about versions) and get what I want.

I'm at a loss as to what you envision for versioning.

All this discussion is moot. The feature exists now, use it how you like. If you want to use mixin(Config!(import(foo.conf))) to make your program n-dimensionally configurable, go ahead. If you agree with Walter's view of versioning, don't. I don't see that we need to have this discussion at all. Unless and until Walter restricts the kinds of files import will accept, go ahead and use the feature in any way you like.
Feb 07 2007
parent Walter Bright <newshound digitalmars.com> writes:
Kyle Furlong wrote:
 Unless and until Walter restricts the kinds of files import will accept, 
 go ahead and use the feature in any way you like.

I suppose it's like identifier naming conventions. There are ways to do it that I feel are wrong, and there are ways that are right. The D compiler doesn't care, it'll compile either. Coding style standards are something altogether different from language standards.

Remember the C thing:

#define BEGIN {
#define END }

...

BEGIN
    statements...
END

? That was pretty popular in the early days of C, when a lot of C programmers came from Pascal and tried to make C look like Pascal. Experience eventually showed that this was just not a good style, and is strongly frowned upon today. Even so, every C compiler will accept such code if you want to write it.
Feb 07 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Walter Bright wrote:
 BCS wrote:
 The point is to have all of the versioning done by the time you link, 
 that leaves a runtime check for version info.


if it's a const then it should be a static if.

That depends. if and static if have many differences in how they work. But if will do constant folding if it can as a matter of course.
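A small illustration of the distinction: with a const condition the test folds away, but unlike static if the dead branch must still be compilable code.

```d
import std.stdio;

const bool useFast = true; // a compile-time constant

int compute(int x)
{
    // The compiler can fold this test and drop the dead branch,
    // but both branches are still semantically checked -- static if,
    // by contrast, skips the false branch entirely.
    if (useFast)
        return x * 2;
    else
        return x + x;
}

void main()
{
    writefln(compute(21)); // 42 either way; only the codegen differs
}
```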
 switch(i)
 {
   version(foo)
     case 1:
 
   ...
 
   version(!foo)
     case 1:
 }

C'mon,

case 1:
    if (foo)
        ...
    else
        ...
 or how about
 
 outer: while(...)
 {
  for(...)
  {
   ....... // lots of nesting
         version(Foo)
          break outer;
         else
          continue outer;
  }
 }

If someone in my employ wrote such a thing, they'd have a lot of 'splaining to do. Version statements don't always have to be at the lowest level possible - they can always be moved out to higher levels, until you find the right abstraction spot for it.
 What I would want versioning for would be to be able to arbitrarily 
 select what I want from a set of functionalities. Then by specifying 
 that on the command line, run a build (like with bud or a makefile that 
 doesn't known jack about versions) and get what I want.
 
 I'm at a loss as to what you envision for versioning.

I think you view version as a scalpel, while I see it as more like an axe.
Feb 07 2007
next sibling parent kris <foo bar.com> writes:
Walter Bright wrote:
 I think you view version as a scalpel, while I see it as more like an axe.

Hear Hear! <g>
Feb 07 2007
prev sibling parent BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 
 
 I think you view version as a scalpel, while I see it as more like an axe.

Not the analogy I would have used but I think we are thinking of different uses.
Feb 07 2007
prev sibling parent reply BCS <BCS pathlink.com> writes:
Hasan Aljudy wrote:

 
 Why is that evil? I think it's actually a great idea. "versions" are a 
 sort of configuration that determines which code should be compiled and 
 which code shouldn't. Storing this configuration in a separate file 
 makes sense to me.
 

rename versions.txt to versions.d and use import versions; same effect, less confusion
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Hasan Aljudy wrote:
 
 Why is that evil? I think it's actually a great idea. "versions" are a 
 sort of configuration that determines which code should be compiled 
 and which code shouldn't. Storing this configuration in a separate 
 file makes sense to me.

rename versions.txt to versions.d and use import versions; same effect, less confusion

No, versions defined in an import do NOT affect the importer.
Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 BCS wrote:
 
 Hasan Aljudy wrote:

 Why is that evil? I think it's actually a great idea. "versions" are 
 a sort of configuration that determines which code should be compiled 
 and which code shouldn't. Storing this configuration in a separate 
 file makes sense to me.

rename versions.txt to versions.d and use import versions; same effect, less confusion

No, versions defined in an import do NOT affect the importer.

What?? Then how do you implement non-trivial version logic?

version(Foo)
{
    // Foo always needs Bar
    version = Bar;
    ...
}

... more of the like
Feb 07 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
BCS wrote:
 Walter Bright wrote:
 No, versions defined in an import do NOT affect the importer.

What?? Then how do you implement non-trivial version logic?

Do it in a makefile or use constants and static if :-p

Sean
Feb 07 2007
next sibling parent BCS <BCS pathlink.com> writes:
Sean Kelly wrote:
 BCS wrote:
 
 Walter Bright wrote:

 No, versions defined in an import do NOT affect the importer.

What?? Then how do you implement non-trivial version logic?

Do it in a makefile or use constants and static if :-p

Sean

If that is what is needed to make things work, then what is the version statement for?
Feb 07 2007
prev sibling parent Chad J <gamerChad _spamIsBad_gmail.com> writes:
Sean Kelly wrote:
 BCS wrote:
 
 Walter Bright wrote:

 No, versions defined in an import do NOT affect the importer.

What?? Then how do you implement non-trivial version logic?

Do it in a makefile or use constants and static if :-p Sean

If that's the case, then this is the point where I start to do "evil" things with D, like versioning with constants/static if/mixin(import()). I really dislike makefiles and such external languages that tell my program how to compile. I like to have all the information needed to compile a program be contained within the source itself. It makes compiling so much simpler.
Feb 07 2007
prev sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Wed, 07 Feb 2007 11:50:50 -0800, BCS wrote:

 No, versions defined in an import do NOT affect the importer.

What?? Then how do you implement non-trivial version logic?

version(Foo)
{
    // Foo always needs Bar
    version = Bar;
    ...
}

... more of the like

Not wishing to over-promote the Bud tool, but it does implement 'global' version statements.

pragma(export_version):
This allows you to set a global version identifier. DMD allows you to set a version identifier in your code, but the scope of that is only for the module it is set in. This pragma gives you the ability to declare a version identifier which is applied to all modules being compiled, and not just the 'current' module.

Example:

  version(build) pragma(export_version, Unix);
  version(build) pragma(export_version, Limited);

These lines will cause the compiler to have these version identifiers added to the command line switches, thus making them effectively global. You can list more than one identifier on the pragma statement ...

  version(build) pragma(export_version, Unix, Limited);

So in your example above ...

  version(build) pragma(export_version, Foo, Bar);

--
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Down with mediocrity!"
8/02/2007 10:19:53 AM
Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Derek Parnell wrote:
 On Wed, 07 Feb 2007 11:50:50 -0800, BCS wrote:
 
 So in your example above ...
 
   version(build) pragma(export_version, Foo, Bar);
 

Cool, but I think that would be

version(Foo) pragma(export_version, Bar);
Feb 07 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
BCS wrote:
 Derek Parnell wrote:
 On Wed, 07 Feb 2007 11:50:50 -0800, BCS wrote:

 So in your example above ...

   version(build) pragma(export_version, Foo, Bar);

Cool, but I think that would be version(Foo) pragma(export_version, Bar);

No, you need both version()s:

version(build) version(Foo) pragma(export_version, Bar);

or

version(Foo) version(build) pragma(export_version, Bar);

An unknown pragma is an error, so you need to put build-specific pragmas in a version(build).
Feb 08 2007
prev sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
BCS wrote:
 Hasan Aljudy wrote:
 
 Why is that evil? I think it's actually a great idea. "versions" are a 
 sort of configuration that determines which code should be compiled 
 and which code shouldn't. Storing this configuration in a separate 
 file makes sense to me.

rename versions.txt to versions.d and use import versions; same effect, less confusion

It doesn't work like that:
-----
urxae urxae:~/tmp$ cat test.d
import std.stdio;
import test2;

void main()
{
    version(Foo)
        writefln("version=Foo");
    else
        writefln("Not version=Foo");
}
urxae urxae:~/tmp$ cat test2.d
version=Foo;
urxae urxae:~/tmp$ dmd test.d test2.d -oftest && ./test
gcc test.o test2.o -o test -m32 -lphobos -lpthread -lm -Xlinker -L/home/urxae/opt/dmd/lib
Not version=Foo
-----
Version specifications ("version=X" lines) don't get imported.
Feb 07 2007
prev sibling parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Wed, 07 Feb 2007 06:20:51 +0300, Walter Bright <newshound digitalmars.com> wrote:

 What it's for is to:

 1) import data for a string constant
 2) import code that's in DSL (Domain Specific Language), not D, form.

Can you provide some examples of DSL the new D feature is intended for?

For example, what if I want to implement something like the RubyOnRails ActiveRecord DSL (http://api.rubyonrails.org/files/vendor/rails/activerecord/README.html)?

This is ordinary Ruby code:

class Account < ActiveRecord::Base
  validates_presence_of     :subdomain, :name, :email_address, :password
  validates_uniqueness_of   :subdomain
  validates_acceptance_of   :terms_of_service, :on => :create
  validates_confirmation_of :password, :email_address, :on => :create
end

I may wish to translate it into the following D fragment (with the exception that in D I must explicitly describe all table fields):

mixin(
  DActiveRecord!(
    `class Account
       field subdomain varchar(20)
       field name varchar(100)
       field email_address varchar(255)
       field password varchar(32)
       field term_of_service int
       validates_presence_of subdomain, name, email_address, password
       validates_uniqueness_of subdomain
       validates_acceptance_of terms_of_service, on => create
       validates_confirmation_of password, email_address, on => create
     end`
  ) );

The template DActiveRecord must parse the DSL string at compile time and produce another string with the Account class implementation in D, with all necessary semantic analysis and constraints (for example, it is impossible to use the name of a field in 'validates_presence_of' if it isn't described as a 'field').

Do you think this task can be done with D templates at compile time?

--
Regards,
Yauheni Akhotnikau
Feb 07 2007
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Yauheni Akhotnikau wrote:
 On Wed, 07 Feb 2007 06:20:51 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:
 
 What it's for is to:

 1) import data for a string constant
 2) import code that's in DSL (Domain Specific Language), not D, form.

Can you provide some examples of DSL the new D feature is intended for?

For example, what if I want to implement something like the RubyOnRails ActiveRecord DSL (http://api.rubyonrails.org/files/vendor/rails/activerecord/README.html)?

This is ordinary Ruby code:

class Account < ActiveRecord::Base
  validates_presence_of     :subdomain, :name, :email_address, :password
  validates_uniqueness_of   :subdomain
  validates_acceptance_of   :terms_of_service, :on => :create
  validates_confirmation_of :password, :email_address, :on => :create
end

I may wish to translate it into the following D fragment (with the exception that in D I must explicitly describe all table fields):

mixin(
  DActiveRecord!(
    `class Account
       field subdomain varchar(20)
       field name varchar(100)
       field email_address varchar(255)
       field password varchar(32)
       field term_of_service int
       validates_presence_of subdomain, name, email_address, password
       validates_uniqueness_of subdomain
       validates_acceptance_of terms_of_service, on => create
       validates_confirmation_of password, email_address, on => create
     end`
  ) );

The template DActiveRecord must parse the DSL string at compile time and produce another string with the Account class implementation in D, with all necessary semantic analysis and constraints (for example, it is impossible to use the name of a field in 'validates_presence_of' if it isn't described as a 'field').

Do you think this task can be done with D templates at compile time?

--
Regards,
Yauheni Akhotnikau

Almost certainly.
Feb 07 2007
parent "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Wed, 07 Feb 2007 13:17:48 +0300, Kyle Furlong <kylefurlong gmail.com>  
wrote:

 The template DActiveRecord must parse the DSL string at compile time and
 produce another string with the Account class implementation in D, with all
 necessary semantic analysis and constraints (for example, it is
 impossible to use the name of a field in 'validates_presence_of' if it isn't
 described as a 'field').

 Do you think this task can be done with D templates at compile time?


 Almost certainly.

:) I know, but at which price? ;)

--
Regards,
Yauheni Akhotnikau
Feb 07 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Yauheni Akhotnikau wrote:
 Do you think this task can be done with D templates at compile time?

Yes, that's exactly the intent. If this can't be made to work, we'll fix D so it can.
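For a taste of what that means with the current toolbox (recursive templates; no separate compilation stage), here is a toy sketch -- the Fields and IndexOf templates are invented for illustration, not part of any library -- that turns a comma-separated name list into field declarations at compile time:

```d
import std.stdio;

// Index of the first occurrence of c in s, or s.length if absent,
// computed by template recursion at compile time.
template IndexOf(char[] s, char c)
{
    static if (s.length == 0)
        const int IndexOf = 0;
    else static if (s[0] == c)
        const int IndexOf = 0;
    else
        const int IndexOf = 1 + IndexOf!(s[1 .. $], c);
}

// Turn "a,b,c" into "int a; int b; int c; " at compile time.
template Fields(char[] names)
{
    static if (names.length == 0)
        const char[] Fields = "";
    else static if (IndexOf!(names, ',') == names.length)
        const char[] Fields = "int " ~ names ~ "; ";
    else
        const char[] Fields = "int " ~ names[0 .. IndexOf!(names, ',')]
            ~ "; " ~ Fields!(names[IndexOf!(names, ',') + 1 .. $]);
}

struct Account
{
    mixin(Fields!("id,balance,flags")); // becomes three int fields
}

void main()
{
    Account a;
    a.id = 1;
    a.balance = 100;
    writefln(a.id + a.balance);
}
```

A real DActiveRecord would do the same kind of string surgery with a much larger grammar -- which is exactly where this style starts to hurt.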
Feb 07 2007
parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Wed, 07 Feb 2007 22:18:28 +0300, Walter Bright <newshound digitalmars.com> wrote:

 Yauheni Akhotnikau wrote:
 Do you think this task can be done with D templates at compile time?

 Yes, that's exactly the intent. If this can't be made to work, we'll fix
 D so it can.

Maybe I'm wrong, but I think that 'static if' and recursive templates (and other techniques available for metaprogramming at compile time) are not as powerful as ordinary D itself. So it is much more preferable to me to program such a DSL as a 'normal' D program. Maybe it is a good idea to make 'staged' compilation? For example, DSL transformation code is written as an ordinary D program. Then that code is compiled at the first compilation stage, then it is invoked by the compiler and the result is placed into the input to the next stage. Something like that:

// active_record.d
// DActiveRecord implementation.
module active_record;

// DSL transformator.
char[] DActiveRecord( char[] input ) { ... }

===

// demo.d
// DActiveRecord usage.
module demo;

import active_record;

// Function DActiveRecord will be called at compile time.
mixin( DActiveRecord( "class Account ... end" ) );

===

Two points must be highlighted here:

* code of DActiveRecord must be used only at compile time and thrown out from the resulting application code;
* multiple stages must be allowed: for example, DActiveRecord may depend on another DSL and so on.

--
Regards,
Yauheni Akhotnikau
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Yauheni Akhotnikau wrote:
 Maybe I'm wrong, but I think that 'static if' and recursive templates
 (and other techniques available for metaprogramming at compile time) are
 not as powerful as ordinary D itself. So it is much more preferable to
 me to program such a DSL as a 'normal' D program. Maybe it is a good idea
 to make 'staged' compilation? For example, DSL transformation code is
 written as an ordinary D program. Then that code is compiled at the first
 compilation stage, then it is invoked by the compiler and the result is
 placed into the input to the next stage.

The main difficulty is if the DSL needs to access symbols in the rest of the D code.
Feb 07 2007
parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
 The main difficulty is if the DSL needs to access symbols in the rest of  
 the D code.

I agree. But how do you think such things can be done in the current approach?
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Yauheni Akhotnikau wrote:
 The main difficulty is if the DSL needs to access symbols in the rest 
 of the D code.

 I agree. But how do you think such things can be done in the current approach?

int i = 4;
mixin("writefln(i);");

will print:

4
Feb 07 2007
parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Thu, 08 Feb 2007 10:06:04 +0300, Walter Bright <newshound digitalmars.com> wrote:

 Yauheni Akhotnikau wrote:
 The main difficulty is if the DSL needs to access symbols in the rest
 of the D code.

 But how do you think such things can be done in the current approach?

int i = 4;
mixin("writefln(i);");

will print:

4

I understand that :)

But suppose that the string "writefln(i)" has been produced by some DSL transformator:

int i = 4;
mixin( ProduceWritefln("i") );

The content of ProduceWritefln() needs no access to variable i -- it makes some string which is transformed to D code only in mixin(), not in ProduceWritefln. So the main task of ProduceWritefln is manipulating a string without access to any existing D code.

So my point is to allow ProduceWritefln to be ordinary D code which is executed at compilation time.

--
Regards,
Yauheni Akhotnikau
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Yauheni Akhotnikau wrote:
 On Thu, 08 Feb 2007 10:06:04 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:
 
 Yauheni Akhotnikau wrote:
 The main difficulty is if the DSL needs to access symbols in the 
 rest of the D code.

 But how do you think such things can be done in the current approach?

int i = 4;
mixin("writefln(i);");

will print:

4

I understand that :)

But suppose that the string "writefln(i)" has been produced by some DSL transformator:

int i = 4;
mixin( ProduceWritefln("i") );

The content of ProduceWritefln() needs no access to variable i -- it makes some string which is transformed to D code only in mixin(), not in ProduceWritefln. So the main task of ProduceWritefln is manipulating a string without access to any existing D code.

So my point is to allow ProduceWritefln to be ordinary D code which is executed at compilation time.

I see your point, but passing arguments "by name", which is what your example does, means the function has no access to whatever that name is - such as its type, size, etc.
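For contrast, a sketch of the alias-parameter route, where the template receives the symbol itself rather than a name string and so can inspect its type:

```d
import std.stdio;

int i = 4;

// An alias parameter carries the symbol, so the template can look at
// its type, size, etc. -- a "by name" string parameter cannot.
template SizeOf(alias sym)
{
    const int SizeOf = typeof(sym).sizeof;
}

void main()
{
    writefln("i = %s, size = %s", i, SizeOf!(i));
}
```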
Feb 08 2007
parent "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
  So my point is to allow ProduceWritefln to be ordinary D code which is
 executed at compilation time.

I see your point, but passing arguments "by name", which is what your example does, means the function has no access to whatever that name is - such as its type, size, etc.

Yes, but here we have two alternative approaches:

1) generation of strings with D code, without access to additional information (my sample above). This is the approach largely used in Ruby metaprogramming. And this is a very useful approach in many situations, i.e. it is not an ideal solution, but useful (in my experience);

2) manipulation of the syntax tree in the compilation phase, where the macro code has access to all information about the program's symbols (identifiers, types and so on). This approach is used in Nemerle, but there macros receive as input not a string, but ordinary Nemerle code.

So my problem with understanding the role of the new D constructs (mixin expressions and import expressions) is: if Ruby's approach with string generation is not appropriate, then how do we get something like Nemerle's approach if we use strings in mixin expressions?

--
Regards,
Yauheni Akhotnikau
Feb 08 2007
prev sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Yauheni Akhotnikau wrote:
 On Wed, 07 Feb 2007 22:18:28 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:
 
 Yauheni Akhotnikau wrote:
 Do you think this task can be done with D templates at compile time?

Yes, that's exactly the intent. If this can't be made to work, we'll fix D so it can.

Maybe I'm wrong, but I think that 'static if' and recursive templates (and other techniques available for metaprogramming at compile time) are not as powerful as ordinary D itself. So it is much more preferable to me to program such a DSL as a 'normal' D program. Maybe it is a good idea to make 'staged' compilation? For example, DSL transformation code is written as an ordinary D program. Then that code is compiled at the first compilation stage, then it is invoked by the compiler and the result is placed into the input to the next stage. Something like that:

// active_record.d
// DActiveRecord implementation.
module active_record;

// DSL transformator.
char[] DActiveRecord( char[] input ) { ... }

===

// demo.d
// DActiveRecord usage.
module demo;

import active_record;

// Function DActiveRecord will be called at compile time.
mixin( DActiveRecord( "class Account ... end" ) );

===

Two points must be highlighted here:

* code of DActiveRecord must be used only at compile time and thrown out from the resulting application code;
* multiple stages must be allowed: for example, DActiveRecord may depend on another DSL and so on.

--
Regards,
Yauheni Akhotnikau

I agree with the point that metaprogramming needs more control structures. static for, static foreach, static while, static do, static switch case, etc.
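Pending any such constructs, a recursive template can stand in for a static for today (a sketch; unroll is an invented name):

```d
import std.stdio;

// The body is instantiated once per index; i is a compile-time
// constant inside each instantiation.
void unroll(int i, int n)(int[] buf)
{
    static if (i < n)
    {
        buf[i] = i * i;          // the "loop body"
        unroll!(i + 1, n)(buf);
    }
}

void main()
{
    int[4] buf;
    unroll!(0, 4)(buf);
    writefln(buf[3]); // 9
}
```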
Feb 07 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kyle Furlong wrote:
 Yauheni Akhotnikau wrote:
 On Wed, 07 Feb 2007 22:18:28 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:

 Yauheni Akhotnikau wrote:
 Do you think this task can be done with D templates at compile time?

Yes, that's exactly the intent. If this can't be made to work, we'll fix D so it can.

Maybe I'm wrong, but I think that 'static if' and recursive templates (and other techniques available for metaprogramming at compile time) are not as powerful as ordinary D itself. So it is much more preferable to me to program such a DSL as a 'normal' D program. Maybe it is a good idea to make 'staged' compilation? For example, DSL transformation code is written as an ordinary D program. Then that code is compiled at the first compilation stage, then it is invoked by the compiler and the result is placed into the input to the next stage. Something like that:

// active_record.d
// DActiveRecord implementation.
module active_record;

// DSL transformator.
char[] DActiveRecord( char[] input ) { ... }

===

// demo.d
// DActiveRecord usage.
module demo;

import active_record;

// Function DActiveRecord will be called at compile time.
mixin( DActiveRecord( "class Account ... end" ) );

===

Two points must be highlighted here:

* code of DActiveRecord must be used only at compile time and thrown out from the resulting application code;
* multiple stages must be allowed: for example, DActiveRecord may depend on another DSL and so on.

--
Regards,
Yauheni Akhotnikau

I agree with the point that metaprogramming needs more control structures. static for, static foreach, static while, static do, static switch case, etc.

Static loops are not very useful without compile-time mutation.

Andrei
Feb 07 2007
parent BCS <BCS pathlink.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Kyle Furlong wrote:
 I agree with the point that metaprogramming needs more control 
 structures.

 static for, static foreach, static while, static do, static switch 
 case, etc.


static loops at any scope where static if is allowed
 
 Static loops are not very useful without compile-time mutation.
 
 Andrei

Maybe some sort of const loop would work

static for(const i = 0; i < 10; i++) // i const outside of ()
{
}

If the last two clauses were scoped as being at the end of the loop then it might even be able to access const values computed during the last pass.

const s = 356;
char[s] buf;
char[] buf2;

static for(const i = 1; cont; i <<= 1)
{
    static if(i < buf.length)
        const bool cont = true;
    else
    {
        const bool cont = false;
        buf2.length = i;
    }
}

Yeah, it's contrived (and brings up a pile of issues with regards to scoping) but it illustrates the point.
Feb 07 2007
prev sibling parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
 I agree with the point that metaprogramming needs more control  
 structures.

 static for, static foreach, static while, static do, static switch case,  
 etc.

I think it is not a good way, because this leads to two different languages in one: compile-time D (constructs from ordinary D but with the prefix 'static') and run-time D (ordinary D). If it is necessary to use compile-time D then the following difficulties arise:

* it is impossible to use external tools, such as compiler-compiler generators, which produce ordinary D code;
* it is hard to debug compile-time code -- how do you launch a debugger at compile time?
* it is necessary to construct compile-time unit tests;
* it is impossible to use existing D libraries for DSL transformations.

I use Ruby a lot and many metaprogramming things are done via creating strings with Ruby code and evaluating them by various 'eval' methods. It is a very simple method -- easy in writing, debugging and testing. And there aren't two different Rubys -- only one language.

Yet another example -- Nemerle (http://www.nemerle.org). But Nemerle uses a different technique -- the code of macros has access to the syntax tree at the compilation stage.

--
Regards,
Yauheni Akhotnikau
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Yauheni Akhotnikau wrote:
 I use Ruby a lot and many metaprogramming things are done via creating
 strings with Ruby code and evaluating them by various 'eval' methods. It
 is a very simple method -- easy in writing, debugging and testing. And
 there aren't two different Rubys -- only one language.

That's possible because Ruby is interpreted - its compilation environment is also the execution environment. But D is a statically compiled language, so there's a distinction between a compile time variable, and a runtime variable.
Feb 07 2007
parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Thu, 08 Feb 2007 10:08:29 +0300, Walter Bright  
<newshound digitalmars.com> wrote:

 Yauheni Akhotnikau wrote:
 I use Ruby a lot and many metaprogramming things are done via creating
 strings with Ruby code and evaluating them by various 'eval' methods. It
 is a very simple method -- easy in writing, debugging and testing. And
 there aren't two different Rubys -- only one language.

That's possible because Ruby is interpreted - its compilation environment is also the execution environment. But D is a statically compiled language, so there's a distinction between a compile time variable, and a runtime variable.

Yes, I understand that. But that is my point: in Ruby there are three steps:

1) use ordinary Ruby code to produce a string with more ordinary Ruby code;
2) translation of the string with ordinary Ruby code into bytecode;
3) run of the bytecode.

In D we now have steps 2) and 3) implemented: step 2) is the compilation phase. The question is: how to perform step 1)?

If it is necessary to use special constructs to build strings with ordinary D code then I will prefer to use pre-compile-time generation with the help of external tools.

For example, in the last four years I have used a home-made serialization framework for C++. It requires a special description of a serializable type in a special Data Definition Language, like this:

{type {extensible} handshake_t
    {attr m_version {of oess_1::uint_t}}
    {attr m_client_id {of std::string}}
    {extension
        {attr m_signature {of signature_setup_t}
            {default {c++ signature_setup_t()}}
        }
        {attr m_compression {of compression_setup_t}
            {default {c++ compression_setup_t()}}
        }
        {extension
            {attr m_traits
                {stl-map {key oess_1::int_t}}
                {of handshake_trait_shptr_t}
                {default {c++ std::map< int, handshake_trait_shptr_t >()}
                    {present_if {c++ m_traits.size()}}
                }
            }
        }
    }
}

The library for parsing such s-expressions is about 7K lines in C++ and 2.5K lines in Ruby. It would have a comparable size in D. But if I want to use such a DDL as a DSL in a mixin expression I must write two versions of the s-expression parsing -- for run-time and for compile-time :(

But if I can use ordinary D code at compile time then the situation is much better.

--
Regards,
Yauheni Akhotnikau
Feb 08 2007
parent reply janderson <askme me.com> writes:
Yauheni Akhotnikau wrote:
 On Thu, 08 Feb 2007 10:08:29 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:
 
 Yauheni Akhotnikau wrote:
 I use Ruby a lot, and much metaprogramming is done by creating 
 strings with Ruby code and evaluating them via the various 'eval' methods. 
 It is a very simple method -- easy to write, debug and test. 
 And there aren't two different Rubys -- only one language.

That's possible because Ruby is interpreted - its compilation environment is also the execution environment. But D is a statically compiled language, so there's a distinction between a compile time variable, and a runtime variable.

Yes, I understand that. But that is my point: in Ruby there are three steps:

1) use ordinary Ruby code to produce a string with more ordinary Ruby code;
2) translate the string of ordinary Ruby code into bytecode;
3) run the bytecode.

In D we now have steps 2) and 3) implemented: step 2) is the compilation phase. The question is: how to perform step 1)? If it is necessary to use special constructs to build strings of ordinary D code, then I will prefer pre-compile-time generation with the help of external tools.

For example, in the last four years I have used a home-made serialization framework for C++. It requires a special description of each serializable type in a special Data Definition Language, like this:

{type {extensible} handshake_t
	{attr m_version {of oess_1::uint_t}}
	{attr m_client_id {of std::string}}
	{extension
		{attr m_signature {of signature_setup_t}
			{default {c++ signature_setup_t()}}
		}
		{attr m_compression {of compression_setup_t}
			{default {c++ compression_setup_t()}}
		}
		{extension
			{attr m_traits {stl-map {key oess_1::int_t}}
				{of handshake_trait_shptr_t}
				{default {c++ std::map< int, handshake_trait_shptr_t >()}
					{present_if {c++ m_traits.size()}}
				}
			}
		}
	}
}

The library for parsing such s-expressions is about 7K lines in C++ and 2.5K lines in Ruby. It would have a comparable size in D. But if I want to use such DDL as a DSL in a mixin expression, I must write two versions of the s-expression parsing -- one for run time and one for compile time :(

But if I can use ordinary D code at compile time, then the situation is much better.

-- 
Regards,
Yauheni Akhotnikau

On the note of serialization, I think you could write something like this:

mixin(serialize(
"
	class A
	{
		...
		serialize(10) int x; //(10 = default)
		serialize B y;
		int z; //Not serialized
	}
"
));

The serialize would pick up the serialize attributes and re-arrange the code however it wanted (ie strip the serialize and put in a serialize function).

You probably could do it now; however, it will be much easier when a method of writing functions like serialize is invented.

-Joel
Feb 09 2007
next sibling parent "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Fri, 09 Feb 2007 20:13:14 +0300, janderson <askme me.com> wrote:

 On the note of serialization I think you could be able to write 
 something like this:

 mixin(serialize(
 "
 	class A
 	{
 		...
 		serialize(10) int x; //(10 = default)
 		serialize B y;
 		int z; //Not serialized
 	}
 "
 ));

 The serialize would pickup the serialize attributes and re-arrange the 
 code however it wanted (ie strip the serialize and put in a serialize 
 function)

 You probably could do it now, however it will be much easier when a 
 method of writing functions like serialize is invented.

This is not my case: I need to port the existing framework to D and use the existing DDL files for serializable types -- this is necessary for interoperability between existing C++ applications and future D applications.

There are the following steps in the current scheme for C++:

* a special macro must be used in the declaration of a serializable type (this macro hides the declarations of some special methods);
* a DDL file must be written with a description of the serializable types, their attributes and additional information;
* the DDL file must be processed by a special utility which produces a file with the implementation of the serialization/deserialization methods (those declarations are hidden by the special macro);
* the generated file is included into the C++ file with the serializable type's implementation by #include.

The DDL file provides the possibility to describe serialization of a type for different languages (C++, Java, D). So it cannot be replaced by some D-specific DSL.

Before D 1.005 I had wanted to generate a module with a template containing the serialization/deserialization code and use an ordinary mixin expression to mix that code into a D class. D 1.005 makes things simpler -- the result of the generation can be included into a D class by the new mixin expression. And if future versions make it possible to include the DDL file itself into a D class -- it will be great! Something like:

class Handshake : Serializable
{
	... // declarations of attributes and methods.
	mixin( DDLProcessor( import( "handshake.ddl" ) ) );
}

-- 
Regards,
Yauheni Akhotnikau
Feb 09 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Yauheni Akhotnikau wrote:
 On Thu, 08 Feb 2007 10:08:29 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:

 Yauheni Akhotnikau wrote:
 I use Ruby a lot, and much metaprogramming is done by creating 
 strings with Ruby code and evaluating them via the various 'eval' methods. 
 It is a very simple method -- easy to write, debug and test. 
 And there aren't two different Rubys -- only one language.

That's possible because Ruby is interpreted - its compilation environment is also the execution environment. But D is a statically compiled language, so there's a distinction between a compile time variable, and a runtime variable.

Yes, I understand that. But that is my point: in Ruby there are three steps:

1) use ordinary Ruby code to produce a string with more ordinary Ruby code;
2) translate the string of ordinary Ruby code into bytecode;
3) run the bytecode.

In D we now have steps 2) and 3) implemented: step 2) is the compilation phase. The question is: how to perform step 1)? If it is necessary to use special constructs to build strings of ordinary D code, then I will prefer pre-compile-time generation with the help of external tools.

For example, in the last four years I have used a home-made serialization framework for C++. It requires a special description of each serializable type in a special Data Definition Language, like this:

{type {extensible} handshake_t
	{attr m_version {of oess_1::uint_t}}
	{attr m_client_id {of std::string}}
	{extension
		{attr m_signature {of signature_setup_t}
			{default {c++ signature_setup_t()}}
		}
		{attr m_compression {of compression_setup_t}
			{default {c++ compression_setup_t()}}
		}
		{extension
			{attr m_traits {stl-map {key oess_1::int_t}}
				{of handshake_trait_shptr_t}
				{default {c++ std::map< int, handshake_trait_shptr_t >()}
					{present_if {c++ m_traits.size()}}
				}
			}
		}
	}
}

The library for parsing such s-expressions is about 7K lines in C++ and 2.5K lines in Ruby. It would have a comparable size in D. But if I want to use such DDL as a DSL in a mixin expression, I must write two versions of the s-expression parsing -- one for run time and one for compile time :(

But if I can use ordinary D code at compile time, then the situation is much better.

-- 
Regards,
Yauheni Akhotnikau

On the note of serialization, I think you could write something like this:

mixin(serialize(
"
	class A
	{
		...
		serialize(10) int x; //(10 = default)
		serialize B y;
		int z; //Not serialized
	}
"
));

The serialize would pick up the serialize attributes and re-arrange the code however it wanted (ie strip the serialize and put in a serialize function).

You probably could do it now; however, it will be much easier when a method of writing functions like serialize is invented.

Probably an easier way to go would be:

class A
{
   ...
   int x;
   B y;
   int z;
}

mixin(serialize!(A, "x=10, y"));

This way you leave to the compiler the task of parsing the class, and you only deal with the annotations. If you prefer not to centralize them (although for serialization it's probably better), you can write:

class A
{
  ...
  mixin(serialize!(int, x, 10));
  mixin(serialize!(B, y));
  int z;
}

Again, you let the compiler do the heavy lifting and you limit your annotation to the smallest notational unit.


Andrei
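A minimal sketch of what such a serialize-style template could generate (serializeField, its parameters, and the output format are invented for illustration; this is not a real library API, and the type name is passed as a string to sidestep the still-unfinished .stringof):

```d
import std.string;  // the generated code calls std.string.toString

// Hypothetical: build the source text of a serialize() function for one
// field of a type, as a compile-time string, to be pasted in with mixin().
template serializeField(char[] type, char[] field)
{
    const char[] serializeField =
        "char[] serialize(" ~ type ~ " obj) {"
        ~ ` return "` ~ field ~ `=" ~ std.string.toString(obj.` ~ field ~ `); }`;
}

class A { int x; }

mixin(serializeField!("A", "x"));
// generates: char[] serialize(A obj) { return "x=" ~ std.string.toString(obj.x); }
```

A real version would parse the comma-separated spec string ("x=10, y") at compile time and emit one chunk per field, but the string-concatenation mechanism is the same.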
Feb 09 2007
next sibling parent reply janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Yauheni Akhotnikau wrote:
 On Thu, 08 Feb 2007 10:08:29 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:


Probably an easier way to go would be:

class A
{
   ...
   int x;
   B y;
   int z;
}

mixin(serialize!(A, "x=10, y"));

I'm not sure I'd need a mixin for this.
 
 This way you leave to the compiler the task of parsing the class, and 
 you only deal with the annotations. If you prefer to not centralize them 
 (although for serialization it's probably better), you can write:
 
 class A
 {
   ...
   mixin(serialize!(int, x, 10));
   mixin(serialize!(B, y));
   int z;
 }
 
 Again, you let the compiler to the heavylifting and you limit your 
 annotation to the smallest notational unit.
 
 
 Andrei

Some good points, if you're going for optimizing the compiler. I was going for the cleanest syntax.

-Joel
Feb 09 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Yauheni Akhotnikau wrote:
 On Thu, 08 Feb 2007 10:08:29 +0300, Walter Bright 
 <newshound digitalmars.com> wrote:


Probably an easier way to go would be:

class A
{
   ...
   int x;
   B y;
   int z;
}

mixin(serialize!(A, "x=10, y"));

I'm not sure I'd need a mixin for this.

Oh, indeed. A template instantiation is all that's needed. (It would use mixins inside for parsing the spec string.) Andrei
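The compile-time parsing Andrei alludes to can be done with recursive templates over string constants; a toy sketch (the names are invented, and this is only one building block of a spec-string parser):

```d
// Toy compile-time parser: extract the text before the first comma.
// A parser for a spec string like "x=10, y" would be assembled from
// pieces like this, recursing on the remainder after each comma.
template UpToComma(char[] s, int i = 0)
{
    static if (i == s.length || s[i] == ',')
        const char[] UpToComma = s[0 .. i];
    else
        const char[] UpToComma = UpToComma!(s, i + 1);
}

static assert(UpToComma!("x=10,y") == "x=10");
```

Each instantiation advances one character, so everything happens during compilation; no run-time parsing code ends up in the binary.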
Feb 09 2007
prev sibling parent "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Sat, 10 Feb 2007 03:14:48 +0300, Andrei Alexandrescu (See Website For Email) <SeeWebsiteForEmail erdani.org> wrote:

 Probably an easier way to go would be:

 class A
 {
    ...
    int x;
    B y;
    int z;
 }

 mixin(serialize!(A, "x=10, y"));

 This way you leave to the compiler the task of parsing the class, and 
 you only deal with the annotations.

Yes. And this has yet another advantage -- it may be necessary to support several serialization methods at one time. For example, a home-made serialization method and ASN.1 BER serialization:

// home-made serialization.
mixin( serialize!(A, "x=10, y"));

// ASN.1 BER.
mixin( asn1ber_serialize!(A, "x,tag=0x1020,optional=10", "y,tag=0x1139"));

-- 
Regards,
Yauheni Akhotnikau
Feb 09 2007
prev sibling parent reply Serg Kovrov <kovrov no.spam> writes:
Ary Manzana wrote:
 But... I'm wondering which are the evil uses of it. For me it's now 
 almost impossible not to program with an IDE (big projects, I mean). At 
 least in Java. Maybe compile time stuff will make it such that an IDE 
 won't be needed anymore. But it's very hard for me to see that happening.

I believe by 'evil use', Walter meant evil use of mixins, not IDEs. Didn't he? -- serg.
Feb 06 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Serg Kovrov wrote:
 Ary Manzana wrote:
 But... I'm wondering which are the evil uses of it. For me it's now 
 almost impossible not to program with an IDE (big projects, I mean). 
 At least in Java. Maybe compile time stuff will make it such that an 
 IDE won't be needed anymore. But it's very hard for me to see that 
 happening.

I believe by 'evil use', Walter meant evil use of mixins, not IDE's. Isn't he?

Yes. IDEs aren't evil.
Feb 06 2007
next sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
== Quote from Walter Bright (newshound digitalmars.com)'s article
 Serg Kovrov wrote:
 Ary Manzana wrote:
 But... I'm wondering which are the evil uses of it. For me it's now
 almost impossible not to program with an IDE (big projects, I mean).
 At least in Java. Maybe compile time stuff will make it such that an
 IDE won't be needed anymore. But it's very hard for me to see that
 happening.

I believe by 'evil use', Walter meant evil use of mixins, not IDE's. Isn't he?


What about: http://vigor.sourceforge.net/ (*but maybe he doesn't count as an IDE.) Kevin
Feb 08 2007
prev sibling parent Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Serg Kovrov wrote:
 Ary Manzana wrote:
 But... I'm wondering which are the evil uses of it. For me it's now 
 almost impossible not to program with an IDE (big projects, I mean). 
 At least in Java. Maybe compile time stuff will make it such that an 
 IDE won't be needed anymore. But it's very hard for me to see that 
 happening.

I believe by 'evil use', Walter meant evil use of mixins, not IDE's. Isn't he?

Yes. IDEs aren't evil.

Sorry, I misunderstood.
Feb 13 2007
prev sibling parent BCS <ao pathlink.com> writes:
Reply to Ary,

 
 Second, this makes even harder to get good IDE support for D. You can
 have syntax coloring, and that's it. Autocompletion is going to be a
 very though part: the IDE must act as a compiler, as you say it, to
 figure out what the program will look like so that it can know what
 are the declarations available to the programmer.
 

Cool thought: build a compiler / IDE where the whole front end short of code-gen is used to read in code, and the editor actually edits the parsed code tree. Say goodbye to syntax errors, maybe even semantic ones as well, because you can't edit code into something that isn't correct... Er, maybe that wouldn't be such a good idea. Anyway, what is on the screen is actually a rendering of the parsed stuff.

Talk about fast compile times:

codetree.CodeGen(); // no parsing etc.

Also the same functions that do name scoping can do auto-complete (or the other way around).
Feb 06 2007
prev sibling next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright <newshound digitalmars.com>
wrote:

 http://www.digitalmars.com/d/changelog.html

Hmm. What would prevent someone from writing programs like:

   writef(import("/etc/passwd"));

and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

-- 
Best regards,
 Vladimir
mailto:thecybershadow gmail.com
Feb 06 2007
next sibling parent reply Hasan Aljudy <hasan.aljudy gmail.com> writes:
Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright <newshound digitalmars.com>
wrote:
 
 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

Well, theoretically nothing prevents someone from writing a virus in C++ and tricking someone into compiling and running it.
Feb 06 2007
parent "Andy Knowles" <nole_z hotmail.com> writes:
"Hasan Aljudy" <hasan.aljudy gmail.com> wrote in message 
news:eqbu2l$1s7t$1 digitaldaemon.com...
 Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright 
 <newshound digitalmars.com> wrote:

 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

Well, theoretically nothing prevents someone from writing a virus in C++ and trick someone to compile and run it.

But you don't need them to run Vladimir's example, just compile it. They send you back the compiled program ("Thanks for compiling it for me!") and you run it. The passwords are embedded in the binary. C++ can't do this quite as easily.
Feb 07 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright <newshound digitalmars.com>
wrote:
 
 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

How would the bad person see the output of the compilation? Andrei
Feb 06 2007
next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Wed, 07 Feb 2007 09:51:17 +0200, Andrei Alexandrescu (See Website For Email)
<SeeWebsiteForEmail erdani.org> wrote:

 Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright <newshound digitalmars.com>
wrote:

 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

How would the bad person see the output of the compilation?

In this particular example, the idea is to trick someone into compiling a program for you and sending you back the binary, under a pretext similar to "I don't have a D compiler" or "I can't/don't want to install the D compiler on my system". The fact that the compiler can embed arbitrary files from the user's filesystem in the resulting binary isn't obvious to a person familiar with compilers in general and not expecting such behavior from a tool which is supposed to work with just a given set of source files.

-- 
Best regards,
 Vladimir
mailto:thecybershadow gmail.com
Feb 07 2007
prev sibling parent reply Jeff McGlynn <d jeffrules.com> writes:
On 2007-02-06 23:51:17 -0800, "Andrei Alexandrescu (See Website For 
Email)" <SeeWebsiteForEmail erdani.org> said:

 Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright 
 <newshound digitalmars.com> wrote:
 
 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

How would the bad person see the output of the compilation? Andrei

By asking someone else to compile code for you and send back the executable. Some services exist for compiling C/C++ on the web and this concern would prevent people from doing the same with D. -- Jeff McGlynn
Feb 10 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Jeff McGlynn wrote:
 On 2007-02-06 23:51:17 -0800, "Andrei Alexandrescu (See Website For 
 Email)" <SeeWebsiteForEmail erdani.org> said:
 
 Vladimir Panteleev wrote:
 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright 
 <newshound digitalmars.com> wrote:

 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:

    writef(import("/etc/passwd"));

 and trick someone into compiling this program for them (under the pretext that they don't have a D compiler, for example) to steal the user list (or the contents of any other file with a known absolute or relative path on the victim's system)?

 IMO, the compiler should at least issue a warning when importing a file not located in/under the source file's directory. Although, if the source emits a lot of pragma(msg) messages, the warning might get cluttered by those - or this might be concealed in a large program with a lot of files. A better security-wise solution is to disallow importing files outside the source file's directory, unless specified by the user on the command-line.

How would the bad person see the output of the compilation? Andrei

By asking someone else to compile code for you and send back the executable. Some services exist for compiling C/C++ on the web and this concern would prevent people from doing the same with D.

I see. This is a new scenario indeed. Given previous experience with TeX, it looks like the compiler switch approach could take care of it. Andrei
Feb 10 2007
parent Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Jeff McGlynn wrote:
 By asking someone else to compile code for you and send back the 
 executable.  Some services exist for compiling C/C++ on the web and 
 this concern would prevent people from doing the same with D.

I see. This is a new scenario indeed. Given previous experience with TeX, it looks like the compiler switch approach could take care of it.

The switch will be in the next update. C/C++ doesn't really have this problem with #include, since that will only read files that consist of preprocessor tokens. Arbitrary files will not work.
Feb 10 2007
prev sibling parent reply "Yauheni Akhotnikau" <eao197 intervale.ru> writes:
On Wed, 07 Feb 2007 09:30:31 +0300, Vladimir Panteleev <thecybershadow gmail.com> wrote:

 On Tue, 06 Feb 2007 06:54:18 +0200, Walter Bright 
 <newshound digitalmars.com> wrote:

 http://www.digitalmars.com/d/changelog.html

 Hmm. What would prevent someone from writing programs like:
    writef(import("/etc/passwd"));
 and trick someone to compile this program for them (under the pretext 
 that they don't have a D compiler, for example) to steal the user list 
 (or the contents of any other file with a known absolute or relative 
 path on the victim's system)?

I don't think that preventing the inclusion of private data during compilation is a task for the D compiler. That private data can be stolen even without the new import expression -- it is only necessary to have the ordinary Unix utilities and make available. Consider the following sample:

=== Makefile ===
grab_password: grab_password.c
	gcc -o grab_password grab_password.c

grab_password.c: file_content.h

file_content.h: Makefile
	echo 'const char file_content[] = "\' > file_content.h
	uuencode /etc/passwd password-info | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | sed 's/^\(.*\)/\1\\n\\/' >> file_content.h
	echo '";' >> file_content.h

=== grab_password.c ===
#include <stdio.h>
#include "file_content.h"

int main()
{
	printf( "file content is: %s\n", file_content );
}

If someone sent these two files to you and asked you to compile and return the result, you would send them your password file without knowing it.

And things are yet more interesting -- many projects use build tools built on top of dynamic languages (SCons uses Python, Rake and Mxx_ru use Ruby, MPC (from ACE) uses Perl for generating makefiles, OpenSSL uses Perl for configuration (at least on the MSWin platform)). In such a situation the build script can grab whatever it wants without telling you a word.

-- 
Regards,
Yauheni Akhotnikau
Feb 13 2007
parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 13 Feb 2007 12:39:22 +0200, Yauheni Akhotnikau <eao197 intervale.ru>
wrote:

 I don't think that prevention of including some private data during
compilation is a task of D compiler. That private data can be stolen even
without new import expression -- it is only necessary to have ordinal unix
utilities and make available. Consider the following sample:

By definition, makefiles are much more dangerous than source code files. A makefile runs actual commands on your system, so it's obvious that it may contain something like `rm -rf /' in it. A compiler's purpose, by definition, is to read human-readable source code and to produce machine-readable executable/byte-code.

See my other posts in this thread for the reason why I believe there must be a strong distinction between utilities that may or may not perform potentially dangerous operations, no matter the input files.

-- 
Best regards,
 Vladimir
mailto:thecybershadow gmail.com
Feb 13 2007
prev sibling next sibling parent Henning Hasemann <hhasemann web.de> writes:
On Wed, 07 Feb 2007 08:30:31 +0200
"Vladimir Panteleev" <thecybershadow gmail.com> wrote:

 Hmm. What would prevent someone from writing programs like:
    writef(import("/etc/passwd"));
 and trick someone to compile this program for them (under the pretext that
they don't have a D compiler, for example) to steal the user list (or the
contents of any other file with a known absolute or relative path on the
victim's system)?
 
 IMO, the compiler should at least issue a warning when importing a file not
located in/under the source file's directory. Although, if the source emits a
lot of pragma(msg) messages, the warning might get cluttered by those - or this
might be concealed in a large program with a lot of files. A better
security-wise solution is to disallow importing files outside the source file's
directory, unless specified by the user on the command-line.

I would even go one step further and limit import() to just importing files in the -I path. That would have a few implications:

- Except when you have weird import paths, import() could not be made to include any given "evil" file; it should then be as secure or insecure as C's #include.
- One would not be able to include any given file on the filesystem, but I think that shouldn't be a problem, since most of the time the DSL files should lie somewhere around in the source tree.
- You would use import() much like import, which at least sounds more consistent. For example: import("foo.d"); would include the same file as import foo; would import. Note that the semantics are still different and that import() needs the file extension (as it might often be used to include non-D files) where import does not.

Alternative: allow import() to include only files with a predefined extension; then one could use the package.subpackage.module syntax as well.

Questions that arise to me while writing this:

* Since import() does not do the same thing as import ...;, shouldn't it be renamed to something else? (Say, include.)
* If one restricted import() to the import paths, would it be sensible to allow subdirectories to be specified? (Say import("files_written_in_my_dsl/foo.dsl").) Note that this would still not allow directories "above" the import paths.

Henning
Feb 07 2007
prev sibling next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Walter Bright" <newshound digitalmars.com> wrote in message 
news:eq91hs$fvm$1 digitaldaemon.com...
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

I am amazed at the mixin/import features. The ability we have to arbitrarily generate code at compile time is.. something I never would have imagined would come until D 2.0.

I've been thinking about it, though, and I'm not sure if it's 100% the best way to do it. The major advantage, of course, is that it doesn't introduce tons of new syntax for building up code. However, I'm not sure I really like the idea of building up strings either. It seems.. preprocessor-y.

Don't get me wrong, I'm still giddy with the prospects of this new feature. But something about it just seems off.
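The string-building style being discussed looks like this in practice. Property here is an invented name, a toy sketch rather than anything from the 1.005 release:

```d
// Build a private field plus getter/setter as one compile-time string,
// then paste it into a class with mixin().
template Property(char[] type, char[] name)
{
    const char[] Property =
        "private " ~ type ~ " _" ~ name ~ ";\n"
        ~ type ~ " " ~ name ~ "() { return _" ~ name ~ "; }\n"
        ~ "void " ~ name ~ "(" ~ type ~ " v) { _" ~ name ~ " = v; }";
}

class Point
{
    mixin(Property!("int", "x"));  // generates _x, x(), x(int)
    mixin(Property!("int", "y"));
}
```

The concatenation-of-fragments approach is exactly what gives it the preprocessor-y feel: the generated code is never visible in the source, only the recipe for it.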
Feb 07 2007
next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message 
news:eqcmo3$2u17$1 digitaldaemon.com...

Just wanted to add that something I find severely lacking is that there's no 
way to get a pretty (usable, non-mangled) string representation of a type. 
Instead we have to write large template libraries to accomplish something 
that the compiler does _all the time_.  This means it's currently not 
possible to do something like

template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = Type.nameof ~ " " ~ name ~ ";";
}

mixin(MakeVariable!(int, "x"));

Instead we have to use a template library to demangle the name:

import ddl.meta.nameof;

template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = prettytypeof!(Type) ~ " " ~ name ~ ";";
}

mixin(MakeVariable!(int, "x"));


:\ 
Feb 07 2007
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Jarrett Billingsley wrote:
 "Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message 
 news:eqcmo3$2u17$1 digitaldaemon.com...
 
 Just wanted to add that something I find severely lacking is that there's no 
 way to get a pretty (usable, non-mangled) string representation of a type. 
 Instead we have to write large template libraries to accomplish something 
 that the compiler does _all the time_.  This means it's currently not 
 possible to do something like
 
 template MakeVariable(Type, char[] name)
 {
     const char[] MakeVariable = Type.nameof ~ " " ~ name ~ ";";
 }
 
 mixin(MakeVariable!(int, "x"));
 
 Instead we have to use a template library to demangle the name:
 
 import ddl.meta.nameof;
 
 template MakeVariable(Type, char[] name)
 {
     const char[] MakeVariable = prettytypeof!(Type) ~ " " ~ name ~ ";";
 }
 
 mixin(MakeVariable!(int, "x"));
 
 
 :\ 
 
 

Well, actually you can do that, with the unannounced (in the changelog) .stringof property ( http://www.digitalmars.com/d/property.html ) :

----

template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
}

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
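For instance (a hypothetical usage sketch, assuming .stringof yields "int" for the int type, which the basic cases seem to do):

```d
// .stringof turns a type into its source representation,
// so the template can build a declaration string directly.
template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
}

// MakeVariable!(int, "x") evaluates to the string "int x;",
// so this declares a variable x of type int:
mixin(MakeVariable!(int, "x"));
```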
Feb 11 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the changelog) 
 .stringof property ( http://www.digitalmars.com/d/property.html ) :
 
 ----
 
 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.
Feb 11 2007
next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

What do you mean? If it doesn't work right yet, why was it released already? (basic cases seem to be working)

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 11 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Bruno Medeiros wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

What do you mean? If it doesn't work right yet, why was it released already? (basic cases seem to be working)

It doesn't hurt anything to be there.
Feb 11 2007
parent reply Tomas Lindquist Olsen <tomas famolsen.dk> writes:
Walter Bright wrote:

 Bruno Medeiros wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the
 changelog) .stringof property (
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

What do you mean? If it doesn't work right yet, why was it released already? (basic cases seem to be working)

It doesn't hurt anything to be there.

Is there any chance we could have alias template parameters become more generic in 1.006? Together with stringof, the new mixins could become very powerful... with much nicer syntax:

template Add(alias L, alias R)
{
    const Add = L.stringof ~ " + " ~ R.stringof;
}

Right now this template only works when the parameters are single symbols. It would be nice to be able to do:

Add!(42, foo);
Add!(foo+bar, foo-bar);

etc...
Feb 12 2007
parent Walter Bright <newshound digitalmars.com> writes:
Tomas Lindquist Olsen wrote:
 Is there any chance we could have alias template parameters become more
 generic in 1.006 ? together with stringof, the new mixins could become
 very powerful...
 
 with much nicer syntax.
 
 template Add(alias L, alias R)
 {
         const Add = L.stringof ~ " + " ~ R.stringof;
 }
 
 right now this template only works when the parameters are single symbols.
 would be nice to be able to do:
 
 Add!(42,foo);
 Add!(foo+bar,foo-bar);

What you're asking for we've been calling "alias expressions". It'll get here, just not in the next version.
Feb 13 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

<trashes meta.nameof> It was a piece of code I was particularly proud of. Ah well. </trashes>

It seems that 90% of the metaprogramming code I've ever written has been made obsolete by being incorporated into the core language. My 'workarounds' file went from 16 entries to zero.

But the ability to do it for an expression as well is quite exciting; it seems that this could easily supersede lazy parameters. So I'm not complaining <g>.
Feb 13 2007
next sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

<trashes meta.nameof> It was a piece of code I was particularly proud of. Ah well. </trashes> It seems that 90% of the metaprogramming code I've ever written has been made obsolete by being incorporated into the core language. My 'workarounds' file went from 16 entries to zero. But the ability to do it for an expression as well is quite exciting; it seems that this could easily supersede lazy parameters. So I'm not complaining <g>.

Heh, you should see what happened to Pyd when tuples were introduced to the language. In fact, thanks to the magic of Subversion, you can!

http://dsource.org/projects/pyd/changeset/45

Ohh, lookit all the huge, pretty red sections.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Feb 13 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.
Feb 13 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.

It has nothing to do with the conversation, but your statement reminded me... I saw a show not too long ago (may have been MythBusters) where the penetration depth of various types of ammunition was tested in water. Modern bullets had poor penetration because they tended to tumble upon entering the water, thus creating drag (max injury depth was less than 5 feet). And high-velocity rounds tended to fragment upon entry and had even shallower penetration (around 3 feet). Finally, a muzzle-loading smoothbore was tested and it had by far the deepest penetration of any weapon tested. So if you're being shot at in a lake, I suppose you don't want the shooter using Boost ;-)

Sean
Feb 13 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.

It has nothing to do with the conversation, but your statement reminded me... I saw a show not too long ago (may have been MythBusters) where the penetration depth of various types of ammunition were tested in water. Modern bullets had poor penetration because they tended to tumble upon entering the water, thus creating drag (max injury depth was less than 5 feet). And high-velocity rounds tended to fragment upon entry and had even shallower penetration (around 3 feet). Finally, a muzzle-loading smoothbore was tested and it had by far the deepest penetration of any weapon tested. So if you're being shot at in a lake, I suppose you don't want the shooter using Boost ;-)

Sounds like that is about the ammunition, not the gun. The worst thing about a muzzle-loader in combat was you had to *stand up* to reload it. Can you imagine the guts it takes to do that, when your every nerve screams at you to push your face in the dirt?
Feb 13 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.

It has nothing to do with the conversation, but your statement reminded me... I saw a show not too long ago (may have been MythBusters) where the penetration depth of various types of ammunition were tested in water. Modern bullets had poor penetration because they tended to tumble upon entering the water, thus creating drag (max injury depth was less than 5 feet). And high-velocity rounds tended to fragment upon entry and had even shallower penetration (around 3 feet). Finally, a muzzle-loading smoothbore was tested and it had by far the deepest penetration of any weapon tested. So if you're being shot at in a lake, I suppose you don't want the shooter using Boost ;-)

Sounds like that is about the ammunition, not the gun. The worst thing about a muzzle-loader in combat was you had to *stand up* to reload it. Can you imagine the guts it takes to do that, when your every nerve screams at you to push your face in the dirt?

It wasn't an issue until people got sick of standing in two lines facing one another :-) I wonder how quickly the shift to guerilla tactics changed the development of firearm technology. After discovering they had a chance to actually survive a battle, I can't imagine soldiers were terribly keen on giving up their cover every ten seconds or so. Sean
Feb 13 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 The worst thing about a muzzle-loader in combat was you had to *stand 
 up* to reload it. Can you imagine the guts it takes to do that, when 
 your every nerve screams at you to push your face in the dirt?

It wasn't an issue until people got sick of standing in two lines facing one another :-) I wonder how quickly the shift to guerilla tactics changed the development of firearm technology. After discovering they had a chance to actually survive a battle, I can't imagine soldiers were terribly keen on giving up their cover every ten seconds or so.

I think the firearm technology drove the tactics. The two-line approach worked only because guns were inaccurate and very slow loading; the idea was that you could reach the enemy lines before they could get off more than one or two shots. With the advent of longer range, more accurate rifles, this turned into a slaughter. Breech-loading repeaters finished it off.
Feb 13 2007
parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 The worst thing about a muzzle-loader in combat was you had to *stand 
 up* to reload it. Can you imagine the guts it takes to do that, when 
 your every nerve screams at you to push your face in the dirt?

It wasn't an issue until people got sick of standing in two lines facing one another :-) I wonder how quickly the shift to guerilla tactics changed the development of firearm technology. After discovering they had a chance to actually survive a battle, I can't imagine soldiers were terribly keen on giving up their cover every ten seconds or so.

I think the firearm technology drove the tactics. The two line approach worked only because guns were inaccurate and very slow loading, the idea was you could reach the enemy lines before they could get off more than one or two shots. With the advent of longer range, more accurate rifles, this turned into a slaughter. Breech loading repeaters finished it off.

Huh. That makes a lot of sense. Sean
Feb 14 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.

It has nothing to do with the conversation, but your statement reminded me... I saw a show not too long ago (may have been MythBusters) where the penetration depth of various types of ammunition were tested in water. Modern bullets had poor penetration because they tended to tumble upon entering the water, thus creating drag (max injury depth was less than 5 feet). And high-velocity rounds tended to fragment upon entry and had even shallower penetration (around 3 feet). Finally, a muzzle-loading smoothbore was tested and it had by far the deepest penetration of any weapon tested. So if you're being shot at in a lake, I suppose you don't want the shooter using Boost ;-)

Sounds like that is about the ammunition, not the gun. The worst thing about a muzzle-loader in combat was you had to *stand up* to reload it. Can you imagine the guts it takes to do that, when your every nerve screams at you to push your face in the dirt?

Actually the notion of taking cover, now ubiquitously known even by civilians (due to cinematography), is (amazingly) a recent development. Until the end of WWI, soldiers were not jumping to the ground under fire. They were trained to think that they'd have a better chance by moving forward and storming the enemy. It's pretty much how a million soldiers died, mowed down by machine guns at the Somme. When I was in the military, there was a big detailed poster displaying the difference in shooting angle offered by a standing vs. a crouched vs. a lying man.

Andrei
Feb 15 2007
prev sibling parent Pragma <ericanderton yahoo.removeme.com> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 Ohh, lookit all the huge, pretty red sections.

My goal is to make the Boost implementation code look as obsolete as a muzzle-loading smoothbore.

LOL. After reading that, I was immediately given the mental image of a colonial minuteman, trying desperately to hammer out C++ code on a set of Jacquard loom cards.

-- 
- EricAnderton at yahoo
Feb 13 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

<trashes meta.nameof> It was a piece of code I was particularly proud of. Ah well. </trashes> It seems that 90% of the metaprogramming code I've ever written has been made obsolete by being incorporated into the core language. My 'workarounds' file went from 16 entries to zero. But the ability to do it for an expression as well is quite exciting; it seems that this could easily supersede lazy parameters. So I'm not complaining <g>.

The big problem with .stringof is the following:

alias Foo.Bar.Abc T;
typedef int Abc;

const char[] s = T.stringof;
typeof(mixin(s)) x;

s is given the string "Abc". So, when the mixin evaluates s, it resolves to the local declaration of Abc, not the fully qualified one.
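Spelled out with comments, the failure mode would look something like this (a hypothetical sketch; the module Foo.Bar and its Abc are made-up names):

```d
// Suppose some other module Foo.Bar declares a type Abc.

alias Foo.Bar.Abc T;          // T refers to Foo.Bar.Abc
typedef int Abc;              // a local Abc that shadows it

const char[] s = T.stringof;  // s is just "Abc", not "Foo.Bar.Abc"

// When s is mixed back in, name lookup starts in *this* scope,
// so it finds the local typedef instead of Foo.Bar.Abc:
typeof(mixin(s)) x;           // x gets the type of the local Abc
```

Because the string carries no scope information, the round trip through mixin re-resolves the name in the wrong context.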
Feb 13 2007
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Don Clugston wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

<trashes meta.nameof> It was a piece of code I was particularly proud of. Ah well. </trashes> It seems that 90% of the metaprogramming code I've ever written has been made obsolete by being incorporated into the core language. My 'workarounds' file went from 16 entries to zero. But the ability to do it for an expression as well is quite exciting; it seems that this could easily supersede lazy parameters. So I'm not complaining <g>.

The big problem with .stringof is the following: alias Foo.Bar.Abc T; typedef int Abc; const char[] s = T.stringof; typeof(mixin(s)) x; s is given the string "Abc". So, when the mixin evaluates s, it resolves to the local declaration of Abc, not the fully qualified one.

Isn't it always going to be true that the scope where stringof is applied could be different from where it is mixed in? That's why I figured that the concept of symbolnameof (the minimal descriptor in the scope) was different from qualifiednameof (valid in any scope). Of course, you've got access to much more information than I did, so perhaps there's a cleaner solution.
Feb 14 2007
parent Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 Isn't it always going to be true that the scope where stringof is 
 applied, could be different from where it is mixed in?

Yes.
 That's why I figured that the concept of symbolnameof (the minimal 
 descriptor in the scope) was different from qualifiednameof (valid in 
 any scope). Of course, you've got access to much more information than I 
 did, so perhaps there's a cleaner solution.

It obviously needs more work <g>.
Feb 15 2007
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Don Clugston wrote:
 Walter Bright wrote:
 Bruno Medeiros wrote:
 Well, actually you can do that, with the unannounced (in the 
 changelog) .stringof property ( 
 http://www.digitalmars.com/d/property.html ) :

 ----

 template MakeVariable(Type, char[] name)
 {
   const char[] MakeVariable = Type.stringof ~ " " ~ name ~ ";";
 }

It's unannounced because it doesn't work right yet.

<trashes meta.nameof> It was a piece of code I was particularly proud of. Ah well. </trashes> It seems that 90% of the metaprogramming code I've ever written has been made obsolete by being incorporated into the core language. My 'workarounds' file went from 16 entries to zero. But the ability to do it for an expression as well is quite exciting; it seems that this could easily supersede lazy parameters. So I'm not complaining <g>.

The big problem with .stringof is the following: alias Foo.Bar.Abc T; typedef int Abc; const char[] s = T.stringof; typeof(mixin(s)) x; s is given the string "Abc". So, when the mixin evaluates s, it resolves to the local declaration of Abc, not the fully qualified one.

Erm, shouldn't T.stringof be "T" and not "Abc" nor even "Foo.Bar.Abc"? -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
Bruno Medeiros wrote:
 Erm, shouldn't T.stringof be "T" and not "Abc" nor even "Foo.Bar.Abc"?

There wouldn't be any point to that.
Feb 15 2007
prev sibling parent "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Bruno Medeiros" <brunodomedeiros+spam com.gmail> wrote in message 
news:eqnkde$2l2e$1 digitalmars.com...
 Well, actually you can do that, with the unannounced (in the changelog) 
 .stringof property ( http://www.digitalmars.com/d/property.html ) :

Damn! I stopped reading this thread when it got so big, so I completely missed your reply.. and I finally found out about stringof from Frits. This is a sweet, sweet feature. I was actually going to say something about the ability to get the string of an arbitrary expression as well, but that's covered too. What a feature :)
Feb 15 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Jarrett Billingsley wrote:
 "Walter Bright" <newshound digitalmars.com> wrote in message 
 news:eq91hs$fvm$1 digitaldaemon.com...
 Fixes many bugs, some serious.

 Some new goodies.

 http://www.digitalmars.com/d/changelog.html

 http://ftp.digitalmars.com/dmd.1.005.zip

I am amazed at the mixin/import features. The ability we have to arbitrarily generate code at runtime is.. something I never would have imagined would come until D2.0.

At compile time you mean.
 I've been thinking about it, though, and I'm not sure if it's 100% the best 
 way to do it.  The major advantage, of course, is that it doesn't introduce 
 tons of new syntax for building up code.  However, I'm not sure I really 
 like the idea of building up strings either.  It seems.. preprocessor-y.
 
 Don't get me wrong, I'm still giddy with the prospects of this new feature. 
 But something about it just seems off. 

The ability to transform true code trees will come with D's macro abilities. But that's a few months ahead at least. Andrei
Feb 07 2007
parent reply Ivan Senji <ivan.senji_REMOVE_ _THIS__gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 The ability to transform true code trees will come with D's macro 
 abilities. But that's a few months ahead at least.

I can't believe that so many hours have passed since this post and no one has asked for some details! Is this something you and Walter talked about? (Because it sounds too good to be true.) What will those macros be like? Examples...?
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Ivan Senji wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 The ability to transform true code trees will come with D's macro 
 abilities. But that's a few months ahead at least.

I can't believe that so many hours have passed since this post and no one has asked for some details? Is this something you and Walter talked about? (Because it sounds too good to be true.) What will those macros be like? Examples...?

Nothing at the moment but a lot of:

.... <insert magic here> voila!

Andrei has been mostly hard at work on the const/inout/scope problem. The current way D does it is hackish, and Andrei feels (and I agree) that bringing rigor to it will make D a considerably stronger language.
Feb 07 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Ivan Senji wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 The ability to transform true code trees will come with D's macro 
 abilities. But that's a few months ahead at least.

I can't believe that so many hours have passed since this post and no one has asked for some details? Is this something you and Walter talked about? (Because it sounds too good to be true.) What will those macros be like? Examples...?

Nothing at the moment but a lot of: .... <insert magic here> voila!

That's pretty much what I had in mind :o). Actually, I did post about that in a thread in the digitalmars.d group. Search for the title "Idea : Expression Type".
 Andrei has been mostly hard at work on the const/inout/scope problem. 
 The current way D does it is hackish, and Andrei feels (and I agree) 
 that bringing rigor to it will make D a considerably stronger language.

There's good progress on that! Andrei
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei has been mostly hard at work on the const/inout/scope problem. 
 The current way D does it is hackish, and Andrei feels (and I agree) 
 that bringing rigor to it will make D a considerably stronger language.

There's good progress on that!

Yup, I think the end result will be something we can all be proud of.
Feb 07 2007
parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei has been mostly hard at work on the const/inout/scope problem. 
 The current way D does it is hackish, and Andrei feels (and I agree) 
 that bringing rigor to it will make D a considerably stronger language.

There's good progress on that!

Yup, I think the end result will be something we can all be proud of.

This is great news!
Feb 07 2007
prev sibling parent Kevin Bealer <kevinbealer gmai.com> writes:
== Quote from Ivan Senji (ivan.senji_REMOVE_ _THIS__gmail.com)'s article
 Andrei Alexandrescu (See Website For Email) wrote:
 The ability to transform true code trees will come with D's macro
 abilities. But that's a few months ahead at least.

I can't believe that so many hours have passed since this post and no one has asked for some details?

I was going to, but I'm still righting the mental furniture that got flipped by the last one.

Most of the template meta-stuff seems to be done using recursive patterns (maybe because it is easier to solve the "compile time halting problem" by limiting recursive depth?). So since parse trees are usually thought of as recursion friendly, I imagine it would allow you to take a class as a template argument in the way that you can currently take a tuple, and do either foreach or C[i..j] style processing of fields, subclasses, etc.

A lower level abstraction would be to actually take a parse tree and just hand it to you, and you could iterate over it, something like casting a "char[][int]*" to "AA*". But this kind of low level approach would mean the compiler would forever need to support the same parse tree layouts, which is undesirable.

I guess the question depends on how people actually use this kind of thing; how do people use LISP macros in real world code? It's sort of a language-writing language, but for some reason it's hard to find non-toy examples. (On the other hand, when I do, it's hard to read them.)

Kevin
Feb 07 2007
prev sibling next sibling parent Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Walter Bright wrote:
 Fixes many bugs, some serious.
 
 Some new goodies.
 
 http://www.digitalmars.com/d/changelog.html
 
 http://ftp.digitalmars.com/dmd.1.005.zip

Wow, this is just sweet! Thanks, Walter! :D

I'll see what kind of abuse can be done with the new mixin stuff ;)

Still, we must find a way to reduce the memory requirements of evaluating more complex templates - as at one point, they are going to contain pretty arbitrary code.

-- 
Tomasz Stachowiak
Feb 08 2007
prev sibling parent John Reimer <terminal.node gmail.com> writes:
On Fri, 09 Feb 2007 12:12:24 -0800, Andrei Alexandrescu (See Website For
Email) wrote:

 Andreas Kochenburger wrote:
 Kevin Bealer wrote:
 Charles D Hixson wrote:
 But the central feature of FORTH is that the compiler and runtime can 
 be made mind-bogglingly small.  I think the run time speed for a naive 
 interpretation is probably somewhere between C and interpreted bytecode.

  From this page about tiny4th: http://www.seanet.com/~karllunt/tiny4th

 "The run-time engine takes up less than 1K of code space and the 
 p-codes are so dense that you can get a lot of robot functionality in 
 just 2K."

Before someone thinks, Forth is only a play-thing, see http://www.forth.com/

There's an extra comma in there that pretty much changes the meaning of the sentence :o). Andrei

Funny! :D -JJR
Feb 09 2007