
digitalmars.D.announce - DMD 1.021 and 2.004 releases

reply Walter Bright <newshound1 digitalmars.com> writes:
Mostly bug fixes for CTFE. Added library switches at Tango's request.

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.021.zip

http://www.digitalmars.com/d/changelog.html
http://ftp.digitalmars.com/dmd.2.004.zip
Sep 05 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.

Awesome! And great job, as always. Sean
Sep 05 2007
prev sibling next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:

 Mostly bug fixes for CTFE. Added library switches at Tango's request.

Great! And now for GDC to follow suit ;) -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Sep 05 2007
parent reply Gregor Richards <Richards codu.org> writes:
Lars Ivar Igesund wrote:
 Walter Bright wrote:
 
 Mostly bug fixes for CTFE. Added library switches at Tango's request.

Great! And now for GDC to follow suit ;)

GDC followed suit roughly twenty years before GDC was written. - Gregor Richards
Sep 05 2007
parent Sean Kelly <sean f4.ca> writes:
Gregor Richards wrote:
 
 GDC followed suit roughly twenty years before GDC was written.

Now there's a paradox for you... ;-) Sean
Sep 05 2007
prev sibling next sibling parent reply BLS <nanali nospam-wanadoo.fr> writes:
Walter Bright schrieb:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

"Multiple Module static constructors/destructors allowed." Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multiple module constructor in action would help. Sorry about my ignorance and thanks in advance. Bjoern
Sep 05 2007
parent reply Sean Kelly <sean f4.ca> writes:
BLS wrote:
 Walter Bright schrieb:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.

 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip

 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

Multiple Module static constructors/destructors allowed. Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multi. module constructor in action would help. Sorry about my ignorance and thanks in advance.

I thought they were already supported, but here's an example:

    module MyModule;

    static  this() { printf( "ctor A\n" ); }
    static  this() { printf( "ctor B\n" ); }
    static ~this() { printf( "dtor A\n" ); }
    static ~this() { printf( "dtor B\n" ); }

Sean
Sep 05 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 I thought they were already supported, but here's an example:
 
 
     module MyModule;
 
     static  this() { printf( "ctor A\n" ); }
     static  this() { printf( "ctor B\n" ); }
     static ~this() { printf( "dtor A\n" ); }
     static ~this() { printf( "dtor B\n" ); }

They were already supported, they just didn't work :-(
Sep 05 2007
parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 I thought they were already supported, but here's an example:


     module MyModule;

     static  this() { printf( "ctor A\n" ); }
     static  this() { printf( "ctor B\n" ); }
     static ~this() { printf( "dtor A\n" ); }
     static ~this() { printf( "dtor B\n" ); }

They were already supported, they just didn't work :-(

Oh! Then why not make this change to the 1.0 release as well? Sean
Sep 05 2007
parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I thought they were already supported, but here's an example:


     module MyModule;

     static  this() { printf( "ctor A\n" ); }
     static  this() { printf( "ctor B\n" ); }
     static ~this() { printf( "dtor A\n" ); }
     static ~this() { printf( "dtor B\n" ); }

They were already supported, they just didn't work :-(

Oh! Then why not make this change to the 1.0 release as well? Sean

Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.) -- Chris Nicholson-Sauls
Sep 05 2007
parent reply Sean Kelly <sean f4.ca> writes:
Chris Nicholson-Sauls wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I thought they were already supported, but here's an example:


     module MyModule;

     static  this() { printf( "ctor A\n" ); }
     static  this() { printf( "ctor B\n" ); }
     static ~this() { printf( "dtor A\n" ); }
     static ~this() { printf( "dtor B\n" ); }

They were already supported, they just didn't work :-(

Oh! Then why not make this change to the 1.0 release as well?

Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.)

My mistake. I thought this was only in the 2.0 changelog but it's in both. Sean
Sep 05 2007
parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Sean Kelly wrote:
 Chris Nicholson-Sauls wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I thought they were already supported, but here's an example:


     module MyModule;

     static  this() { printf( "ctor A\n" ); }
     static  this() { printf( "ctor B\n" ); }
     static ~this() { printf( "dtor A\n" ); }
     static ~this() { printf( "dtor B\n" ); }

They were already supported, they just didn't work :-(

Oh! Then why not make this change to the 1.0 release as well?

Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.)

My mistake. I thought this was only in the 2.0 changelog but it's in both. Sean

Pardon me while I do my happy dance. -- Chris Nicholson-Sauls
Sep 05 2007
prev sibling parent reply BLS <nanali nospam-wanadoo.fr> writes:
Sean Kelly schrieb:
 BLS wrote:
 Walter Bright schrieb:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.

 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip

 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

Multiple Module static constructors/destructors allowed. Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multi. module constructor in action would help. Sorry about my ignorance and thanks in advance.

I thought they were already supported, but here's an example:

    module MyModule;

    static  this() { printf( "ctor A\n" ); }
    static  this() { printf( "ctor B\n" ); }
    static ~this() { printf( "dtor A\n" ); }
    static ~this() { printf( "dtor B\n" ); }

Sean

Hm, okay. I am able to call a static ctor/dtor from a foreign module, but the semantic association I have regarding a static module constructor is different. Something like loading one or more modules at compile time: pick up some ctor info from module A (containing A.X and A.Y) and from module B (containing B.C), and init ALL the good stuff in A and B from C, which in your example is MyModule. However, I have no idea what advantages this feature really has. Bjoern
Sep 05 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BLS wrote:
 However. I have no idea which advantages this feature really has.

In a long module, you can organize the static constructor code in a way that makes sense, rather than being forced to put it all in one place. It also makes it practical to mixin code that requires static construction.
Sep 05 2007
parent Chad J <gamerChad _spamIsBad_gmail.com> writes:
Walter Bright wrote:
 BLS wrote:
 However. I have no idea which advantages this feature really has.

In a long module, you can organize the static constructor code in a way that makes sense, rather than being forced to put it all in one place. It also makes it practical to mixin code that requires static construction.

Badass, it is good to see this rough edge get smoothed. Thank you Walter! I think I used to have some hack where I would wrap each static ctor in its own class, and somehow this would make it work. I'm not sure if that's correct or not though.
Sep 05 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

What's std.hiddenfunc for? I looked at the code but it didn't help. --bb
Sep 05 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 What's std.hiddenfunc for?  I looked at the code but it didn't help.

It's an exception thrown when an overridden function that still exists in the vtbl[] gets called anyway.
Sep 05 2007
prev sibling next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
On Wed, 5 Sep 2007, Walter Bright wrote:

 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" link on http://www.digitalmars.com/d/changelog.html still points to 2.002.

Similarly, though at least labeled, the 1.0 changelog still points to 1.016, now 5 versions behind. Now that the 1.0 code line is no longer receiving anything other than bug fixes, is there really a need to distinguish between the latest 1.0 release and some other really stable 1.0 release?

http://www.digitalmars.com/d/1.0/dcompiler.html#Win32 still lists all the 1.00 (not 1.x) mirrors. The same with http://www.digitalmars.com/d/1.0/dcompiler.html#linux. There's a non-1.0-scoped version of the page, http://www.digitalmars.com/d/dcompiler.html, that at first glance looks identical to the 1.0/dcompiler.html page, with the same problems.

I know I've brought some of these things up at least a handful of times in the past... can they finally be cleaned up, pretty please?

Thanks, Brad
Sep 05 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Brad Roberts wrote:
 The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" link 
 on http://www.digitalmars.com/d/changelog.html still points to 2.002.
 
 Similarly, though at least labeled, the 1.0 changelog still points to 
 1.016, now 5 versions behind?  Now that the 1.0 code line is no longer 
 receiving anything other than bug fixes, is there really the need to 
 distinguish between the latest 1.0 release and some other really stable 
 1.0 release?

I think there is still a need, as there's always a risk I break something with a new release, even if it's just bug fixes.
 http://www.digitalmars.com/d/1.0/dcompiler.html#Win32 still lists all the 
 1.00 (not 1.x) mirrors.  The same with 
 http://www.digitalmars.com/d/1.0/dcompiler.html#linux.  There's a non 1.0 
 scoped version of the page, http://www.digitalmars.com/d/dcompiler.html, 
 that at first glance looks identical to the 1.0/dcompiler.html page with 
 the same problems.

I'll fix it.
Sep 05 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Brad Roberts wrote:
 The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" 
 link on http://www.digitalmars.com/d/changelog.html still points to 
 2.002.

 Similarly, though at least labeled, the 1.0 changelog still points to 
 1.016, now 5 versions behind?  Now that the 1.0 code line is no longer 
 receiving anything other than bug fixes, is there really the need to 
 distinguish between the latest 1.0 release and some other really 
 stable 1.0 release?

I think there is still a need, as there's always a risk I break something with a new release, even if it's just bug fixes.

1.020 seemed to be stable. Like 1.016, it was around for a long time, and therefore particularly well tested. There were some great bug fixes in 1.018 and 1.019. There's that substantive change about .init which happened in 1.017. If that's permanent, it'd be good to stop further development relying on the old behaviour. I think we need a policy for when the 'stable version' should be updated. Also, I don't see any mention of delimited string literals in the changelog. <g>
Sep 06 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Don Clugston wrote:
 
 Also, I don't see any mention of delimited string literals in the 
 changelog. <g>

Delimited string literals? Sean
Sep 06 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 1.020 seemed to be stable. Like 1.016, it was around for a long time, 
 and therefore particularly well tested. There were some great bug fixes 
 in 1.018 and 1.019.

Done.
 There's that substantive change about .init which happened in 1.017. If 
 that's permanent, it'd be good to stop further development relying on 
 the old behaviour.
 I think we need a policy for when the 'stable version' should be updated.
 
 
 Also, I don't see any mention of delimited string literals in the 
 changelog. <g>

Fixed.
Sep 06 2007
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 Don Clugston wrote:
 
 Also, I don't see any mention of delimited string literals in the
 changelog. <g>
 


where's the docs?
Sep 06 2007
next sibling parent reply Nathan Reed <nathaniel.reed gmail.com> writes:
BCS wrote:
 Reply to Walter,
 
 Don Clugston wrote:

 Also, I don't see any mention of delimited string literals in the
 changelog. <g>


where's the docs?

The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.html Thanks, Nathan Reed
Sep 06 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Nathan Reed wrote:
 BCS wrote:
 Reply to Walter,

 Don Clugston wrote:

 Also, I don't see any mention of delimited string literals in the
 changelog. <g>


where's the docs?

The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.html

And the lecture slides have more info, obviously. Sean
Sep 06 2007
next sibling parent BCS <ao pathlink.com> writes:
Reply to Sean,

 Nathan Reed wrote:
 
 BCS wrote:
 
 Reply to Walter,
 
 Don Clugston wrote:
 
 Also, I don't see any mention of delimited string literals in the
 changelog. <g>
 



http://www.digitalmars.com/d/lex.html

Sean

I wish Walter would put more links from the change log into the docs (and more labels in the docs)
Sep 06 2007
prev sibling parent Nathan Reed <nathaniel.reed gmail.com> writes:
Sean Kelly wrote:
 Nathan Reed wrote:
 BCS wrote:
 Reply to Walter,

 Don Clugston wrote:

 Also, I don't see any mention of delimited string literals in the
 changelog. <g>


where's the docs?

The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.html

And the lecture slides have more info, obviously.

Actually, the docs on the web go into a bunch more detail than the lecture slides :) Thanks, Nathan Reed
Sep 06 2007
prev sibling parent reply "Stewart Gordon" <smjg_1998 yahoo.com> writes:
"Nathan Reed" <nathaniel.reed gmail.com> wrote in message 
news:fbpfek$2qpb$1 digitalmars.com...
<snip>
 The docs for delimited string literals are now at 
 http://www.digitalmars.com/d/lex.html

One thing for sure: these things are going to be a nightmare to syntax-highlight! Stewart.
Sep 09 2007
parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Stewart Gordon wrote:
 "Nathan Reed" <nathaniel.reed gmail.com> wrote in message 
 news:fbpfek$2qpb$1 digitalmars.com...
 <snip>
 
 The docs for delimited string literals are now at 
 http://www.digitalmars.com/d/lex.html

One thing for sure: these things are going to be a nightmare to syntax-highlight! Stewart.

I've already updated the Pygments syntax highlighter with this new syntax. They are not fundamentally any harder to highlight than the existing nesting /+ +/ comments. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Sep 10 2007
parent reply "Stewart Gordon" <smjg_1998 yahoo.com> writes:
"Kirk McDonald" <kirklin.mcdonald gmail.com> wrote in message 
news:fc2u8u$21d9$1 digitalmars.com...
<snip> [delimited string literals]
 I've already updated the Pygments syntax highlighter with this new syntax. 
 They are not fundamentally any harder to highlight than the existing 
 nesting /+ +/ comments.

Maybe. But still, nested comments are probably supported by more code editors than such an unusual feature as delimited strings. Stewart.
Sep 10 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Stewart Gordon wrote:
 Maybe.  But still, nested comments are probably likely to be supported 
 by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 10 2007
next sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 Stewart Gordon wrote:
 
 Maybe.  But still, nested comments are probably likely to be supported 
 by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1" -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
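To see what that regex matches, here's a standalone sketch using plain Python `re`. Note that `re.DOTALL` is added here as an assumption on my part so `.*?` can cross the inner newlines; Pygments' own flag setup may differ.

```python
import re

# The heredoc-string rule quoted above, compiled standalone.
# re.DOTALL is assumed so '.*?' can span multiple inner lines.
heredoc = re.compile(r'q"([a-zA-Z_]\w*)\n.*?\n\1"', re.DOTALL)

src = 'auto s = q"EOS\nhello\nworld\nEOS";'
m = heredoc.search(src)
print(m.group(0))  # the whole literal, from q"EOS to the closing EOS"
print(m.group(1))  # the delimiter identifier captured by the backreference: EOS
```

The backreference `\1` is what forces the closing identifier to match the opening one, which is exactly the part many highlighting engines cannot express.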
Sep 10 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Kirk McDonald wrote:
 Walter Bright wrote:
 The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

I meant the:

    q{ these must be valid D tokens
       { and brackets nest }
       /* ignore this } */
    };
Sep 10 2007
parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 
 Walter Bright wrote:

 The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };

Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlighting

A note about this lexer: It uses a combination of regular expressions, a state machine, and a stack. When a regex matches, you usually just specify that the matching text should be highlighted as such-and-such a token. In some cases, though, you want to push a particular state onto the stack, which will then swap in a different set of regexes, until such time as this new state pops itself off the stack. Also, it is of course written in Python, so the code below is Python code.

For instance, the rule for the "heredoc" strings, which I mentioned previously, looks like this:

    (r'q"([a-zA-Z_]\w*)\n.*?\n\1"', String),

That is, it takes the chunk of text matched by that regex, and highlights it as a string. The entry point for token strings is the following rule:

    (r'q{', String, 'token_string'),

Or: Highlight the token "q{" as a string, then push the 'token_string' state onto the stack. (This third argument is optional, and most of the rules do not have it.) The 'token_string' state looks like this:

    'token_string': [
        (r'{', Punctuation, 'token_string_nest'),
        (r'}', String, '#pop'),
        include('root'),
    ],
    'token_string_nest': [
        (r'{', Punctuation, '#push'),
        (r'}', Punctuation, '#pop'),
        include('root'),
    ],

include('root') tells it to include the contents of the 'root' state. (Which is the state the D lexer starts out in, which has all of the regular tokens in it.) '#push' means to push the current state onto the stack again, and '#pop' means to pop off of the stack. By putting the rules for '{' and '}' before the 'root' state, we override their default behavior. (Which is just to be highlighted as punctuation.)

These two nearly-identical states are needed because we only want to highlight '}' as a string when it is the last one in the token string. When '}' is closing a nested brace, we want to highlight it as regular punctuation, and pop off of the stack.

Even if the above is gibberish to you, I still assert that it's quite straightforward, and indeed is very much like how the nesting /+ +/ comments were already highlighted. (Albeit without the include('root') call, and only one extra state.) All of this is built on the Pygments lexer framework. All I had to do was define the big list of regexes, and the occasional extra state (as I've outlined above). -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Sep 10 2007
next sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Kirk McDonald wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:

 Walter Bright wrote:

 The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };

Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlighting

That's pretty danged nifty. Any chance, however, that it could apply a slight background color to the token string? -- Chris Nicholson-Sauls
Sep 10 2007
parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Chris Nicholson-Sauls wrote:
 Kirk McDonald wrote:
 
 Walter Bright wrote:

 Kirk McDonald wrote:

 Walter Bright wrote:

 The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };

Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlighting

That's pretty danged nifty. Any chance, however, that it could apply a slight background color to the token string? -- Chris Nicholson-Sauls

Not really. It would require defining a new token which highlights the background for every existing token, and then updating all of the styles to provide coloring for that background... Pygments simply isn't set up to do that kind of manipulation. In fact, it would even be harder to highlight the whole thing as a string, than to highlight it the way it is now. (Unless I simply ignored the limitation that its contents consist only of valid tokens.) -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Sep 11 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Kirk McDonald wrote:
 Those are also fairly easy. The Pygments lexer only highlights the 
 opening q{ and the closing }. The tokens inside of the string are 
 highlighted normally.

Sweet!
Sep 11 2007
prev sibling parent reply Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
Kirk McDonald wrote:

 Walter Bright wrote:
 Stewart Gordon wrote:
 
 Maybe.  But still, nested comments are probably likely to be supported
 by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools:

a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are.

b) some lexers handle both nested comments and delimited strings, but all delimiters must be enumerated in the language definition.

Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?
Sep 11 2007
next sibling parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
Jari-Matti Mäkelä wrote:
 Kirk McDonald wrote:
 
 Walter Bright wrote:
 Stewart Gordon wrote:

 Maybe.  But still, nested comments are probably likely to be supported
 by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"

It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handle both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?

D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. Otherwise the lexical structure wouldn't even be context free, and it would be a nightmare for automatically generated lexers. Therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.
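A minimal sketch of that two-pass idea (names and helper are hypothetical, not Pygments code): scan the literal with a regular pattern, then validate the heredoc delimiter in a postprocess.

```python
import re

# Pass 1: the scanner only needs the regular pattern q"[^"]*".
LITERAL = re.compile(r'q"([^"]*)"', re.DOTALL)

def delimiters_match(body: str) -> bool:
    # Pass 2 (context sensitive): for the heredoc form, the first line
    # is an identifier and the body must end with that same identifier
    # on its own line. isidentifier() is a rough stand-in for D's
    # identifier rules.
    lines = body.split('\n')
    return (len(lines) >= 2
            and lines[0].isidentifier()
            and lines[-1] == lines[0])

src = 'auto foo = q"EOS\nhello\nEOS";'
m = LITERAL.search(src)
print(delimiters_match(m.group(1)))  # True
```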
Sep 11 2007
next sibling parent reply Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
Jascha Wetzel wrote:

 D's delimited strings can (luckily) be scanned with regular languages,
 because the enclosing double quotes are required. else the lexical
 structure wouldn't even be context free and a nightmare for
 automatically generated lexers.

Right, thanks.
 therefore you can match q"[^"]*" and check the delimiters during
 (context sensitive) semantic analysis.

But e.g. syntax highlighting needs the semantic info to change the style of the text within the delimiters. The analyser also needs to check whether the two delimiters match. Like I said above, if the tool doesn't provide enough support, you're stuck. I haven't searched for all corner cases, but wasn't the old grammar scannable and highlightable with plain regular expressions (except the nested comments of course).
Sep 11 2007
parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
Jari-Matti Mäkelä wrote:
 Jascha Wetzel wrote:
 
 D's delimited strings can (luckily) be scanned with regular languages,
 because the enclosing double quotes are required. else the lexical
 structure wouldn't even be context free and a nightmare for
 automatically generated lexers.

Right, thanks.
 therefore you can match q"[^"]*" and check the delimiters during
 (context sensitive) semantic analysis.

But e.g. syntax highlighting needs the semantic info to change the style of the text within the delimiters. The analyser also needs to check whether the two delimiters match. Like I said above, if the tool doesn't provide enough support, you're stuck. I haven't searched for all corner cases, but wasn't the old grammar scannable and highlightable with plain regular expressions (except the nested comments of course).

Before, the lexical structure was context free because of nested comments and floats of the form "[0-9]+\."; the latter can be matched with regexps if they support lookaheads, though.

If you stick to the specs verbatim, q"EOS...EOS" as a whole is a string literal. Assuming that all tokens/lexemes are atomic, a lexer can't "look inside" the string literal. From that point of view, the lexical structure is still context free. If possible, I'd add a thin wrapper around an automatically generated lexer that checks the delimiters in a postprocess.
Sep 11 2007
parent reply Jari-Matti Mäkelä <jmjmak utu.fi.invalid> writes:
Jascha Wetzel wrote:

 before, the lexical structure was context free because of nested
 comments and floats of the form "[0-9]+\.". the latter can be matched
 with regexps if they support lookaheads, though.

Nested comments don't necessarily need much more than a constant size counter, either.
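For what it's worth, the counter idea can be sketched in a few lines (illustrative only, not taken from any particular lexer):

```python
def skip_nested_comment(src: str, i: int) -> int:
    """Given i at the opening '/+', return the index just past the
    matching '+/', tracking nesting with a single depth counter."""
    depth = 0
    while i < len(src):
        if src.startswith('/+', i):
            depth += 1
            i += 2
        elif src.startswith('+/', i):
            depth -= 1
            i += 2
            if depth == 0:
                return i
        else:
            i += 1
    raise ValueError("unterminated nested comment")

rest = '/+ outer /+ inner +/ still outer +/int x;'
end = skip_nested_comment(rest, 0)
print(rest[end:])  # int x;
```

This is exactly what a pure regular expression cannot do, which is the point being made about nesting pushing the lexer beyond regular languages.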
 if you stick to the specs verbatim, q"EOS...EOS" as a whole is a string
 literal. assuming that all tokens/lexemes are atomic, a lexer can't
 "look inside" the string literal. from that point of view, the lexical
 structure it's still context free.

But does a simple tool have to be so complex?
 if possible, i'd add a thin wrapper around an automatically generated
 lexer that checks the delimiters in a postprocess.

That's a bit harder with e.g. closed source tools.

Btw, is this a bug?

    auto foo = q"EOS
    EOS
    EOS";

doesn't compile with dmd 2.004. Or is the " always supposed to follow \n + matching identifier?
Sep 11 2007
parent Jascha Wetzel <"[firstname]" mainia.de> writes:
Jari-Matti Mäkelä wrote:
 Jascha Wetzel wrote:
 
 before, the lexical structure was context free because of nested
 comments and floats of the form "[0-9]+\.". the latter can be matched
 with regexps if they support lookaheads, though.

Nested comments don't necessarily need much more than a constant size counter, either.

it makes the lexer context free, though, and it therefore cannot be implemented with regular expressions only.
 Btw, is this a bug?
 
 auto foo = q"EOS
 EOS
 EOS";
 
 doesn't compile with dmd 2.004. Or is the " always supposed to follow \n +
 matching identifier?

yep, since a non-nesting delimiter may only appear twice.
Sep 11 2007
prev sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Jascha Wetzel wrote:
Jari-Matti Mäkelä wrote:
 
 Kirk McDonald wrote:

 Walter Bright wrote:

 Stewart Gordon wrote:

 Maybe.  But still, nested comments are probably likely to be supported
 by more code editors than such an unusual feature as delimited 
 strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex:

q"([a-zA-Z_]\w*)\n.*?\n\1"

It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools:

a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are.

b) some lexers handle both nested comments and delimited strings, but all delimiters must be enumerated in the language definition.

Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?

D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. else the lexical structure wouldn't even be context free and a nightmare for automatically generated lexers. therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.

Is the following a valid string?

q"/foo " bar/"

The grammar does not make it clear. The Pygments lexer treats it as though it is, under the assumption that the string continues until the first matching /" is found.

Walter also said, in another branch of the thread, that this is not valid:

q"/foo/bar/"

Since it isn't all /that/ hard to match these examples, I wonder why they are disallowed. Just to simplify the lexer that much more?

And, ah! I have found a bug in the Pygments lexer already:

auto a = q"/foo/";
auto b = q"/bar/";

Everything from the opening of the first string literal to the end of the second is highlighted. Oops. I have a fix for the lexer, dsource will be updated at some point.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Sep 11 2007
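[For reference, the Pygments rule quoted above relies on a backreference (\1 re-matching the opening identifier), which is exactly the feature that puts it beyond strict regular expressions. A quick illustrative check in Python; the variable names are invented, not from the Pygments source:]

```python
import re

# Token-delimited (heredoc-style) string: q"Ident\n ... \nIdent"
# The backreference \1 forces the closing identifier to match the opener.
heredoc = re.compile(r'q"([a-zA-Z_]\w*)\n(.*?)\n\1"', re.S)

src = 'auto s = q"EOS\nhello\nworld\nEOS";'
m = heredoc.search(src)
# m.group(1) is the delimiter identifier, m.group(2) the string body.
```

[A mismatched closer such as q"EOS\nhello\nOTHER" simply fails to match, which is the behavior the backreference buys you.]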
parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
Kirk McDonald wrote:
 Jascha Wetzel wrote:
 therefore you can match q"[^"]*" and check the delimiters during 
 (context sensitive) semantic analysis.

Is the following a valid string?

q"/foo " bar/"

oh, you're right of course...
 Walter also said, in another branch of the thread, that this is not valid:
 
 q"/foo/bar/"
 
 Since it isn't all /that/ hard to match these examples, I wonder why 
 they are disallowed. Just to simplify the lexer that much more?

what string would that represent?

foo/bar
foobar
foo
Sep 11 2007
parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Jascha Wetzel wrote:
 Kirk McDonald wrote:
 
 Jascha Wetzel wrote:

 therefore you can match q"[^"]*" and check the delimiters during 
 (context sensitive) semantic analysis.

Is the following a valid string?

q"/foo " bar/"

oh, you're right of course...
 Walter also said, in another branch of the thread, that this is not 
 valid:

 q"/foo/bar/"

 Since it isn't all /that/ hard to match these examples, I wonder why 
 they are disallowed. Just to simplify the lexer that much more?

what string would that represent?

foo/bar
foobar
foo

I would expect it to represent foo/bar, in the same way that q"(foo(bar))" represents foo(bar).

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Sep 11 2007
parent reply "Aziz K." <aziz.kerim gmail.com> writes:
Kirk McDonald wrote:
 I would expect it to represent foo/bar, in the same way that  
 q"(foo(bar))" represents foo(bar).

'/' is not a nesting delimiter. I think

q"/foo/bar/"

should be scanned as:

q"/foo/  // Error: expected '"' after closing delimiter.
         // "foo" would be the actual value of the literal.
bar      // Identifier token
/        // Division token
"        // Start of a new, normal string literal
Sep 12 2007
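[The competing readings above can be tried out mechanically. A hypothetical regex in the spirit of the earlier Pygments discussion, scanning a non-nesting delimited string up to the first delimiter-then-quote; this is purely illustrative and not DMD's actual rule:]

```python
import re

# Non-nesting delimited string q"<d>...<d>": scan lazily to the first
# occurrence of the delimiter immediately followed by a double quote.
delimited = re.compile(r'q"(.)(.*?)\1"', re.S)

m1 = delimited.match('q"/foo " bar/"')  # delimiter '/', body 'foo " bar'
m2 = delimited.match('q"/foo/bar/"')    # accepted here as 'foo/bar'...
```

[...whereas DMD rejects q"/foo/bar/", since a non-nesting delimiter may only appear twice; the regex shows what the more permissive reading would yield.]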
parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Aziz K. wrote:
 Kirk McDonald wrote:
 
 I would expect it to represent foo/bar, in the same way that  
 q"(foo(bar))" represents foo(bar).

'/' is not a nesting delimiter. I think

q"/foo/bar/"

should be scanned as:

q"/foo/  // Error: expected '"' after closing delimiter.
         // "foo" would be the actual value of the literal.
bar      // Identifier token
/        // Division token
"        // Start of a new, normal string literal

When I updated the Pygments lexer, I interpreted it like this: It sees q"/, and matches a string until it sees /".

As Pygments is merely a syntax highlighter, it is not really that important for it to correctly flag invalid code as erroneous. Obviously, it /should/ do so in the optimum case, and I may get around to fixing this at some point, but it would be nice for the lexical docs to be a little more clear on this subject.

Primarily, I see no reason why q"/foo/bar/" shouldn't be scanned as the string foo/bar. (Though I hasten to add that I recognize we are speaking of edge-cases, probably of interest only to people writing D lexers.)

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Sep 12 2007
prev sibling next sibling parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Jari-Matti Mäkelä wrote:
 Kirk McDonald wrote:
 
 
Walter Bright wrote:

Stewart Gordon wrote:


Maybe.  But still, nested comments are probably likely to be supported
by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex:

q"([a-zA-Z_]\w*)\n.*?\n\1"

It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools:

a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are.

b) some lexers handle both nested comments and delimited strings, but all delimiters must be enumerated in the language definition.

Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?

While D now requires a fairly powerful lexer to lex properly, it's still easier to lex than, for example, Ruby. Ruby's heredoc strings are more complicated than D's. Even Pygments requires some advanced callback trickery to lex them properly.

Docs on Ruby's "here document" string literals:
http://docs.huihoo.com/ruby/ruby-man-1.4/syntax.html#here_doc

Pygments's Ruby lexer:
http://trac.pocoo.org/browser/pygments/trunk/pygments/lexers/agile.py#L260

Also, the lexical phase is still entirely independent of the syntactical and semantic phases, even if it is a little more difficult than it was before. My point is simply that any tool capable of lexing Ruby -- and there are a number of these -- is more than powerful enough to lex D. So the bar is high, but quite reachable.

I do not think it is extraordinary that a tool written in Python would take advantage of Python's regular expressions' features.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Sep 11 2007
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Jari-Matti Mäkelä wrote:
 Kirk McDonald wrote:
 
 Walter Bright wrote:
 Stewart Gordon wrote:

 Maybe.  But still, nested comments are probably likely to be supported
 by more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.

Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex:

q"([a-zA-Z_]\w*)\n.*?\n\1"

It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools:

a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are.

b) some lexers handle both nested comments and delimited strings, but all delimiters must be enumerated in the language definition.

Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?

Ok, why would syntax highlighting have to be implemented with a regexp in the first place?

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Sep 11 2007
prev sibling parent "Stewart Gordon" <smjg_1998 yahoo.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:fc45ic$1k04$1 digitalmars.com...
 Stewart Gordon wrote:
 Maybe.  But still, nested comments are probably likely to be supported by 
 more code editors than such an unusual feature as delimited strings.

Delimited strings are standard practice in Perl.

But how many editors do a good job of syntax-highlighting Perl anyway, considering the mutual dependence between the lexer and the parser?
 C++0x is getting delimited strings.  Code editors that can't handle them 
 are going to become rapidly obsolete.

Maybe. But an editor being obsolete doesn't stop people from using it and even liking it for the features it does have. Take the number of people still using TextPad, for instance.
 The more unusual feature is the token delimited strings.

Indeed. Stewart.
Sep 11 2007
prev sibling parent reply BCS <ao pathlink.com> writes:
Reply to Benjamin,

 Reply to Walter,
 
 Don Clugston wrote:
 
 Also, I don't see any mention of delimited string literals in the
 changelog. <g>
 



OK, I see DelimitedString and TokenString in the BNF, but the docs seem to be a bit mangled about naming things (2 types in the BNF and 3 down below?)
Sep 06 2007
parent reply Nathan Reed <nathaniel.reed gmail.com> writes:
BCS wrote:
 Reply to Benjamin,
 
 Reply to Walter,

 Don Clugston wrote:

 Also, I don't see any mention of delimited string literals in the
 changelog. <g>



OK, I see DelimitedString and TokenString in the BNF, but the docs seem to be a bit mangled about naming things (2 types in the BNF and 3 down below?)

What are you referring to? There are two doc sections, "Delimited Strings" and "Token Strings". Thanks, Nathan Reed
Sep 06 2007
parent BCS <ao pathlink.com> writes:
Reply to Nathan,

 BCS wrote:
 
 Reply to Benjamin,
 
 Reply to Walter,
 
 Don Clugston wrote:
 
 Also, I don't see any mention of delimited string literals in the
 changelog. <g>
 



OK, I see DelimitedString and TokenString in the BNF, but the docs seem to be a bit mangled about naming things (2 types in the BNF and 3 down below?)

What are you referring to? There are two doc sections, "Delimited Strings" and "Token Strings". Thanks, Nathan Reed

oops I read the table heading "Nesting Delimiters" as a section heading and I guess heredoc and delimited are the same thing.
Sep 06 2007
prev sibling next sibling parent reply Reiner Pope <some address.com> writes:
Walter Bright wrote:
 Don Clugston wrote:
 1.020 seemed to be stable. Like 1.016, it was around for a long time, 
 and therefore particularly well tested. There were some great bug 
 fixes in 1.018 and 1.019.

Done.
 There's that substantive change about .init which happened in 1.017. 
 If that's permanent, it'd be good to stop further development relying 
 on the old behaviour.
 I think we need a policy for when the 'stable version' should be updated.


 Also, I don't see any mention of delimited string literals in the 
 changelog. <g>

Fixed.

According to the docs,

q{/*}*/ }

is the same as

"/*?*/ "

is this a feature to assist macros in parsing strings -- all comments are turned to '?', or is it a mistake?

-- Reiner
Sep 06 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Reiner Pope wrote:
 According to the docs,
 
 q{/*}*/ }
 
 is the same as
 
 "/*?*/ "
 
 is this a feature to assist macros in parsing strings -- all comments 
 are turned to '?', or is it a mistake?

It's a typo. Replace the ? with }.
Sep 06 2007
parent Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Reiner Pope wrote:
 According to the docs,

 q{/*}*/ }

 is the same as

 "/*?*/ "

 is this a feature to assist macros in parsing strings -- all comments 
 are turned to '?', or is it a mistake?

It's a typo. Replace the ? with }.

I also thought it was a ?. Especially since the same example is in the PDF of the conference (slide 36). I also have the same doubt as Bruno: what's the use of delimited strings and token strings?
Sep 07 2007
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Don Clugston wrote:
 1.020 seemed to be stable. Like 1.016, it was around for a long time, 
 and therefore particularly well tested. There were some great bug 
 fixes in 1.018 and 1.019.

Done.
 There's that substantive change about .init which happened in 1.017. 
 If that's permanent, it'd be good to stop further development relying 
 on the old behaviour.
 I think we need a policy for when the 'stable version' should be updated.


 Also, I don't see any mention of delimited string literals in the 
 changelog. <g>

Fixed.

Speaking of which, what is the purpose of delimited strings and the like (token strings, identifier strings)? Neither the docs nor the slides go into much detail. So far I can only see a use for token strings, in string mixins.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Sep 07 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Bruno Medeiros wrote:
 Speaking of which, what is the purpose of delimited strings and the like 
 (token strings, identifier strings)? Neither the docs nor the slides go into 
 much detail. So far I can only see a use for token strings, in string mixins.

Makes it easier to insert arbitrary text as a string without having to worry about an inadvertent delimiter inside the string.
Sep 10 2007
prev sibling next sibling parent Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright Wrote:

 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

Wow, thanks! It was definitely worth the wait! Also, thanks for adding a few non-breaking features (multiple module static constructors/destructors) to the 1.x branch to show it's still got life, and for adding the default lib switch!
Sep 05 2007
prev sibling next sibling parent reply yidabu <nosapm.admin yidabu.com> writes:
Building every program causes:
Compile error: QuadPart is not a member of LARGE_INTEGER

DMD 1.021, Windows XP,

I searched the DMD directory and did not find the definition of LARGE_INTEGER

 
Sep 05 2007
parent reply Sascha Katzner <sorry.no spam.invalid> writes:
yidabu wrote:
 build every program, cause:
 Compile error: QuadPart is not a member  of LARGE_INTEGER 

I've just encountered the same error ("Error: 'QuadPart' is not a member of 'LARGE_INTEGER'") when I tried to compile the WindowsAPI sources. Could be related to: http://d.puremagic.com/issues/show_bug.cgi?id=1473 DMD 1.021, Windows Vista LLAP, Sascha
Sep 06 2007
parent "Stewart Gordon" <smjg_1998 yahoo.com> writes:
"Sascha Katzner" <sorry.no spam.invalid> wrote in message 
news:fbobj2$1tdh$1 digitalmars.com...
 yidabu wrote:
 build every program, cause:
 Compile error: QuadPart is not a member  of LARGE_INTEGER

I've just encountered the same error ("Error: 'QuadPart' is not a member of 'LARGE_INTEGER'") when I tried to compile the WindowsAPI sources. Could be related to: http://d.puremagic.com/issues/show_bug.cgi?id=1473

Indeed. I might have to go back to 1.020 pending a fix. There are quite a few regressions.

http://d.puremagic.com/issues/buglist.cgi?version=1.021&bug_severity=regression

1485 has broken my utility library. 1484 may have broken a project or two of mine as well.

Stewart.
Sep 09 2007
prev sibling next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Walter Bright wrote:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

*ahem*

H A L L E L U J A !

Oh you've made me a very happy boy. :)

The multiple module ctors/dtors thing is *very* welcome. I'll have to poke around the new 2.0 stuff, too.

The only thing left that would allow me to ditch my current, let's call it, "insane" compiler set up would be a switch to specify a different sc.ini file.

But none the less, thanks very much for these! :)

-- Daniel
Sep 05 2007
prev sibling next sibling parent reply Chad J <gamerChad _spamIsBad_gmail.com> writes:
Walter Bright wrote:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip
 
 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

Sweet, I like it. Thank you!!111

One thing though, when I run this:

dmd -defaultlib

it outputs this:

Error: unrecognized switch '-defaultlib'

Same with -debuglib. Am I missing something?
Sep 05 2007
next sibling parent negerns <negerns2000 gmail.com> writes:
Also, the -defaultlib and -debuglib switches do not appear in the dmd 
usage display on the command line.

negerns
Sep 05 2007
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Chad J wrote:
 One thing though, when I run this:
 dmd -defaultlib
 it outputs this:
 Error: unrecognized switch '-defaultlib'

Try: dmd -defaultlib=foo test.d
Sep 05 2007
parent Chad J <gamerChad _spamIsBad_gmail.com> writes:
Walter Bright wrote:
 Chad J wrote:
 One thing though, when I run this:
 dmd -defaultlib
 it outputs this:
 Error: unrecognized switch '-defaultlib'

Try: dmd -defaultlib=foo test.d

Ah, that works. As negerns mentioned, this doesn't show in the dmd usage info. If it did, that would probably help ;)
Sep 06 2007
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
Chad J wrote:
 Walter Bright wrote:
 Mostly bug fixes for CTFE. Added library switches at Tango's request.

 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.021.zip

 http://www.digitalmars.com/d/changelog.html
 http://ftp.digitalmars.com/dmd.2.004.zip

Sweet, I like it. Thank you!!111

One thing though, when I run this:

dmd -defaultlib

it outputs this:

Error: unrecognized switch '-defaultlib'

Same with -debuglib. Am I missing something?

Try running dmd by itself and checking the version. I'll bet you downloaded dmd.zip which points to 1.016 still, not 1.021.
Sep 05 2007
prev sibling next sibling parent Max Samukha <maxter ukr.net> writes:
On Wed, 05 Sep 2007 12:05:07 -0700, Walter Bright
<newshound1 digitalmars.com> wrote:

Mostly bug fixes for CTFE. Added library switches at Tango's request.

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.021.zip

http://www.digitalmars.com/d/changelog.html
http://ftp.digitalmars.com/dmd.2.004.zip

Thanks a lot!
Sep 06 2007
prev sibling parent reply "Aziz K." <aziz.kerim gmail.com> writes:
Hello Walter,

Thanks for the release. Could you clarify a few things regarding the new  
string literals for me, please?

Example:
q"/abc/def/" // Is this "abc/def" or is this an error?

Token string examples:
q{__TIME__} // Should special tokens be evaluated? Resulting in a  
different string than "__TIME__"?
q{666, this is super __EOF__} // Should __EOF__ be evaluated here causing  
the token string to be unterminated?
q{#line 4 "path/to/file"
} // Should the special token sequence be evaluated here?

You provided the following example on the lexer page:
q{ 67QQ }            // error, 67QQ is not a valid D token
Isn't your comment wrong? I see two valid tokens there: an integer "67"  
and an identifier "QQ"

Regards,
Aziz
Sep 10 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Aziz K. wrote:
 Could you clarify a few things regarding the new 
 string literals for me, please?
 
 Example:
 q"/abc/def/" // Is this "abc/def" or is this an error?

Error.
 Token string examples:
 q{__TIME__} // Should special tokens be evaluated? Resulting in a 
 different string than "__TIME__"?

No, no.
 q{666, this is super __EOF__} // Should __EOF__ be evaluated here 
 causing the token string to be unterminated?

Yes (__EOF__ is not a token, it's an end of file)
 q{#line 4 "path/to/file"
 } // Should the special token sequence be evaluated here?

No.
 You provided the following example on the lexer page:
 q{ 67QQ }            // error, 67QQ is not a valid D token
 Isn't your comment wrong? I see two valid tokens there: an integer "67" 
 and an identifier "QQ"

I think you're right.
Sep 11 2007
parent reply "Aziz K." <aziz.kerim gmail.com> writes:
Thanks for clarifying. While implementing the methods in my lexer for  
scanning the new string literals I found a few other ambiguities:

q"∆abcdef∆" // Might be superfluous to ask, but are (non-alpha) Unicode  
character delimiters allowed?
q" abcdef " // "abcdef". Allowed?

q"
äöüß
" // "äöüß". Should leading newlines be skipped or are they allowed as  
delimiters?

q"EOF
abcdefEOF" // Valid? Or is \nEOF a requirement? If so, how would you write  
such a string excluding the last newline? Because you say in the specs  
that the last newline is part of the string. Maybe it shouldn't be?
q"EOF
abcdef
   EOF" // Provided the previous example is an error. Is indenting the  
matching delimiter allowed (with " \t\v\f")?

Walter Bright wrote:
 Aziz K. wrote:
 q{666, this is super __EOF__} // Should __EOF__ be evaluated here  
 causing the token string to be unterminated?

Yes (__EOF__ is not a token, it's an end of file)

0x1A (^Z)? Every time one encounters '_', one would have to look ahead for "_EOF__" and one would have to make sure it's not followed by a valid identifier character. I have twelve instances where I check for \0 and ^Z. It wouldn't be that hard to adapt the code but I'm sure in general it would impact the speed of a D lexer adversely. Regards, Aziz
Sep 11 2007
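[The lookahead cost described above can be sketched concretely. A hypothetical helper (Python, purely illustrative; names and structure are invented) treating NUL, ^Z, real end of input, and a free-standing __EOF__ as end of file:]

```python
def at_eof(src, i):
    """True if position i should be treated as end of file:
    past the end, NUL, ^Z (0x1A), or a free-standing __EOF__.
    The identifier checks keep e.g. x__EOF__y from ending the file."""
    if i >= len(src) or src[i] in "\0\x1a":
        return True
    if src.startswith("__EOF__", i):
        j = i + len("__EOF__")
        before_ok = i == 0 or not (src[i - 1].isalnum() or src[i - 1] == "_")
        after_ok = j >= len(src) or not (src[j].isalnum() or src[j] == "_")
        return before_ok and after_ok
    return False
```

[Every '_' now needs this extra probe, which is the speed concern raised above: the checks for \0 and ^Z are single-character comparisons, while __EOF__ requires a seven-character lookahead plus boundary tests.]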
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Aziz K. wrote:
 Thanks for clarifying. While implementing the methods in my lexer for 
 scanning the new string literals I found a few other ambiguities:
 
 q"∆abcdef∆" // Might be superfluous to ask, but are (non-alpha) Unicode 
 character delimiters allowed?

Yes.
 q" abcdef " // "abcdef". Allowed?

Yes.
 q"
 äöüß
 " // "äöüß". Should leading newlines be skipped or are they allowed as 
 delimiters?

Skipped.
 q"EOF
 abcdefEOF" // Valid?

No.
 Or is \nEOF a requirement?

Yes.
 If so, how would you 
 write such a string excluding the last newline?

Can't.
 Because you say in the 
 specs that the last newline is part of the string. Maybe it shouldn't be?
 q"EOF
 abcdef
   EOF" // Provided the previous example is an error. Is indenting the 
 matching delimiter allowed (with " \t\v\f")?

No.
 Walter Bright wrote:
 Aziz K. wrote:
 q{666, this is super __EOF__} // Should __EOF__ be evaluated here 
 causing the token string to be unterminated?

Yes (__EOF__ is not a token, it's an end of file)

0x1A (^Z)? Every time one encounters '_', one would have to look ahead for "_EOF__" and one would have to make sure it's not followed by a valid identifier character. I have twelve instances where I check for \0 and ^Z. It wouldn't be that hard to adapt the code but I'm sure in general it would impact the speed of a D lexer adversely. Regards, Aziz

Sep 12 2007
prev sibling parent reply BCS <ao pathlink.com> writes:
Reply to Aziz K.,

 q"EOF
 abcdefEOF" // Valid?

 Or is \nEOF a requirement? If so, how would you
 write
 such a string excluding the last newline?

q"EOF
abcdef
EOF"[0..$-1]
Sep 12 2007
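[The slice workaround above leans on the rule, stated earlier in the thread, that the heredoc keeps its final newline. In Python terms, just to show the arithmetic:]

```python
# Per the spec discussion, q"EOF\nabcdef\nEOF" yields "abcdef\n",
# final newline included; slicing one character off drops it,
# which is what [0..$-1] does in the D snippet above.
s = "abcdef\n"
trimmed = s[:-1]
```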
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
BCS wrote:
 Reply to Aziz K.,
 
 q"EOF
 abcdefEOF" // Valid?

 Or is \nEOF a requirement? If so, how would you
 write
 such a string excluding the last newline?

q"EOF
abcdef
EOF"[0..$-1]

Or peel off the last line:

q"EOF
abcdef ghijkl
mnop qrstuv
EOF" "wxyz."

Still... Why the draconian limitation that heredocs MUST always have a newline? Seems like allowing escaped newlines would make life easier. Like

q"EOF
abcdef ghijkl
mnop qrstuv wxyz.\
EOF"

Or make only the last newline escapable with something prefixing the terminator, like \:

q"EOF
abcdef ghijkl
mnop qrstuv wxyz.
\EOF"

--bb
Sep 13 2007