
digitalmars.D - Migrating D front end to D - post Dconf

reply "Iain Buclaw" <ibuclaw gdcproject.org> writes:
Daniel and/or David,

We should list down in writing the issues preventing DMD, GDC, 
and LDC having a shared code base.  From what David has shown me, 
LDC will need the most work for this, but I'll list down what I 
can remember.

1. Support extern(C++) classes so we can have a split C++/D 
implementation of eg: Expression and others.

2. Support representing integers and floats to a greater 
precision than what the host can natively support. In D there's 
BigInt for integral types, and there's a possibility of using 
std.numeric for floats.  For me, painless conversion between eg: 
BigInt <-> GCC's double_int is a requirement, but that is more of 
an afterthought at this point in time.

3. Array ops should be moved out of the front end. The back end 
can deal with emitting the correct Libcall if required.

4. Continue building upon Target to hide target-specific things 
from the front end.  Off the top of my head I've got two to raise 
pulls for: __VENDOR__ and retrieving memalignsize for fields.

5. DMD sends messages to stdout, GDC sends to stderr.  Just a 
small implementation detail, but worth noting that where 
'printf' appears, it's almost always rewritten as fprintf(stderr) 
for GDC.

6. LDC does not implement toObjFile, toCtype, toDt, toIR, 
possibly others...

7. BUILTINxxx could be moved into Target, as there is no reason 
why each back end can't support its own builtins for the 
purpose of CTFE.

8. D front end's port.h can't be used by GDC because of a 
dependency on mars.h; this could perhaps be replaced by 
std.numeric post conversion.

9. Opaque declarations of back end types defined in front end 
differ for each compiler implementation.  Eg: elem is a typedef 
to union tree_node.

10. The main function in mars.c is not used by GDC, possibly LDC 
also.  Another implementation detail, but also a note to maybe 
split out errorSupplemental and others from that file.

11. The function genCfunc does not generate the arguments of the 
extern(C) symbol.

12. LDC adds extra reserved version identifiers that are not 
allowed to be declared in D code.  This could and probably should 
be merged into the D front end. I don't think it would be wise to 
let back ends have the ability to add their own.  Also, this list 
needs updating regardless to reflect the documented spec.

13. LDC makes some more arbitrary changes for which the reason 
has been forgotten. Get on it David!  :o)

14. Reading sources asynchronously; GDC ifdefs this out.  Do we 
really need this?  I seem to recall that the speed increase is 
either negligible or offers no benefit to compilation speed.

15. Deal with all C++ -> D conversion
May 05 2013
next sibling parent "Iain Buclaw" <ibuclaw gdcproject.org> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 15. Deal with all C++ -> D conversion

15. Deal with all C++ -> D conversion issues (see all DDMD marked pull 
requests).

16. Testing the C++ -> D front end conversion on Linux.  Daniel, you can 
send me the sources to test that if getting a Linux box is a problem for 
you.

Anything else I missed?  Oh, perhaps licensing issues.  I know the C++ 
sources for the D front end have been assigned to the FSF by Walter; I 
think the conversion to D is enough change to warrant reassignment.

1, 2, 3, get destroying...

Regards
Iain.
May 05 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 13. LDC makes some more arbitrary changes to which the reason 
 for the change has been forgotten. Get on it David!  :o)

This applies only to a small part of the changes. The larger share of them 
will actually need adaptation of the upstream frontend sources for a very 
good reason if we want to have a truly shared codebase.

As for the size of the diff, don't forget that LDC doesn't enjoy the 
luxury of having IN_LLVM sections in the upstream source – the difference 
in amount of changes actually isn't that large:

---
$ fgrep -rI IN_GCC dmd/src | wc -l
49

$ fgrep -rI IN_LLVM ldc/dmd2 | wc -l
57
---

David
May 05 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 12. LDC adds extra reserved version identifiers that are not 
 allowed to be declared in D code.  This could and probably 
 should be merged into D front end. Don't think it would be wise 
 to let back end's have the ability to add their own.  Also this 
 list needs updating regardless to reflect the documented spec.

I think we should just add the full list from 
http://dlang.org/version.html. This would also resolve the issue for LDC.

David
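As a rough sketch only (identifiers shown are a small subset of the predefined versions documented at dlang.org/version.html, and the function and table names are made up for illustration, not the real front-end code), keeping one table and rejecting colliding user `version = X;` declarations could look like this:

---
import std.algorithm.searching : canFind;

// Subset of the predefined version identifiers from dlang.org/version.html;
// the real table would carry the complete documented list.
immutable string[] reservedVersions = [
    "DigitalMars", "GNU", "LDC", "Windows", "linux", "OSX",
    "X86", "X86_64", "LittleEndian", "BigEndian", "D_LP64",
];

// Reject `version = X;` when X collides with a reserved identifier.
bool isReservedVersion(string ident)
{
    return reservedVersions.canFind(ident);
}
---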
May 05 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 3:30 PM, "David Nadlinger" <see klickverbot.at> wrote:

 This applies only to a small part of the changes. The larger share of 
 them will actually need adaptation of the upstream frontend sources for 
 a very good reason if we want to have a truly shared codebase.

 As for the size of the diff, don't forget that LDC doesn't enjoy the 
 luxury of having IN_LLVM sections in the upstream source – the 
 difference in amount of changes actually isn't that large:

 $ fgrep -rI IN_GCC dmd/src | wc -l
 49

 $ fgrep -rI IN_LLVM ldc/dmd2 | wc -l
 57

Indeed, but I was thinking of changes that aren't ifdef'd.  I'm sure I saw 
a few...

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 05 2013
prev sibling next sibling parent reply "Luís Marques" <luismarques gmail.com> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so can have a split C++/D 
 implementation of eg: Expression and others.

I don't know if this will be in the videos, so I'll ask here. I thought extern(C++) only supported interfaces because everything else fell into the "we'd need to pretty much include a C++ compiler into D to support that" camp. Is that not quite true for classes? Did you find some compromise between usefulness and complexity that wasn't obvious before, or did the D compiler transition just motivate adding some additional complexity that previously wasn't deemed acceptable?
May 05 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/5/2013 9:17 AM, "Luís Marques" <luismarques gmail.com> wrote:
 On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so can have a split C++/D implementation of eg:
 Expression and others.

 I don't know if this will be in the videos, so I'll ask here. I thought extern(C++) only supported interfaces because everything else fell into the "we'd need to pretty much include a C++ compiler into D to support that" camp. Is that not quite true for classes? Did you find some compromise between usefulness and complexity that wasn't obvious before, or did the D compiler transition just motivate adding some additional complexity that previously wasn't deemed acceptable?

extern(C++) interfaces are ABI compatible with C++ "COM" classes - i.e. single inheritance, no constructors or destructors.
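A minimal sketch of that kind of binding, with illustrative names rather than the real AST declarations: a C++ class that exposes only virtual methods can already be used from D through an extern(C++) interface.

---
// D side: binds, in vtable order, to a C++ class such as
//   class Expression { public: virtual int getOp() = 0; };
// Expression and getOp are illustrative names, not the real front-end API.
extern (C++) interface Expression
{
    int getOp();
}

// Calling through the interface dispatches via the C++ vtable.
int opOf(Expression e) { return e.getOp(); }
---

Item 1 in the list is about going beyond this, so that classes with fields and non-virtual members can also be shared between the C++ and D halves.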
May 05 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 5:20 PM, "Luís Marques" <luismarques gmail.com> wrote:

 On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so we can have a split C++/D 
 implementation of eg: Expression and others.

 I don't know if this will be in the videos, so I'll ask here. I thought 
 extern(C++) only supported interfaces because everything else fell into 
 the "we'd need to pretty much include a C++ compiler into D to support 
 that" camp. Is that not quite true for classes? Did you find some 
 compromise between usefulness and complexity that wasn't obvious before, 
 or did the D compiler transition just motivate adding some additional 
 complexity that previously wasn't deemed acceptable?

It was mentioned, however I do believe there are a few more complicated 
things than that.  Many would be in a position to educate you on that.

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 05 2013
prev sibling next sibling parent =?UTF-8?B?Ikx1w61z?= Marques" <luismarques gmail.com> writes:
On Sunday, 5 May 2013 at 20:33:15 UTC, Walter Bright wrote:
 extern(C++) interfaces are ABI compatible with C++ "com" 
 classes - i.e. single inheritance, no constructors or 
 destructors.

That I know, thanks; I just understood point one to mean some additional extern(C++) support:
 1. Support extern(C++) classes so can have a split C++/D 
 implementation of eg: Expression and others.

May 05 2013
prev sibling next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw gdcproject.org> wrote in message 
news:qtcogcbrhfzjvuoayyjr forum.dlang.org...
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC 
 having a shared code base.  From what David has shown me, LDC will need 
 the most work for this, but I'll list down what I can remember.

oooook here we go:

We have three goals:
A: D frontend ported to D
B: Identical frontend code shared between all three backends
C: Fixing the layering violations in the glue layer (in some cases this 
probably blocks B)
 1. Support extern(C++) classes so can have a split C++/D implementation of 
 eg: Expression and others.

s/others/all ast classes/

Required for A only
 2. Support representing integers and floats to a greater precision than 
 what the host can natively support.

This should be 'Support representing integers and floats to the EXACT 
precision that the TARGET supports at runtime'.

The old arguments about how you can't rely on floating point exactness do 
not hold up when cross compiling - all compilers that differ only in host 
compiler/machine must produce identical binaries.

This is really a separate issue.
 In D there's BigInt for integral types, and there's a possibility of using 
 std.numeric for floats.  For me, painless conversion between eg: BigInt 
 <-> GCC's double_int is a requirement, but that is more of an after 
 thought at this point in time.

Because this does not block anything, it _can_ wait until the port is 
complete; we can live with some weirdness in floating point at compile 
time.  I completely agree it should be fixed eventually.
 3. Array ops should be moved out of the front end. The back end can deal 
 with emitting the correct Libcall if required.

Only blocks C...
 4. Continue building upon Target to hide target-specific things from the 
 front end.  Off the top of my head I've got two to raise pulls for: 
 __VENDOR__ and retrieving memalignsize for fields.

Only blocks B (and fixing it helps C)
 5. DMD sends messages to stdout, GDC sends to stderr.  Just a small 
 implementation detail, but worth noting where 'printf'appears, it's almost 
 always rewritten as fprintf(stderr) for GDC.

Similar.
 6. LDC does not implement toObjFile, toCtype, toDt, toIR, possibly 
 others...

This is another layering violation, and eventually I believe we should migrate to an _actual_ visitor pattern, so ast classes do not need to know anything about the glue layer. I think we should work around this for now. (With #ifdef, or adding _all_ virtuals to the frontend and stubbing the unused ones)
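A minimal sketch of that visitor shape, using hypothetical class names rather than the real DMD ones: the AST only knows about Visitor, and each back end subclasses it instead of adding toObjFile/toCtype/toElem-style members to the AST classes themselves.

---
// Hypothetical sketch of the visitor approach; names are illustrative.
class Expression
{
    void accept(Visitor v) { v.visit(this); }
}

class AddExp : Expression
{
    override void accept(Visitor v) { v.visit(this); }
}

class Visitor
{
    void visit(Expression e) { assert(0, "unhandled node"); }
    void visit(AddExp e) { visit(cast(Expression) e); }
    // ... one overload per AST node
}

// A back end's glue layer derives from Visitor, so the front end never
// needs to reference back-end types at all.
class ToIRVisitor : Visitor
{
    override void visit(AddExp e) { /* emit back-end IR for '+' here */ }
}
---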
 7. BUILTINxxx could be moved into Target, as there is no reason why each 
 back end can't support their own builtins for the purpose of CTFE.

Makes sense. I guess if Target detects a builtin it gets Port to evaluate it. Maybe we should rename Port to Host?
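One hedged sketch of that shape (enum members and signatures are made up for illustration; folding is done directly in Target here, whereas Daniel suggests handing the evaluation to Port/Host):

---
import std.math : cos, sin, sqrt;

// Hypothetical shape only: each back end recognises its own intrinsics
// and the CTFE interpreter asks Target to fold them.
enum BUILTIN { unimp, sin, cos, sqrt }

struct Target
{
    static BUILTIN isBuiltin(string name)
    {
        // The real lookup would go by mangled name; plain names are used
        // here purely for illustration.
        switch (name)
        {
            case "sin":  return BUILTIN.sin;
            case "cos":  return BUILTIN.cos;
            case "sqrt": return BUILTIN.sqrt;
            default:     return BUILTIN.unimp;
        }
    }

    static real evaluate(BUILTIN b, real arg)
    {
        final switch (b)
        {
            case BUILTIN.unimp: assert(0);
            case BUILTIN.sin:   return sin(arg);
            case BUILTIN.cos:   return cos(arg);
            case BUILTIN.sqrt:  return sqrt(arg);
        }
    }
}
---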
 8. D front end's port.h can't be used by GDC because of dependency  on 
 mars.h, this could perhaps be replaced by std.numeric post conversion.

Didn't we find it doesn't rely on anything substantial? This can certainly be cleaned up.
 9. Opaque declarations of back end types defined in front end differ for 
 each compiler implementation.  Eg: elem is a typedef to union tree_node.

Same problem as 6, except opaque types can be safely ignored/used as they are opaque.
 10. The main function in mars.c is not used by GDC, possibly LDC also. 
 Another implementation detail but also a note to maybe split out 
 errorSupplemental and others from that file.

I'm happy with each compiler having their own 'main' file. Yes we need to move the common stuff into another file.
 11. The function genCfunc does not generate the arguments of the extern(C) 
 symbol.

I think this only blocks C.
 12. LDC adds extra reserved version identifiers that are not allowed to be 
 declared in D code.  This could and probably should be merged into D front 
 end. Don't think it would be wise to let back end's have the ability to 
 add their own.  Also this list needs updating regardless to reflect the 
 documented spec.

Makes sense.
 13. LDC makes some more arbitrary changes to which the reason for the 
 change has been forgotten. Get on it David!  :o)

I know very little about this but hopefully most of it can go into Target/get merged upstream.
 14. Reading sources asynchronously, GDC ifdefs this out.  Do we really 
 need this?  I seem to recall that the speed increase is either negligible 
 or offers no benefit to compilation speed.

I think #ifdefed or dropped are both fine.
 15. Deal with all C++ -> D conversion

Yeah.
 16. Testing the C++ -> D front end conversion on Linux.   Daniel you can 
 send me the sources to test that if getting a Linux box is a problem for 
 you.

It's not a problem, just not my primary platform and therefore not my 
first focus.  At the moment you would need a modified porting tool to 
compile for anything except win32.  To get there we need to fix the 
#ifdef-cutting-expressions-and-statements-etc mess.  I'm not sure how bad 
this is because last time I tried I was going for the backend as well.  
I'll have a go on my flight until my laptop battery runs out.

There is more, it's just more of the same.
May 05 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
I'm expecting lots of positive comments when I get off my flight in 14 
hours.

"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:km7aqo$2kv4$1 digitalmars.com...
 "Iain Buclaw" <ibuclaw gdcproject.org> wrote in message 
 news:qtcogcbrhfzjvuoayyjr forum.dlang.org...
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC 
 having a shared code base.  From what David has shown me, LDC will need 
 the most work for this, but I'll list down what I can remember.

oooook here we go: We have three goals: A: D frontend ported to D B: Identical frontend code shared between all three backends C: Fixing the layering violations in the glue layer (in some cases this probably blocks B)
 1. Support extern(C++) classes so can have a split C++/D implementation 
 of eg: Expression and others.

s/others/all ast classes/ Requred for A only
 2. Support representing integers and floats to a greater precision than 
 what the host can natively support.

This should be 'Support representing integers and floats to the EXACT precisison that the TARGET supports at runtime'. The old arguments about how you can't rely on floating point exactness do not hold up when cross compiling - all compilers that differ only in host compiler/machine must produce identical binaries. This is really a seperate issue.
 In D there's BigInt for integral types, and there's a possibility of 
 using std.numeric for floats.  For me, painless conversion between eg: 
 BigInt <-> GCC's double_int is a requirement, but that is more of an 
 after thought at this point in time.

Because this does not block anything it _can_ wait until the port is complete, we can live with some weirdness in floating point at compile time. I completely agree it should be fixed eventually.
 3. Array ops should be moved out of the front end. The back end can deal 
 with emitting the correct Libcall if required.

Only blocks C...
 4. Continue building upon Target to hide target-specific things from the 
 front end.  Off the top of my head I've got two to raise pulls for: 
 __VENDOR__ and retrieving memalignsize for fields.

Only blocks B (and fixing it helps C)
 5. DMD sends messages to stdout, GDC sends to stderr.  Just a small 
 implementation detail, but worth noting where 'printf'appears, it's 
 almost always rewritten as fprintf(stderr) for GDC.

Similar.
 6. LDC does not implement toObjFile, toCtype, toDt, toIR, possibly 
 others...

This is another layering violation, and eventually I believe we should migrate to an _actual_ visitor pattern, so ast classes do not need to know anything about the glue layer. I think we should work around this for now. (With #ifdef, or adding _all_ virtuals to the frontend and stubbing the unused ones)
 7. BUILTINxxx could be moved into Target, as there is no reason why each 
 back end can't support their own builtins for the purpose of CTFE.

Makes sense. I guess if Target detects a builtin it gets Port to evaluate it. Maybe we should rename Port to Host?
 8. D front end's port.h can't be used by GDC because of dependency  on 
 mars.h, this could perhaps be replaced by std.numeric post conversion.

Didn't we find it doesn't rely on anything substantial? This can certainly be cleaned up.
 9. Opaque declarations of back end types defined in front end differ for 
 each compiler implementation.  Eg: elem is a typedef to union tree_node.

Same problem as 6, except opaque types can be safely ignored/used as they are opaque.
 10. The main function in mars.c is not used by GDC, possibly LDC also. 
 Another implementation detail but also a note to maybe split out 
 errorSuplimental and others from that file.

I'm happy with each compiler having their own 'main' file. Yes we need to move the common stuff into another file.
 11. The function genCfunc does not generate the arguments of the 
 extern(C) symbol.

I think this only blocks C.
 12. LDC adds extra reserved version identifiers that are not allowed to 
 be declared in D code.  This could and probably should be merged into D 
 front end. Don't think it would be wise to let back end's have the 
 ability to add their own.  Also this list needs updating regardless to 
 reflect the documented spec.

Makes sense.
 13. LDC makes some more arbitrary changes to which the reason for the 
 change has been forgotten. Get on it David!  :o)

I know very little about this but hopefully most of it can go into Target/get merged upstream.
 14. Reading sources asynchronously, GDC ifdefs this out.  Do we really 
 need this?  I seem to recall that the speed increase is either 
 negliegable or offers no benefit to compilation speed.

I think #ifdefed or dropped are both fine.
 15. Deal with all C++ -> D conversion

Yeah.
 16. Testing the C++ -> D front end conversion on Linux.   Daniel you can 
 send me the sources to test that if getting a Linux box is a problem for 
 you.

It's not a problem, just not my primary platform and therefore not my first focus. At the moment you would need a modified porting tool to compile for anything except win32. To get here we need to fix the #ifdef-cutting-expressions-and-statements-etc mess. I'm not sure how bad this is because last time I tried I was going for the backend as well. I'll have a go on my flight until my laptop battery runs out. There is more, it's just more of the same.

May 05 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
When devising solutions, I want to prefer solutions that do not rely on 
#ifdef/#endif. I've tried to scrub those out of the dmd front end source code.
May 05 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:km7fml$2rka$1 digitalmars.com...
 When devising solutions, I want to prefer solutions that do not rely on 
 #ifdef/#endif. I've tried to scrub those out of the dmd front end source 
 code.

I completely agree.  But - refactoring the glue layer interface to use a 
proper visitor interface (what I suspect is the best solution) is a rather 
large change and will be much easier _after_ the conversion.

While ifdefs are a pain in general, the big problem is this pattern.

if (a && b &&
#if SOMETHING
    c && d &&
#else
    e && f &&
#endif
    g && h) {
...
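Since D's version blocks cannot cut through the middle of an expression the way #if can, one possible way to untangle that particular shape before (or during) conversion is to hoist the conditional sub-expression into a helper, so the conditional only ever wraps whole declarations. A sketch with placeholder names throughout:

---
// Placeholder names; the point is only the shape of the rewrite.
version (SOMETHING)
    bool middleTerms(bool c, bool d, bool e, bool f) { return c && d; }
else
    bool middleTerms(bool c, bool d, bool e, bool f) { return e && f; }

void example(bool a, bool b, bool c, bool d, bool e, bool f, bool g, bool h)
{
    if (a && b && middleTerms(c, d, e, f) && g && h)
    {
        // ...
    }
}
---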
May 06 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:km7lir$48g$1 digitalmars.com...
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:km7fml$2rka$1 digitalmars.com...
 When devising solutions, I want to prefer solutions that do not rely on 
 #ifdef/#endif. I've tried to scrub those out of the dmd front end source 
 code.

 I completely agree.  But - refactoring the glue layer interface to use a 
 proper visitor interface (what I suspect is the best solution) is a 
 rather large change and will be much easier _after_ the conversion.

 While ifdefs are a pain in general, the big problem is this pattern.

 if (a && b &&
 #if SOMETHING
     c && d &&
 #else
     e && f &&
 #endif
     g && h) {
 ...

It turns out these are actually not that big a problem in the frontend - around 30 cases, all DMDV2 or 0/1. The backend is another story...
May 07 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 6 May 2013 08:19, Daniel Murphy <yebblies nospamgmail.com> wrote:

 While ifdefs are a pain in general, the big problem is this pattern.

 if (a && b &&
 #if SOMETHING
     c && d &&
 #else
     e && f &&
 #endif
     g && h) {
 ...

^^ One thing I won't miss about removing all DMDV1 macros from GDC glue. ;)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 06 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 6 May 2013 05:16, Daniel Murphy <yebblies nospamgmail.com> wrote:

 This should be 'Support representing integers and floats to the EXACT 
 precision that the TARGET supports at runtime'.

 This is really a separate issue.

Probably yes, but I cannot consider switching without it.

 Because this does not block anything, it _can_ wait until the port is 
 complete; we can live with some weirdness in floating point at compile 
 time.  I completely agree it should be fixed eventually.

Indeed, and I can deal without BigInt.

 Didn't we find it doesn't rely on anything substantial?  This can 
 certainly be cleaned up.

Nothing substantial, no.  And cleaned up, it should be.  I just haven't 
spent more than 5 minutes looking at it.

 I'm happy with each compiler having their own 'main' file.  Yes we need 
 to move the common stuff into another file.

Have any suggestions for where to move this? (other than a new file)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 06 2013
prev sibling next sibling parent Thomas Koch <thomas koch.ro> writes:
Do you plan to support a build path that has no circular dependencies? 
This would be a very strong nice to have for porting D to new architectures.

So it should be possible to build a subset of D (stage 1) with gcc without 
relying on a D compiler and then using the stage 1 binary to build a 
complete D compiler.

There are languages in Debian that rely on themselves to be built and it's a 
headache to support those languages on all architectures.

Regards, Thomas Koch
May 09 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 10:11, Thomas Koch <thomas koch.ro> wrote:

 Do you plan to support a build path that has no circular dependencies?
 This would be a very strong nice to have for porting D to new
 architectures.

I will very likely keep a branch with the C++-implemented front end for 
these purposes.  But ideally we should get porting done as soon as 
possible ahead of this move, so that there are already D compilers 
available for said targets.

Though it would be nice for the D implementation to be kept to a subset 
that is backwards compatible with 2.062 (or whatever version we decide to 
make the switch at), that is something I cannot guarantee.

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 9 May 2013 at 09:11:05 UTC, Thomas Koch wrote:
 There are languages in Debian that rely on themselves to be 
 build and it's a
 headache to support those languages on all architectures.

Wouldn't the "normal" workflow for porting to a new platform be to start out with a cross-compiler anyway? David
May 09 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 12:50, David Nadlinger <see klickverbot.at> wrote:

 On Thursday, 9 May 2013 at 09:11:05 UTC, Thomas Koch wrote:
 There are languages in Debian that rely on themselves to be built and
 it's a headache to support those languages on all architectures.

 Wouldn't the "normal" workflow for porting to a new platform be to start
 out with a cross-compiler anyway?

 David

Currently... only if the target platform does not have a native C++ 
compiler.

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 13:06, Iain Buclaw <ibuclaw ubuntu.com> wrote:

 Currently... only if the target platform does not have a native C++
 compiler.

Though that assumes that the target platform has a C compiler already... :)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
prev sibling next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 2:36 PM, "Iain Buclaw" <ibuclaw gdcproject.org> wrote:

 2. Support representing integers and floats to a greater precision than
 what the host can natively support. In D there's BigInt for integral
 types, and there's a possibility of using std.numeric for floats.  For 
 me, painless conversion between eg: BigInt <-> GCC's double_int is a 
 requirement, but that is more of an afterthought at this point in time.

Actually, the more I sit down and think about it, the more I question 
whether or not it is a good idea for the D D front end to have a 
dependency on phobos.  Maybe I should stop thinking in general.  :)

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I question
 whether or not it is a good idea for the D D front end to have a 
 dependency
 on phobos.   Maybe I should stop thinking in general.  :)

Yeah, the compiler can't depend on phobos. But if we really need to, we can clone a chunk of phobos and add it to the compiler. Just so long as there isn't a loop. BigInt is a pretty good candidate.
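As a quick illustration of why std.bigint is attractive for the integer side of item 2 (the values below are arbitrary): it holds full-width target constants exactly, whatever the host's native integer width happens to be.

---
import std.bigint;

void main()
{
    // ulong.max on a 64-bit target, held exactly even on a 32-bit host.
    auto x = BigInt("18446744073709551615");
    x += 1;
    assert(x == BigInt(1) << 64);
}
---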
May 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:llovknbpvcnksinsnpfk forum.dlang.org...
 On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I question
 whether or not it is a good idea for the D D front end to have a 
 dependency
 on phobos.   Maybe I should stop thinking in general.  :)

Yeah, the compiler can't depend on phobos.

 Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David

Yes it's possible, but it seems like a really bad idea because:
- Phobos is huge
- Changes in phobos now have the potential to break the compiler

If you decide that all later versions of the compiler must compile with 
all earlier versions of phobos, then those phobos modules are unable to 
change.

If you do it the other way and say old versions of the compiler must be 
able to compile the newer compilers and their versions of phobos, you've 
locked phobos to an old subset of D. (And effectively made the compiler 
source base enormous)

The nice middle ground is you take the chunk of phobos you need, add it to 
the compiler source, and say 'this must always compile with earlier 
versions of the compiler'.
May 11 2013
next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:mwkwqttkbdpmzvyviymq forum.dlang.org...
 On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
 If you decide that all later versions of the compiler must compile with 
 all
 earlier versions of phobos, then those phobos modules are unable to 
 change.

 In (the rare) case of breaking changes, we could always work around them in the compiler source (depending on __VERSION__), rather than duplicating everything up-front. I believe *this* is the nice middle ground. David

That... doesn't sound very nice to me. How much of phobos are we realistically going to need?
May 11 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:bwkwvbjdykrnsdezprls forum.dlang.org...
 On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are we
 realistically going to need?

 All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little point in translating it to D in the first place.

I disagree. Phobos is great, but there are thousands of things in the language itself that make it much more pleasant and effective than C++.
 Of course, there will be some restrictions arising from the fact that the 
 code base needs to work with D versions from a year back or so. But to me 
 duplicating the whole standard library inside the compiler source seems 
 like maintenance hell.

 David

I agree. But I was thinking much longer term compatibility, and a much smaller chunk of phobos.
May 11 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/11/13 1:10 PM, Daniel Murphy wrote:
 Yes it's possible, but it seems like a really bad idea because:
 - Phobos is huge
 - Changes in phobos now have the potential to break the compiler

The flipside is:
- Phobos offers many amenities and opportunities for reuse
- Breakages in Phobos will be experienced early on a large system using 
them

I've talked about this with Simon Peyton-Jones who was unequivocal to 
assert that writing the Haskell compiler in Haskell has had enormous 
benefits in improving its quality.

Andrei
May 11 2013
next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:wynfxitcgpiggwemrmkx forum.dlang.org...
 On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:
 - Breakages in Phobos will be experienced early on a large system using 
 them

 I've talked about this with Simon Peyton-Jones who was unequivocal to 
 assert that writing the Haskell compiler in Haskell has had enormous 
 benefits in improving its quality.

This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David

I'm confident in D, just not in phobos.  Even if phobos didn't exist, we'd 
still be in better shape using D than C++.

What exactly are we going to need from phobos?  sockets?  std.datetime?  
std.regex?  std.container?

If we use them in the compiler, we effectively freeze them.  We can't use 
the new modules, because the old toolchains don't have them.  We can't fix 
old broken modules because the compiler depends on them.  If you add code 
to work around old modules being gone in later versions, you pretty much 
end up moving the source into the compiler after all.

If we only need to be able to compile with a version from 6 months ago, 
this is not a problem.  A year and it's still workable.  But two years?  
Three?

We can get something right here that gcc got so horribly wrong.
May 11 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/11/13 2:15 PM, Daniel Murphy wrote:
 "David Nadlinger"<see klickverbot.at>  wrote in message
 news:wynfxitcgpiggwemrmkx forum.dlang.org...
 On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:
 - Breakages in Phobos will be experienced early on a large system using
 them

 I've talked about this with Simon Peyton-Jones who was unequivocal to
 assert that writing the Haskell compiler in Haskell has had enormous
 benefits in improving its quality.

This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David

I'm confident in D, just not in phobos. Even if phobos didn't exist, we'd still be in better shape using D than C++. What exactly are we going to need from phobos? sockets? std.datetime? std.regex? std.container? If we use them in the compiler, we effectively freeze them. We can't use the new modules, because the old toolchains don't have them. We can't fix old broken modules because the compiler depends on them. If you add code to work around old modules being gone in later versions, you pretty much end up moving the source into the compiler after all. If we only need to be able to compile with a version from 6 months ago, this is not a problem. A year and it's still workable. But two years? Three? We can get something right here that gcc got so horribly wrong.

But you're exactly enumerating the problems any D user would face when we make breaking changes to Phobos. Andrei
May 11 2013
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
11-May-2013 22:15, Daniel Murphy wrote:
 If we aren't confident that we can write and maintain a large real-world
 application in D just yet, we must pull the emergency brakes on the whole
 DDDMD effort, right now.

 David

I'm confident in D, just not in phobos. Even if phobos didn't exist, we'd still be in better shape using D than C++. What exactly are we going to need from phobos? sockets? std.datetime? std.regex? std.container?

Sockets may come in handy one day. Caching compiler daemon etc. std.container well ... mm ... eventually.
 If we use them in the compiler, we effectively freeze them.  We can't use
 the new modules, because the old toolchains don't have them.  We can't fix
 old broken modules because the compiler depends on them.  If you add code to
 work around old modules being gone in later versions, you pretty much end up
 moving the source into the compiler after all.

I propose a different middle ground: define a minimal subset of phobos, compilable and usable separately. Then full phobos will depend on it in turn (or rather contain it).

Related to my recent thread on limiting inter-dependencies - we will have to face that problem while making a subset of phobos. It has some operational costs but will limit the frozen surface.

--
Dmitry Olshansky
May 11 2013
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 11.05.2013 23:43, schrieb John Colvin:
 On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:
 On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
 Except that now, it is a pain to migrate old haskell stuff to
 newer haskelle stuff if you missed several compile release.

 You ends up building recursively from the native version to the
 version you want.

Yeah. And I'm stuck with the opposite problem at the moment. I have to be able to build old haskell code without updating it, but I don't have an older version of ghc built currently, and getting a version old enough to compile my code has turned out to be a royal pain, because the old compiler won't compile with the new compiler. I don't even know if I'm going to be able to do it. If you're always moving forward, you're okay, but if you have to deal with older code, then you quickly run into trouble if the compiler is written in an up-to-date version of the language that it's compiling. At least at this point, if you needed something like 2.059 for some reason, you can just grab 2.059, compile it, and use it with your code. But if the compiler were written in D, and the version of D with 2.059 was not fully compatible with the current version, then compiling 2.059 would become a nightmare. The situation between a normal program and the compiler is quite different. With a normal program, if your code isn't going to work with the current compiler due to language or library changes, then you just grab an older version of the compiler and use that (possibly upgrading your code later if you intend to maintain it long term). But if it's the compiler that you're trying to compile, then you're screwed by any language or library changes that affect the compiler, because it could very well become impossible to compile older versions of the compiler. Yes, keeping language and library changes to a minimum reduces the problem, but unless they're absolutely frozen, you risk problems. Even changes with high ROI (like making implicit fall-through on switch statements illegal) could make building older compilers impossible. So, whatever we do with porting dmd to D, we need to be very careful. We don't want to lock ourselves in so that we can't make changes to the language or libraries even when we really need to, but we don't want to make it too difficult to build older versions of the compiler for people who have to either. At the extreme, we could end up in a situation where you have to grab the oldest version of the compiler which was written in C++, and then build each newer version of the compiler in turn until you get to the one that you want. - Jonathan M Davis

Can't this be eased with readily available binaries and cross compilation? E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2. The minimum needed to compile 2.8.2 is 2.7.5: You can download a binary of 2.7.5 for any common system, cross compile 2.8.2 for your development system, viola! If there are binaries available for your development system, then it becomes almost trivial. Even if this wasn't possible for some reason, recursively building successive versions of the compiler is a completely automatable process. dmd+druntime+phobos compiles quickly enough that it's not a big problem.

I also don't understand the problem. This is how compilers get bootstrapped all the time. You just use toolchain X to build toolchain X+1. -- Paulo
May 11 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 2:09 PM, Jonathan M Davis wrote:
 I have to be able
 to build old haskell code without updating it,

I guess this is the crux of the matter. Why can't you update the source?
May 11 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 6:09 PM, Jonathan M Davis wrote:
 So, we might be better of restricting how much the compiler depends on - or we
 may decide that the workaround is to simply build the last C++ version of the
 compiler and then move forward from there. But I think that the issue should
 at least be raised.

Last month I tried compiling an older 15 line D utility, and 10 of those lines broke due to phobos changes. I discussed this a bit with Andrei, and proposed that we keep around aliases for the old names, and put them inside a:

    version (OldNames)
    {
        alias newname oldname;
        ....
    }

or something like that.
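For illustration, a minimal sketch of what such a compatibility module might look like. The module name and the particular renamed symbols below are examples only, not an actual list of Phobos renames:

    // Hypothetical module gathering pre-rename spellings as aliases of the
    // current Phobos names; only active when built with -version=OldNames.
    module std.oldnames;          // made-up module name

    import std.string : toLower;
    import std.math : isNaN;

    version (OldNames)
    {
        alias toLower tolower;    // old-style alias: old spelling -> current symbol
        alias isNaN   isnan;
    }

    unittest
    {
        version (OldNames)
            assert(tolower("ABC") == "abc");
    }

Legacy code could then be built with -version=OldNames to keep the old spellings visible while the rest of Phobos moves on.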
May 11 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 7:30 PM, Jonathan M Davis wrote:
 But in theory, the way to solve the problem of your program not compiling with
 the new compiler is to compile with the compiler it was developed with in the
 first place, and then if you want to upgrade your code, you upgrade your code
 and use it with the new compiler. The big problem is when you need to compile
 the compiler. You have a circular dependency due to the compiler depending on
 itself, and have to break it somehow. As long as newer compilers can compiler
 older ones, you're fine, but that's bound to fall apart at some point unless
 you freeze everything. But even bug fixes could make the old compiler not
 compile anymore, so unless the language and compiler (and anything they depend
 on) is extremely stable, you risk not being able to compile older compilers,
 and it's hard to guarantee that level of stability, especially if the compiler
 is not restricted in what features it uses or in what it uses from the
 standard library.

It isn't just compiling the older compiler, it is compiling it and verifying that it works. At least for dmd, we keep all the old binaries up and downloadable for that reason.
May 11 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-05-12 05:50, Jonathan M Davis wrote:

 That helps considerably, though if the compiler is old enough, that won't work
 for Linux due to glibc changes and whatnot.

My experience is the other way around. Binaries built on newer versions of Linux don't work on older ones, but binaries built on older versions usually work on newer versions. -- /Jacob Carlborg
May 12 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I 
 question
 whether or not it is a good idea for the D D front end to have 
 a dependency
 on phobos.   Maybe I should stop thinking in general.  :)

Yeah, the compiler can't depend on phobos.

Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David
May 11 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I 
 question
 whether or not it is a good idea for the D D front end to 
 have a dependency
 on phobos.   Maybe I should stop thinking in general.  :)

Yeah, the compiler can't depend on phobos.

Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David

It prevents the use of newer features of D in phobos.
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:
 If we keep a "must compile with several past versions" policy 
 anyway, what would make Phobos special?

 David

It prevent the use of newer feature of D in phobos.

?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David
May 11 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
 wrote:
 If we keep a "must compile with several past versions" policy 
 anyway, what would make Phobos special?

 David

It prevent the use of newer feature of D in phobos.

?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David

No, that is what has been said: you have to fork phobos and ship your own with the compiler.
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 16:27:37 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
 wrote:
 If we keep a "must compile with several past versions" 
 policy anyway, what would make Phobos special?

 David

It prevent the use of newer feature of D in phobos.

?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David

No, that is what have been said : you got to fork phobos and ship your own with the compiler.

I still don't get what your point is. To build any D application (which might be a D compiler or not), you need a D compiler on your host system. This D compiler will come with druntime, Phobos and any number of other libraries installed. Now, if the application you are building using that host compiler is DMD, you will likely use that new DMD to build a (newer) version of druntime and Phobos later on. But this doesn't have anything to do with what libraries of the host system the application can or can't use. No fork in sight anywhere. David
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
 If you decide that all later versions of the compiler must 
 compile with all
 earlier versions of phobos, then those phobos modules are 
 unable to change.

In (the rare) case of breaking changes, we could always work around them in the compiler source (depending on __VERSION__), rather than duplicating everything up-front. I believe *this* is the nice middle ground. David
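For illustration, a small compilable sketch of keying on the host compiler version. The version numbers and the renamed symbol in the trailing comment are invented, not real Phobos history:

    // __VERSION__ is an integer such as 2063L for dmd 2.063, so the compiler
    // source can adapt to the host compiler's Phobos at build time.
    import std.stdio : writeln;

    void main()
    {
        static if (__VERSION__ >= 2063)
            writeln("building with a 2.063-or-newer host compiler");
        else
            writeln("building with an older host compiler");
    }

    // In the frontend source the branches would instead paper over a Phobos
    // difference, e.g. (module and symbol names purely hypothetical):
    //
    //     static if (__VERSION__ >= 2064)
    //         import some.phobos.module : newName;
    //     else
    //         import some.phobos.module : newName = oldName;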
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are 
 we
 realistically going to need?

All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little point in translating it to D in the first place. Of course, there will be some restrictions arising from the fact that the code base needs to work with D versions from a year back or so. But to me duplicating the whole standard library inside the compiler source seems like maintenance hell. David
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:
 - Breakages in Phobos will be experienced early on a large 
 system using them

 I've talked about this with Simon Peyton-Jones who was 
 unequivocal to assert that writing the Haskell compiler in 
 Haskell has had enormous benefits in improving its quality.

This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:48:27 UTC, David Nadlinger wrote:
 […] the whole DDDMD effort […]

Whoops, must be a Freudian slip, revealing how much I'd like to see the D compiler being written in idiomatic D. ;) David
May 11 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:
 On 5/11/13 1:10 PM, Daniel Murphy wrote:
 Yes it's possible, but it seems like a really bad idea because:
 - Phobos is huge
 - Changes in phobos now have the potential to break the 
 compiler

The flipside is: - Phobos offers many amenities and opportunities for reuse - Breakages in Phobos will be experienced early on a large system using them I've talked about this with Simon Peyton-Jones who was unequivocal to assert that writing the Haskell compiler in Haskell has had enormous benefits in improving its quality.

Except that now, it is a pain to migrate old haskell stuff to newer haskell stuff if you missed several compiler releases. You end up building recursively from the native version to the version you want.

We have an implementation in C++ that works; we have to ensure that whatever port of DMD is made in D, it does work with the C++ version.
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 18:15:22 UTC, Daniel Murphy wrote:
 If we use them in the compiler, we effectively freeze them.  We 
 can't use
 the new modules, because the old toolchains don't have them.

Fair enough, but in such a case we could always add the parts of them we really need to the compiler source until the module is present in the last supported version. The critical difference of this scenario to your approach is that the extra maintenance burden is limited in time: The code is guaranteed to be removed again after (say) a year, and as Phobos stabilizes more and more, the total amount of such "compatibility" code will go down as well.
 We can't fix
 old broken modules because the compiler depends on them.

I don't see your point here:

1) The same is true for any client code out there. The only difference is that we now directly experience what any D library writer out there has to go through anyway, if they want their code to work with multiple compiler releases.

2) If a module is so broken that any "fix" would break all client code, we probably are not going to use it in the compiler anyway.
 If you add code to
 work around old modules being gone in later versions, you 
 pretty much end up
 moving the source into the compiler after all.

Yes, but how often do you think this will happen? At the current point, the barrier for such changes should be quite high anyway. The amount of D2 code in the wild is already non-negligible and growing steadily.
 If we only need to be able to compile with a version from 6 
 months ago, this
 is not a problem.  A year and it's still workable.  But two 
 years?  Three?
 We can get something right here that gcc got so horribly wrong.

Care to elaborate on that? David
May 11 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
 Except that now, it is a pain to migrate old haskell stuff to
 newer haskelle stuff if you missed several compile release.
 
 You ends up building recursively from the native version to the
 version you want.

Yeah. And I'm stuck with the opposite problem at the moment. I have to be able to build old haskell code without updating it, but I don't have an older version of ghc built currently, and getting a version old enough to compile my code has turned out to be a royal pain, because the old compiler won't compile with the new compiler. I don't even know if I'm going to be able to do it. If you're always moving forward, you're okay, but if you have to deal with older code, then you quickly run into trouble if the compiler is written in an up-to-date version of the language that it's compiling. At least at this point, if you needed something like 2.059 for some reason, you can just grab 2.059, compile it, and use it with your code. But if the compiler were written in D, and the version of D with 2.059 was not fully compatible with the current version, then compiling 2.059 would become a nightmare.

The situation between a normal program and the compiler is quite different. With a normal program, if your code isn't going to work with the current compiler due to language or library changes, then you just grab an older version of the compiler and use that (possibly upgrading your code later if you intend to maintain it long term). But if it's the compiler that you're trying to compile, then you're screwed by any language or library changes that affect the compiler, because it could very well become impossible to compile older versions of the compiler. Yes, keeping language and library changes to a minimum reduces the problem, but unless they're absolutely frozen, you risk problems. Even changes with high ROI (like making implicit fall-through on switch statements illegal) could make building older compilers impossible.

So, whatever we do with porting dmd to D, we need to be very careful. We don't want to lock ourselves in so that we can't make changes to the language or libraries even when we really need to, but we don't want to make it too difficult to build older versions of the compiler for people who have to either. At the extreme, we could end up in a situation where you have to grab the oldest version of the compiler which was written in C++, and then build each newer version of the compiler in turn until you get to the one that you want.

- Jonathan M Davis
May 11 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:
 On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
 Except that now, it is a pain to migrate old haskell stuff to
 newer haskelle stuff if you missed several compile release.
 
 You ends up building recursively from the native version to the
 version you want.

Yeah. And I'm stuck with the opposite problem at the moment. I have to be able to build old haskell code without updating it, but I don't have an older version of ghc built currently, and getting a version old enough to compile my code has turned out to be a royal pain, because the old compiler won't compile with the new compiler. I don't even know if I'm going to be able to do it. If you're always moving forward, you're okay, but if you have to deal with older code, then you quickly run into trouble if the compiler is written in an up-to-date version of the language that it's compiling. At least at this point, if you needed something like 2.059 for some reason, you can just grab 2.059, compile it, and use it with your code. But if the compiler were written in D, and the version of D with 2.059 was not fully compatible with the current version, then compiling 2.059 would become a nightmare. The situation between a normal program and the compiler is quite different. With a normal program, if your code isn't going to work with the current compiler due to language or library changes, then you just grab an older version of the compiler and use that (possibly upgrading your code later if you intend to maintain it long term). But if it's the compiler that you're trying to compile, then you're screwed by any language or library changes that affect the compiler, because it could very well become impossible to compile older versions of the compiler. Yes, keeping language and library changes to a minimum reduces the problem, but unless they're absolutely frozen, you risk problems. Even changes with high ROI (like making implicit fall-through on switch statements illegal) could make building older compilers impossible. So, whatever we do with porting dmd to D, we need to be very careful. We don't want to lock ourselves in so that we can't make changes to the language or libraries even when we really need to, but we don't want to make it too difficult to build older versions of the compiler for people who have to either. At the extreme, we could end up in a situation where you have to grab the oldest version of the compiler which was written in C++, and then build each newer version of the compiler in turn until you get to the one that you want. - Jonathan M Davis

Can't this be eased with readily available binaries and cross compilation?

E.g. we drop the C++ version in 2.7 and you want DMD version 2.8.2; the minimum needed to compile 2.8.2 is 2.7.5. You can download a binary of 2.7.5 for any common system, cross compile 2.8.2 for your development system, voila! If there are binaries available for your development system, then it becomes almost trivial.

Even if this wasn't possible for some reason, recursively building successive versions of the compiler is a completely automatable process. dmd+druntime+phobos compiles quickly enough that it's not a big problem.
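For illustration, a rough D sketch of that kind of automated chain: bootstrap each release with the one before it, starting from the last C++-buildable compiler. Every path, version number and makefile variable here is invented; it is not the actual build machinery.

    import std.process : system;
    import std.string : format;

    void main()
    {
        // Last compiler that can still be built by a plain C++ toolchain
        // (hypothetical install location).
        string host = "/opt/dmd-cxx/bin/dmd";

        foreach (v; ["2.064", "2.065", "2.066"])   // illustrative version list
        {
            // Assume each checkout's makefile accepts the previous stage's
            // compiler via a HOST_DC variable (made-up name).
            auto cmd = format("make -C dmd-%s/src -f posix.mak HOST_DC=%s", v, host);
            if (system(cmd) != 0)
                throw new Exception("bootstrap of " ~ v ~ " failed");

            host = format("dmd-%s/src/dmd", v);    // next stage builds with this one
        }
    }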
May 11 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 11, 2013 6:35 PM, "David Nadlinger" <see klickverbot.at> wrote:
 On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are we
 realistically going to need?

 All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little point in translating it to D in the first place.

 Of course, there will be some restrictions arising from the fact that the code base needs to work with D versions from a year back or so. But to me duplicating the whole standard library inside the compiler source seems like maintenance hell.
 David

I don't think it would be anything in the slightest at all. For instance, Bigint implementation is big, BIG. :) What would be ported to the compiler may be influenced by BigInt, but would be a limited subset of its functionality tweaked for the purpose of use in the front end. I am more concerned from GDC's perspective of things. Especially when it comes to building from hosts that may have phobos disabled (this is a configure switch). Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
May 11 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 23:43:19 John Colvin wrote:
 Can't this be eased with readily available binaries and cross
 compilation?
 
 E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2.
 The minimum needed to compile 2.8.2 is 2.7.5:
 
 You can download a binary of 2.7.5 for any common system, cross
 compile 2.8.2 for your development system, viola! If there are
 binaries available for your development system, then it becomes
 almost trivial.

Sure, but that assumes that you have access to a compatible binary. That's not always easy, and it can be particularly nasty in *nix. A binary built a few years ago stands a good chance of being completely incompatible with current systems even if all it depends on is glibc, let alone every other dependency that might have changed. It's even harder when your language is not one included by default in distros. For Windows, this probably wouldn't be an issue, but it could be a big one for *nix systems.
 Even if this wasn't possible for some reason, recursively
 building successive versions of the compiler is a completely
 automatable process. dmd+druntime+phobos compiles quickly enough
 that it's not a big problem.

Sure, assuming that you can get an old enough version of the compiler which you can actually compile. It's by no means an insurmountable problem, but you _do_ very much risk being in a situation where you literally have to compile the last C++ version of D's compiler and then compile every version of the compiler since then until you get to the one you want. And anyone who doesn't know that they could go to an older compiler which was in C++ (let alone which version it was) is going to have a lot of trouble. I don't know how much we want to worry about this, but it's very much a real world problem when you don't have a binary for an older version of the compiler that you need, and the current compiler can't build it. It's been costing me a lot of time trying to sort that out in Haskell thanks to the shift from the 98 standard to 2010. - Jonathan M Davis
May 11 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 17:51:24 Walter Bright wrote:
 On 5/11/2013 2:09 PM, Jonathan M Davis wrote:
 I have to be able
 to build old haskell code without updating it,

I guess this is the crux of the matter. Why can't you update the source?

Well, in this particular case, it has to do with work on my master's thesis, and I have the code in various stages of completion and need to be able to look at exactly what it was doing at each of those stages for writing the actual paper. Messing with the code risks changing what it does, and it wasn't necessarily in a great state anyway given that I'm basically dealing with snapshots of the code over time, and not all of the snapshots are necessarily fully functional.

In the normal case, I'd definitely want to update my code, but I still might need to get the old code working before doing that so that I can be sure of how it works before changing it. Obviously, things like solid unit testing help with that, but if you're dealing with code that hasn't been updated in a while, it's not necessarily a straightforward task to update it, especially when it's in a language that you're less familiar with. It's even worse if it's code written by someone else entirely, and you're just trying to get it working (which isn't my current situation, but that's often the case when building old code).

Ultimately, I don't know how much we need to care about situations where people need to compile an old version of the compiler, and all they have is the new compiler. Much as it's been causing me quite a bit of grief in haskell, for the vast majority of people, it's not likely to come up. But I think that it at least needs to be brought up so that it can be considered when deciding what we're doing with regards to porting the front-end to D. I think that the main reason that C++ avoids the problem is that it's so rarely updated (which causes a whole different set of problems). And while we obviously want to minimize breakage caused by changes to the library, language, or just due to bugs, they _are_ going to have an effect with regards to building older compilers if the compiler itself is affected by them. So, we might be better off restricting how much the compiler depends on - or we may decide that the workaround is to simply build the last C++ version of the compiler and then move forward from there. But I think that the issue should at least be raised.

- Jonathan M Davis
May 11 2013
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 18:18:27 Walter Bright wrote:
 On 5/11/2013 6:09 PM, Jonathan M Davis wrote:
 So, we might be better of restricting how much the compiler depends on -
 or we may decide that the workaround is to simply build the last C++
 version of the compiler and then move forward from there. But I think
 that the issue should at least be raised.

Last month I tried compiling an older 15 line D utility, and 10 of those lines broke due to phobos changes. I discussed this a bit with Andrei, and proposed that we keep around aliases for the old names, and put them inside a: version (OldNames) { alias newname oldname; .... } or something like that.

Well, that particular problem should be less of an issue in the long run. We renamed a lot of stuff in an effort to make the naming more consistent, but we haven't been doing much of that for a while now. And fortunately, those changes are obvious and quick.

But in theory, the way to solve the problem of your program not compiling with the new compiler is to compile with the compiler it was developed with in the first place, and then if you want to upgrade your code, you upgrade your code and use it with the new compiler. The big problem is when you need to compile the compiler. You have a circular dependency due to the compiler depending on itself, and have to break it somehow. As long as newer compilers can compile older ones, you're fine, but that's bound to fall apart at some point unless you freeze everything. But even bug fixes could make the old compiler not compile anymore, so unless the language and compiler (and anything they depend on) is extremely stable, you risk not being able to compile older compilers, and it's hard to guarantee that level of stability, especially if the compiler is not restricted in what features it uses or in what it uses from the standard library.

- Jonathan M Davis
May 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote in message 
news:mailman.1222.1368325870.4724.digitalmars-d puremagic.com...
 The big problem is when you need to compile
 the compiler. You have a circular dependency due to the compiler depending 
 on
 itself, and have to break it somehow. As long as newer compilers can 
 compiler
 older ones, you're fine, but that's bound to fall apart at some point 
 unless
 you freeze everything. But even bug fixes could make the old compiler not
 compile anymore, so unless the language and compiler (and anything they 
 depend
 on) is extremely stable, you risk not being able to compile older 
 compilers,
 and it's hard to guarantee that level of stability, especially if the 
 compiler
 is not restricted in what features it uses or in what it uses from the
 standard library.

 - Jonathan M Davis

My thought was that you ensure (for the foreseeable future) that all D versions of the compiler compile with the most recent C++ version of the compiler.
May 11 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 10:25 PM, Daniel Murphy wrote:
 My thought was that you ensure (for the foreseeable future) that all D
 versions of the compiler compile with the most recent C++ version of the
 compiler.

That would likely mean that the D compiler sources must be compilable with 2.063.
May 12 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:kmnk08$3qr$1 digitalmars.com...
 On 5/11/2013 10:25 PM, Daniel Murphy wrote:
 My thought was that you ensure (for the foreseeable future) that all D
 versions of the compiler compile with the most recent C++ version of the
 compiler.

That would likely mean the the D compiler sources must be compilable with 2.063.

Yes. And anybody with a C++ compiler can build the latest release.
May 12 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 19:56:00 Walter Bright wrote:
 At least for dmd, we keep all the old binaries up and downloadable for that
 reason.

That helps considerably, though if the compiler is old enough, that won't work for Linux due to glibc changes and whatnot.

I expect that my particular situation is quite abnormal, but I thought that it was worth raising the point that if your compiler has to compile itself, then changes to the language (and anything else the compiler depends on) can be that much more costly, so it may be worth minimizing what the compiler depends on (as Daniel is suggesting). As we increase our stability, the likelihood of problems will be less, but we'll probably never eliminate them.

Haskell's case is as bad as it is because they released a new standard for it and did it in a way that it doesn't necessarily work to build the old one anymore (and if it does, it tends to be a pain). It would be akin to if dmd were building itself when we went from D1 to D2, and the new compiler could only compile D1 when certain flags were used, and those flags were overly complicated to boot. So, it's much worse than simply going from one version of the compiler to the next.

- Jonathan M Davis
May 11 2013
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Sat, 11 May 2013 23:51:36 +0100
schrieb Iain Buclaw <ibuclaw ubuntu.com>:

 
 I am more concerned from GDC's perspective of things.  Especially
 when it comes to building from hosts that may have phobos disabled
 (this is a configure switch).
 

Indeed. Right now we can compile and run GDC on every system which has a C++ compiler. We can compile D code on all those platforms even if we don't have druntime or phobos support there.

Using phobos means that we would always need a complete & working phobos port (at least some GC work, platform specific headers, TLS, ...) on the host machine, even if we:

* Only want to compile D code which doesn't use phobos / druntime at all.
* Create a compiler which runs on A but generates code for B. Now we also need a working phobos port on A. (Think of a sh4 -> x86 cross compiler. This works now, it won't work when the frontend has been ported to D / phobos.)

(I do understand why it would be nice to use phobos though. Hacking some include path code right now I wish I could use std.path...)
May 12 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 10:39, Jacob Carlborg <doob me.com> wrote:

 On 2013-05-12 05:50, Jonathan M Davis wrote:

  That helps considerably, though if the compiler is old enough, that won't
 work
 for Linux due to glibc changes and whatnot.

My experience is the other way around. Binaries built on newer version of Linux doesn't work on older. But binaries built on older versions usually works on newer versions. -- /Jacob Carlborg

Depends... statically linked binaries will probably always work on the latest version, dynamic link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
prev sibling next sibling parent "w0rp" <devw0rp gmail.com> writes:
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
 Depends... statically linked binaries will probably always work 
 on the
 latest version, dynamic link and then you've got yourself a 
 'this
 libstdc++v5 doesn't exist anymore' problem.

I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.
May 12 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 11:08, w0rp <devw0rp gmail.com> wrote:

 On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:

 Depends... statically linked binaries will probably always work on the
 latest version, dynamic link and then you've got yourself a 'this
 libstdc++v5 doesn't exist anymore' problem.

 I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.

:D The only reason you'd have for that post-it note is if you were running some application that you; built yourself, obtained from a third party vendor, general other or not part of the distributions repository. For instance, I've had some linux ports of games break on me once after an upgrade. And I've even got a company gcc that does not work on Debian/Ubuntu. There's nothing wrong with binary compatibility, just that they implemented a multi-arch directory structure, so everything is in a different place to what the vanilla gcc expects. ;) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
 On 12 May 2013 10:39, Jacob Carlborg <doob me.com> wrote:

 On 2013-05-12 05:50, Jonathan M Davis wrote:

  That helps considerably, though if the compiler is old 
 enough, that won't
 work
 for Linux due to glibc changes and whatnot.

My experience is the other way around. Binaries built on newer version of Linux doesn't work on older. But binaries built on older versions usually works on newer versions. -- /Jacob Carlborg

Depends... statically linked binaries will probably always work on the latest version, dynamic link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.

So surely we can just offer a full history of statically linked binaries, problem solved?
May 12 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 11:39, John Colvin <john.loughran.colvin gmail.com> wrote:

 On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:

 On 12 May 2013 10:39, Jacob Carlborg <doob me.com> wrote:

  On 2013-05-12 05:50, Jonathan M Davis wrote:
  That helps considerably, though if the compiler is old enough, that won't
  work for Linux due to glibc changes and whatnot.

My experience is the other way around. Binaries built on newer version of Linux doesn't work on older. But binaries built on older versions usually works on newer versions. -- /Jacob Carlborg

 Depends... statically linked binaries will probably always work on the latest version, dynamic link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.

So surely we can just offer a full history of statically linked binaries, problem solved?

The historical quirk of binary compatibility on Linux is OT to the problem I questioned, so no. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
prev sibling next sibling parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 11 May 2013 at 15:09:24 UTC, Iain Buclaw wrote:
 Actually, the more I sit down and think about it, the more I 
 question
 whether or not it is a good idea for the D D front end to have 
 a dependency
 on phobos.   Maybe I should stop thinking in general.  :)

 Regards

Let me restate the issues to be clear on what I think is being said, and then give my opinion.

== On GDC:

There is a flag to have the compiler built without dependencies on druntime/phobos. Someone interested in a Phobos-free compiler would then be required to have Phobos to build their compiler.

- While this is the same person, I don't see that they will require the same restriction when building the compiler. My guess is the environment used to build the compiler has fewer restrictions, such as having gcc/ubuntu available. Thus it is reasonable to expect them to have the needed libraries to build their compiler.

- Similarly, even if we restrict to just using druntime, the one interested in a druntime-free compiler still runs into the issue.

== On Compiling older Compilers:

Check out the compiler source for an older compiler and gcc will build it. By switching to D, we must not only locate the source for the compiler we are building, but also have the version of D used to build that compiler (or one within some window).

- I think it would be positive to say that each dmd version compiles with the previous release and itself (possibly with -d). This gives a feel for what changes are happening, and the more Phobos used the better.

- We can't eliminate the problem; if we only rely on druntime, everything still applies there. Instead we just need a consistent and/or well documented statement of which compiler versions compile which compiler versions.

In conclusion, it is a real problem, but it is nothing we can eliminate. We should look at reducing the impact not through reducing the dependency, but instead through improvement of our processes for introducing breaking changes. Such work will not be limited to benefiting DMD; it will help every project which must deal with older code in some fashion.
May 13 2013
prev sibling parent "QAston" <qaston gmail.com> writes:
On Thursday, 9 May 2013 at 10:15:42 UTC, Iain Buclaw wrote:
 I'll will very likely keep a branch with the C++ implemented 
 front end for
 these purposes. But ideally we should get porting as soon as 
 possible ahead
 of this move so that there are already D compilers available 
 for said
 targets.

 Though it would be nice for the D implementation to be kept to 
 a subset
 that is backwards compatible with 2.062 (or whatever version we 
 decide to
 make the switch at), that is something I cannot guarantee.


 Regards

Could compiling the D compiler (written in D) to LLVM bitcode on a working platform, and then compiling that bitcode on the target platform, solve the issue (at least part of it)?
May 20 2013