
digitalmars.D - LLVM IR influence on compiler debugging

reply "bearophile" <bearophileHUGS lycos.com> writes:
This is a very easy to read article about the design of LLVM:
http://www.drdobbs.com/architecture-and-design/the-design-of-llvm/240001128

It explains what the IR is:

The most important aspect of its design is the LLVM Intermediate 
Representation (IR), which is the form it uses to represent code 
in the compiler. LLVM IR [...] is itself defined as a first 
class language with well-defined semantics.

In particular, LLVM IR is both well specified and the only 
interface to the optimizer. This property means that all you 
need to know to write a front end for LLVM is what LLVM IR is, 
how it works, and the invariants it expects. Since LLVM IR has a 
first-class textual form, it is both possible and reasonable to 
build a front end that outputs LLVM IR as text, then uses UNIX 
pipes to send it through the optimizer sequence and code 
generator of your choice. It might be surprising, but this is 
actually a pretty novel property to LLVM and one of the major 
reasons for its success in a broad range of different 
applications. Even the widely successful and relatively 
well-architected GCC compiler does not have this property: its 
GIMPLE mid-level representation is not a self-contained 
representation.
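The pipeline described there can be tried directly from a shell. A minimal sketch, assuming an LLVM toolchain is installed (the tool names are standard, but exact flag and pass spellings vary across LLVM versions):

```sh
# Front end emits LLVM IR as plain text (hello.c is any C source file).
clang -O0 -S -emit-llvm hello.c -o hello.ll

# Because the IR is text, ordinary UNIX pipes work: send it through
# the optimizer, then through the code generator of your choice.
opt -S -O2 < hello.ll | llc -o hello.s
```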

That IR has a great effect on making it simpler to debug the compiler. I think this is important (and I think it partially explains why Clang was created so quickly):
Compilers are very complicated, and quality is important, 
therefore testing is critical. For example, after fixing a bug 
that caused a crash in an optimizer, a regression test should be 
added to make sure it doesn't happen again. The traditional 
approach to testing this is to write a .c file (for example) 
that is run through the compiler, and to have a test harness 
that verifies that the compiler doesn't crash. This is the 
approach used by the GCC test suite, for example. The problem 
with this approach is that the compiler consists of many 
different subsystems and even many different passes in the 
optimizer, all of which have the opportunity to change what the 
input code looks like by the time it gets to the previously 
buggy code in question. If something changes in the front end or 
an earlier optimizer, a test case can easily fail to test what 
it is supposed to be testing. By using the textual form of LLVM 
IR with the modular optimizer, the LLVM test suite has highly 
focused regression tests that can load LLVM IR from disk, run it 
through exactly one optimization pass, and verify the expected 
behavior. Beyond crashing, a more complicated behavioral test 
wants to verify that an optimization is actually performed. 
[...] While this might seem like a really trivial example, this 
is very difficult to test by writing .c files: front ends often 
do constant folding as they parse, so it is very difficult and 
fragile to write code that makes its way downstream to a 
constant folding optimization pass. Because we can load LLVM IR 
as text and send it through the specific optimization pass we're 
interested in, then dump out the result as another text file, it 
is really straightforward to test exactly what we want, both for 
regression and feature tests.
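The kind of focused test described above looks roughly like this in practice. This is a sketch, not a file from the actual LLVM test suite (the real suite drives such files with lit and FileCheck; the pass name and check are illustrative):

```llvm
; constant-fold.ll -- feed exactly one optimization pass, then verify
; the expected transformation on the output.
; RUN: opt -S -passes=instcombine %s | FileCheck %s

define i32 @test() {
  ; CHECK: ret i32 5
  %r = add i32 2, 3
  ret i32 %r
}
```

Because the input is already IR, no front end sits between the test and the pass under test, which is exactly the point the article makes about constant folding in parsers.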

Bye, bearophile
Jun 28 2012
next sibling parent Sönke Ludwig <sludwig outerproduct.org> writes:
I implemented a compiler back end with LLVM some time ago. The IR helped 
a lot, both in spotting errors in IR codegen and in finding issues with 
target codegen (e.g. because of some misconfiguration). You always have 
the high-level IR available as text, and the unoptimized target assembly 
is usually pretty similar to the IR code, so it provides a great guide 
for deciphering the assembly.

Also, the fact that you can dump a module as IR code, modify it, and try 
certain things is really useful sometimes.
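That dump-edit-reload cycle can be sketched like this (illustrative commands, assuming the LLVM tools are on PATH and a bitcode module is at hand):

```sh
# Dump a compiled module from bitcode to its textual IR form.
llvm-dis module.bc -o module.ll

# Edit module.ll by hand to try something, then reassemble it and
# run it through the back end to see the effect on the output.
llvm-as module.ll -o patched.bc
llc patched.bc -o patched.s
```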
Jun 29 2012
prev sibling next sibling parent reply Don Clugston <dac nospam.com> writes:
On 29/06/12 08:04, bearophile wrote:
 This is a very easy to read article about the design of LLVM:
 http://www.drdobbs.com/architecture-and-design/the-design-of-llvm/240001128

 That IR has a great effect on making it simpler to debug the compiler, I
 think this is important (and I think it partially explains why Clang was
 created so quickly):

It's a good design, especially for optimisation tests. Although I can't see an immediate application of this for D. DMD's backend is nearly bug-free. (By which I mean, it has 100X fewer bugs than the front-end).
Jun 29 2012
next sibling parent Kai Nacke <kai redstar.de> writes:
On 29.06.2012 11:27, Don Clugston wrote:
 It's a good design, especially for optimisation tests. Although I can't
 see an immediate application of this for D.

LDC (https://github.com/ldc-developers/ldc/) uses LLVM. Kai
Jul 06 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/6/2012 4:50 PM, Adam Wilson wrote:
 My guess is that, unless something changes significantly, DMD will remain a
 niche tool; useful as a reference/research compiler, but for actual work people
 will use LDC or GDC.

A more diverse ecosystem that supports D is only for the better.
Jul 06 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/6/2012 9:39 PM, Adam Wilson wrote:
 If this is what you want then I can be fine with it too. I just wanted to make
 my position clear. This also means that use cases are going to need to be
 clarified and a clear story crafted around the pros and cons of each compiler
 to help us make a decision about which option is best for our needs. I was able
 to reach my conclusions, but only after months of immersion into the community.
 Needless to say, most people, open-source or commercial, won't spend that kind
 of time...

 In short, more promotion of the options on dlang.org.

No matter what I write about the pros and cons, I will be accused of bias and there will be hard feelings. So I prefer that individuals make up their own minds.
Jul 06 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-07 01:50, Adam Wilson wrote:

 My guess is that, unless something changes significantly, DMD will
 remain a niche tool; useful as a reference/research compiler, but for
 actual work people will use LDC or GDC.

One thing I really like about DMD is that it's really fast at compiling. It's also a lot faster to compile DMD than LDC/GDC, especially if you need to compile the backends.
 At the moment, the ONLY reasons I use DMD are to test my changes to the
 compiler and that LLVM doesn't yet support SEH. As soon as LDC supports
 SEH, and it will (I hear 3.2 will), I will move all my work to LDC. So
 what if it's a version or two behind, it has superior code generation
 and better Windows support (COFF/x64 anybody?).

That is being worked on: https://github.com/D-Programming-Language/dmd/commit/2511126cd7a234797e8b32515e419ce4f84ca928 -- /Jacob Carlborg
Jul 07 2012
parent reply Alex Rønne Petersen <alex lycus.org> writes:
On 07-07-2012 12:45, Jacob Carlborg wrote:
 On 2012-07-07 01:50, Adam Wilson wrote:

 My guess is that, unless something changes significantly, DMD will
 remain a niche tool; useful as a reference/research compiler, but for
 actual work people will use LDC or GDC.

One thing I really like about DMD is that it's really fast at compiling. It's also a lot faster to compile DMD than LDC/GDC, especially if you need to compile the backends.

True, but then again, DMD only targets /one/ architecture, while e.g. LLVM targets lots. On a high-end 4-core x86, building LLVM and LDC can usually be done in less than an hour, even when building them in optimized mode. Plus, you usually don't need to recompile LLVM anyway, only LDC.
 At the moment, the ONLY reasons I use DMD are to test my changes to the
 compiler and that LLVM doesn't yet support SEH. As soon as LDC supports
 SEH, and it will (I hear 3.2 will), I will move all my work to LDC. So
 what if it's a version or two behind, it has superior code generation
 and better Windows support (COFF/x64 anybody?).

That is being worked on: https://github.com/D-Programming-Language/dmd/commit/2511126cd7a234797e8b32515e419ce4f84ca928

I just hope this will mean we can use the Microsoft linker... -- Alex Rønne Petersen alex lycus.org http://lycus.org
Jul 07 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

Building dmd on my Windows box takes 26 seconds, optimized, using a single core.
Jul 07 2012
next sibling parent reply Alex Rønne Petersen <alex lycus.org> writes:
On 07-07-2012 20:48, Walter Bright wrote:
 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

Building dmd on my Windows box takes 26 seconds, optimized, using a single core.

Right, it's even faster for me on Linux. Keep in mind, though, that LLVM is usually a "build once, then link to/use" thing. Building LDC itself is just building DMD + the glue layer (excluding druntime and phobos), which is relatively fast. By the way, is it planned that DMD will be able to use Microsoft's linker when compiling with COFF? Or is it too early to say at this point? (It would simplify a lot of things; particularly, integration with MSVC projects. Further, Optlink's command line is really unfriendly and hard to integrate in most build systems.) -- Alex Rønne Petersen alex lycus.org http://lycus.org
Jul 07 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 11:59 AM, Alex Rønne Petersen wrote:
 By the way, is it planned that DMD will be able to use Microsoft's linker when
 compiling with COFF?

Yes, barring some horrible obstacle.
Jul 07 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 4:08 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 11:48:44 -0700, Walter Bright <newshound2 digitalmars.com>
 wrote:

 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

Building dmd on my Windows box takes 26 seconds, optimized, using a single core.

Build speed of the compiler itself is an utterly trivial matter; my primary concern is speed for the end-user. Even the build speed/memory usage of my projects is not a problem; I can always throw more money at hardware. For example, I am considering making the next round of developer box updates to Intel Xeon E1650's with 32GB RAM. Gentlemen, from a business perspective, compiler and/or project build times are the least of your problems. How well the code performs and, most importantly, the accuracy of the code generation are of key concern.

Throwing more hardware at a problem isn't going to get you a 120x increase in speed. While you're right that the customer cares not how long it takes to build the compiler, the speed is important for the edit-compile-debug loop of developing the compiler. For me, it matters quite a bit.
Jul 07 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 4:28 PM, Adam Wilson wrote:
 I imagine that it does, and honestly, I am not terribly concerned if DMD stays
 with its current backend because once LLVM gets SEH, I'm gone. But I do wonder
 if DMD will become increasingly irrelevant as backends like GCC and LLVM
 advance. And I am particularly troubled by what seems like a duplication of
 effort in the face of more widely tested backends...

Different implementations will have their different strengths and weaknesses, and also competition between them is good. I'm very pleased that we have 3 strong implementations.
 All that said, I understand the legal predicament. You can't do anything about
 it and I'm not trying to convince you too. I just want to see more promotion
and
 support of the other options available.

I think we'd all be better off if more involved here would get more active in promotion, rather than waiting for me to do it. The whole "better mouse trap" thing is baloney. Promotion is necessary, even if you've got a great product. Even Apple has a huge promotion budget.
Jul 07 2012
parent Alex Rønne Petersen <alex lycus.org> writes:
On 08-07-2012 01:57, Jonathan M Davis wrote:
 On Saturday, July 07, 2012 16:52:25 Adam Wilson wrote:
 Agreed, but not many people have push rights to the website, which is
 where I would start.

The lack of commit rights to d-programming-language.org doesn't stop you from submitting pull requests. It just stops you from putting your edits directly on the site without anyone else looking at them first. Granted, pull requests for d-programming-language.org aren't always handled quickly, but how quickly your changes get merged doesn't really affect your ability to make the changes in the first place. - Jonathan M Davis

No, but it does affect how long it takes to make them show up on the page that we present to users of/newcomers to D. The slow pull request review/accept/reject time can be very demotivating at times. -- Alex Rønne Petersen alex lycus.org http://lycus.org
Jul 07 2012
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/08/2012 01:28 AM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:15:11 -0700, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 7/7/2012 4:08 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 11:48:44 -0700, Walter Bright
 <newshound2 digitalmars.com>
 wrote:

 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

Building dmd on my Windows box takes 26 seconds, optimized, using a single core.

Build speed of the compiler itself is an utterly trivial matter; my primary concern is speed for the end-user. Even the build speed/memory usage of my projects is not a problem; I can always throw more money at hardware. For example, I am considering making the next round of developer box updates to Intel Xeon E1650's with 32GB RAM. Gentlemen, from a business perspective, compiler and/or project build times are the least of your problems. How well the code performs and, most importantly, the accuracy of the code generation are of key concern.

Throwing more hardware at a problem isn't going to get you a 120x increase in speed.

I won't argue that, but again, that's not a primary concern. :-)
 While you're right that the customer cares not how long it takes to
 build the compiler, the speed is important for the edit-compile-debug
 loop of developing the compiler. For me, it matters quite a bit.

I imagine that it does, and honestly, I am not terribly concerned if DMD stays with its current backend because once LLVM gets SEH, I'm gone. But I do wonder if DMD will become increasingly irrelevant as backends like GCC and LLVM advance. And I am particularly troubled by what seems like a duplication of effort in the face of more widely tested backends...

The DMD backend is very fast in comparison to other backends. LLVM is unlikely to catch up in speed, because it is well-architected and more general.
 All that said, I understand the legal predicament. You can't do anything
 about it and I'm not trying to convince you too. I just want to see more
 promotion and support of the other options available.

Jul 07 2012
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/08/2012 01:54 AM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:38:27 -0700, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 07/08/2012 01:28 AM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:15:11 -0700, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 7/7/2012 4:08 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 11:48:44 -0700, Walter Bright
 <newshound2 digitalmars.com>
 wrote:

 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized
 mode.

Building dmd on my Windows box takes 26 seconds, optimized, using a single core.

Build speed of the compiler itself is an utterly trivial matter; my primary concern is speed for the end-user. Even the build speed/memory usage of my projects is not a problem; I can always throw more money at hardware. For example, I am considering making the next round of developer box updates to Intel Xeon E1650's with 32GB RAM. Gentlemen, from a business perspective, compiler and/or project build times are the least of your problems. How well the code performs and, most importantly, the accuracy of the code generation are of key concern.

Throwing more hardware at a problem isn't going to get you a 120x increase in speed.

I won't argue that, but again, that's not a primary concern. :-)
 While you're right that the customer cares not how long it takes to
 build the compiler, the speed is important for the edit-compile-debug
 loop of developing the compiler. For me, it matters quite a bit.

I imagine that it does, and honestly, I am not terribly concerned if DMD stays with its current backend because once LLVM gets SEH, I'm gone. But I do wonder if DMD will become increasingly irrelevant as backends like GCC and LLVM advance. And I am particularly troubled by what seems like a duplication of effort in the face of more widely tested backends...

The DMD backend is very fast in comparison to other backends. LLVM is unlikely to catch up in speed, because it is well-architected and more general.

Oh, I agree that it is, but as I've been saying, raw compiler speed is rarely an important factor outside of small circles of developers; if it were, businesses would have given up on C++ LONG ago. It's nice to have, but the business case for it is comparatively weak.

'raw compiler speed is rarely the most important factor' does not necessarily imply 'raw compiler speed is rarely an important factor'.
Jul 07 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei
Jul 07 2012
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/12 11:26 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

Compilation is a huge bottleneck for any major C++ code base, and adding hardware (distributing compilation etc) is survival, but definitely doesn't scale to make the problem negligible. In contrast, programmers have considerable control about generating fast code. Andrei
Jul 07 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 9:16 PM, Adam Wilson wrote:
 I still see pretty heinous backend problems crop up in
 the bug reports for DMD.

Come on, it's pretty stable. Do you watch the bug reports for gcc? I remember a guy recently ran some exhaustive code gen tests over C compilers, and dmc (the same back end as dmd) was the only one that did them correctly. http://news.ycombinator.com/item?id=4131508
Jul 07 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-08 06:16, Adam Wilson wrote:

 As to compile speed, is LDC really *THAT* much slower than DMD so as to
 cause C++ style speed issues? I thought one of the whole points of D is
 that it doesn't need the epic numbers of passes and preprocessor that
 C++ does precisely because that's what slows down C++ so much...

No, LDC is still faster than C++ but slower than DMD. It's not the frontend that is the problem, it's the backend. -- /Jacob Carlborg
Jul 08 2012
prev sibling parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 7/7/2012 11:05 PM, Andrei Alexandrescu wrote:
 Compilation is a huge bottleneck for any major C++ code base, and adding
 hardware (distributing compilation etc) is survival, but definitely
 doesn't scale to make the problem negligible.

 In contrast, programmers have considerable control about generating fast
 code.

Our bottleneck with a large C++ codebase (Unreal Engine based game) is linking. Granted we have beefy workstations (HP Z800 with dual quad or hex core xeons and hyperthreading), but a full build+link is 4-5 min, and a single change+link is over 2 min. You can also speed up C++ compiling by merging a bunch of the .cpp files together (google "unity c++ build"), though if you go too crazy you will learn compilers eventually do explode when fed 5-10 megs of source code per translation unit heh.
Jul 08 2012
prev sibling parent Alex Rønne Petersen <alex lycus.org> writes:
On 08-07-2012 06:44, Adam Wilson wrote:
 On Sat, 07 Jul 2012 21:13:35 -0700, Jonathan M Davis
 <jmdavisProg gmx.com> wrote:

 On Saturday, July 07, 2012 20:26:56 Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu

 <SeeWebsiteForEmail erdani.org> wrote:
 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

Well, considering that the general trend over the last ten years has been to move to languages which focus on programmer productivity (including compilation speed) over those which focus on speed of execution, there's a definite argument that programmers generally prefer stuff that makes programming easier and faster over stuff that makes the program faster. There are obviously exceptions, and there are some signs of things shifting (due to mobile and whatnot), but that's the way that things have been trending for over a decade. - Jonathan M Davis

I won't argue with this at all; I use C#, after all. But there we shuffle the "backend" off to the JIT, so compilation is really more a translation to IR. IIRC this is how most of the popular productivity languages did it (Java, .NET, etc.). It'd be an interesting research project to modify LDC to output IR only and run the IR later on an LLVM-based VM and then see what kind of compile times you get...

It would be kind of useless in practice, unfortunately. LLVM IR is very unsuited for VM use: http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/043719.html -- Alex Rønne Petersen alex lycus.org http://lycus.org
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 29 Jun 2012 02:27:19 -0700, Don Clugston <dac nospam.com> wrote:

 On 29/06/12 08:04, bearophile wrote:
 This is a very easy to read article about the design of LLVM:
 http://www.drdobbs.com/architecture-and-design/the-design-of-llvm/240001128

 That IR has a great effect on making it simpler to debug the compiler, I
 think this is important (and I think it partially explains why Clang was
 created so quickly):

It's a good design, especially for optimisation tests. Although I can't see an immediate application of this for D. DMD's backend is nearly bug-free. (By which I mean, it has 100X fewer bugs than the front-end).

Sure, but LLVM is just as bug-free and spanks the current DMD backend in perf tests. Just because something is well tested and understood doesn't automatically make it superior. Also worth considering is that moving to LLVM would neatly solve an incredible number of sticky points with the current backend, not the least of which is its license. And let's not even talk about the automatic multi-arch support we'd get. My guess is that, unless something changes significantly, DMD will remain a niche tool; useful as a reference/research compiler, but for actual work people will use LDC or GDC. At the moment, the ONLY reasons I use DMD are to test my changes to the compiler and that LLVM doesn't yet support SEH. As soon as LDC supports SEH, and it will (I hear 3.2 will), I will move all my work to LDC. So what if it's a version or two behind, it has superior code generation and better Windows support (COFF/x64 anybody?). -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 06 2012
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Adam Wilson:

 moving to LLVM would neatly solve an incredible number of 
 sticky points with the current backend,

I remember some small limits in the LLVM back-end, like not being able to use zero bits to implement fixed-size zero-length arrays. And something regarding gotos in inline asm. I don't know if those little limits have been removed by now.
 My guess is that, unless something changes significantly, DMD 
 will remain a niche tool; useful as a reference/research 
 compiler, but for actual work people will use LDC or GDC.

The D reference compiler can't be DMD forever.
 At the moment, the ONLY reasons I use DMD are to test my 
 changes to the compiler and that LLVM doesn't yet support SEH. 
 As soon as LDC supports SEH, and it will (I hear 3.2 will),

Is LDC2 going to work on 32-bit Windows too? Bye, bearophile
Jul 06 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, July 07, 2012 02:10:49 bearophile wrote:
 My guess is that, unless something changes significantly, DMD
 will remain a niche tool; useful as a reference/research
 compiler, but for actual work people will use LDC or GDC.

The D reference compiler can't be DMD forever.

Why not? Having multiple compilers is great, but I seriously doubt that Walter is going to work on any other compiler (I don't believe that he _can_ legally work on any other - except maybe if he writes a new one himself - because he'd get into licensing issues with dmc), and unless you're talking about years (decades?) from now, I very much doubt that the reference compiler is going to be a compiler that Walter Bright can't work on. I see no problem with dmd being the reference compiler and continuing to be so. And if other compilers get used more because their backends are faster, that's fine too. - Jonathan M Davis
Jul 06 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-07 03:17, Jonathan M Davis wrote:

 Walter refuses to look at the code for any other compiler. He has been well
 served in the past by being able to say that he has never looked at the code
 of another compiler when the lawyers come knocking. So, as I understand it,
 anything that would require him to even _look_ at the backend's code, let
 alone work on it, would make it so he won't do it. And I very much doubt that
 he'd want to work on a compiler where he can't work on the backend (plus, I
 would assume that you'd have to look at the backend to work on the glue code,
 so he'd be restricted entirely to the frontend-specific portions of the
 compiler).

Theoretically you should be able to just look at the documentation, but I understand what you mean. -- /Jacob Carlborg
Jul 07 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2012 3:46 AM, Jacob Carlborg wrote:
 Theoretically you should be able to just look at the documentation

HAHAHAHAHAHAHAHAHAHAHAAAAA!!!
Jul 07 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-07 20:49, Walter Bright wrote:
 On 7/7/2012 3:46 AM, Jacob Carlborg wrote:
 Theoretically you should be able to just look at the documentation

HAHAHAHAHAHAHAHAHAHAHAAAAA!!!

Yeah, I know how you feel about documentation. -- /Jacob Carlborg
Jul 08 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-07-09 22:43, Simen Kjaeraas wrote:

 You mean there are actually people out there who believe documentation
 can be correct, not to mention understandable, comprehensive and giving
 the information you need?

You do know there are closed source libraries where you don't have an option. -- /Jacob Carlborg
Jul 09 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 06 Jul 2012 17:59:36 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Saturday, July 07, 2012 02:10:49 bearophile wrote:
 My guess is that, unless something changes significantly, DMD
 will remain a niche tool; useful as a reference/research
 compiler, but for actual work people will use LDC or GDC.

The D reference compiler can't be DMD forever.

Why not? Having multiple compilers is great, but I seriously doubt that Walter is going to work on any other compiler (I don't believe that he _can_ legally work on any other - except maybe if he writes a new one himself - because he'd get into licensing issues with dmc), and unless you're talking about years (decades?) from now, I very much doubt that the reference compiler is going to be a compiler that Walter Bright can't work on. I see no problem with dmd being the reference compiler and continuing to be so. And if other compilers get used more because their backends are faster, that's fine too. - Jonathan M Davis

Walter can't use LLVM? Why not? He wouldn't have to work on LLVM, and the glue code is considered front-end. I admit I am not terribly well informed of the legal issues here. But it seems to me that bolting the DMDFE onto a different back-end can't be a problem because the agreement only covers the DMCBE, and the DMDFE is 100% Walter owned; he can do with it what he pleases and all Symantec can do is pout. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 06 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 06, 2012 18:07:54 Adam Wilson wrote:
 Walter can't use LLVM? Why not? He wouldn't have to work on LLVM and the
 glue code is considered front-end. I admit I am not terribly well informed
 of the legal issues here. But it seems to me that bolting the DMDFE onto a
 different back--end can't be a problem because the agreement only covers
 the DMCBE, and the DMDFE is 100% Walter owned, he can do with it what he
 pleases and all Symantec can do is pout.

Walter refuses to look at the code for any other compiler. He has been well served in the past by being able to say that he has never looked at the code of another compiler when the lawyers come knocking. So, as I understand it, anything that would require him to even _look_ at the backend's code, let alone work on it, would make it so he won't do it. And I very much doubt that he'd want to work on a compiler where he can't work on the backend (plus, I would assume that you'd have to look at the backend to work on the glue code, so he'd be restricted entirely to the frontend-specific portions of the compiler). - Jonathan M Davis
Jul 06 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 06 Jul 2012 18:33:02 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/6/2012 4:50 PM, Adam Wilson wrote:
 My guess is that, unless something changes significantly, DMD will  
 remain a
 niche tool; useful as a reference/research compiler, but for actual  
 work people
 will use LDC or GDC.

A more diverse ecosystem that supports D is only for the better.

If this is what you want then I can be fine with it too. I just wanted to make my position clear. This also means that use cases are going to need to be clarified and a clear story crafted around the pro's and con's of each compiler to help us make a decision about which option is best for our needs. I was able to reach my conclusions, but only after months of immersion into the community. Needless to say, most people, open-source or commercial won't spend that kind of time... In short, more promotion of the options on dlang.org. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 06 2012
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 7 July 2012 at 04:39:25 UTC, Adam Wilson wrote:
 On Fri, 06 Jul 2012 18:33:02 -0700, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 On 7/6/2012 4:50 PM, Adam Wilson wrote:
 My guess is that, unless something changes significantly, DMD 
 will remain a
 niche tool; useful as a reference/research compiler, but for 
 actual work people
 will use LDC or GDC.

A more diverse ecosystem that supports D is only for the better.

If this is what you want then I can be fine with it too. I just wanted to make my position clear. This also means that use cases are going to need to be clarified and a clear story crafted around the pro's and con's of each compiler to help us make a decision about which option is best for our needs. I was able to reach my conclusions, but only after months of immersion into the community. Needless to say, most people, open-source or commercial won't spend that kind of time... In short, more promotion of the options on dlang.org.

How different is this from C, C++, Pascal, Modula, Ada, Java compilers? As long as all implement the language standard, it is all for the better. -- Paulo
Jul 06 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, July 07, 2012 20:59:23 Alex Rønne Petersen wrote:
 By the way, is it planned that DMD will be able to use Microsoft's
 linker when compiling with COFF? Or is it too early to say at this
 point? (It would simplify a lot of things; particularly, integration
 with MSVC projects. Further, Optlink's command line is really unfriendly
 and hard to integrate in most build systems.)

Walter announced a couple of weeks ago that he's going to work on adding COFF support to dmd on Windows so that it can be used with Microsoft's linker, and he's started on it (though I don't know if he's gotten very far on it yet - there's been one related commit that I'm aware of). - Jonathan M Davis
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 11:49:16 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/7/2012 3:46 AM, Jacob Carlborg wrote:
 Theoretically you should be able to just look at the documentation

HAHAHAHAHAHAHAHAHAHAHAAAAA!!!

Unfortunately, I have to agree with this sentiment. I was merely under an incorrect impression of the scope of the license under which Walter is operating. Based on my experience with D, it is utterly ridiculous to expect the documentation to be enough. However, I will maintain my position that being tied to the current backend will seriously constrain the capabilities of DMD when compared to the other options. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 11:48:44 -0700, Walter Bright
<newshound2 digitalmars.com> wrote:

 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

 Building dmd on my Windows box takes 26 seconds, optimized, using a
 single core.

Build speed of the compiler itself is an utterly trivial matter; my primary concern is speed for the end-user. Even the build speed/memory usage of my projects is not a problem, I can always throw more money at hardware. For example, I am considering making the next round of developer box updates to Intel Xeon E1650's with 32GB RAM. Gentlemen, from a business perspective, compiler and/or project build times are the least of your problems. How well the code performs and most importantly the accuracy of the code generation is of key concern. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Adam Wilson:

 Gentlemen, from a business perspective, compiler and/or project
 build times are the least of your problems.

If DMD compiles quickly, I am able to compile one or more times every day, so I'm able to test it frequently. Other people do the same. The result is a better compiler for the user. Bye, bearophile
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 16:15:11 -0700, Walter Bright
<newshound2 digitalmars.com> wrote:

 On 7/7/2012 4:08 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 11:48:44 -0700, Walter Bright
 <newshound2 digitalmars.com>
 wrote:

 On 7/7/2012 8:38 AM, Alex Rønne Petersen wrote:
 On a high-end 4-core x86, building LLVM and LDC can usually be
 done in less than an hour, even when building them in optimized mode.

 Building dmd on my Windows box takes 26 seconds, optimized, using a
 single core.

 Build speed of the compiler itself is an utterly trivial matter, my
 primary concern is speed for the end-user. Even the build speed/memory
 usage of my projects is not a problem, I can always throw more money at
 hardware. For example, I am considering making the next round of
 developer box updates to Intel Xeon E1650's with 32GB RAM.

 Gentlemen, from a business perspective, compiler and/or project build
 times are the least of your problems. How well the code performs and
 most importantly the accuracy of the code generation is of key concern.

 Throwing more hardware at a problem isn't going to get you a 120x
 increase in speed.

I won't argue that, but again, that's not a primary concern. :-)

 While you're right that the customer cares not how long it takes to
 build the compiler, the speed is important for the edit-compile-debug
 loop of developing the compiler. For me, it matters quite a bit.

I imagine that it does, and honestly, I am not terribly concerned if DMD stays with its current backend because once LLVM gets SEH, I'm gone. But I do wonder if DMD will become increasingly irrelevant as backends like GCC and LLVM advance. And I am particularly troubled by what seems like a duplication of effort in the face of more widely tested backends...

All that said, I understand the legal predicament. You can't do anything about it and I'm not trying to convince you to. I just want to see more promotion and support of the other options available. --
Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 16:34:53 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/7/2012 4:28 PM, Adam Wilson wrote:
 I imagine that it does, and honestly, I am not terribly concerned if  
 DMD stays
 with it's current backend because once LLVM gets SEH, im gone. But I do  
 wonder
 if DMD will become increasingly irrelevant as backends like GCC and LLVM
 advance. And I am particularly troubled by what seems like a  
 duplication of
 effort in the face of more widely tested backends...

Different implementations will have their different strengths and weaknesses, and also competition between them is good. I'm very pleased that we have 3 strong implementations.
 All that said, I understand the legal predicament. You can't do  
 anything about
 it and I'm not trying to convince you too. I just want to see more  
 promotion and
 support of the other options available.

I think we'd all be better off if more involved here would get more active in promotion, rather than waiting for me to do it. The whole "better mouse trap" thing is baloney. Promotion is necessary, even if you've got a great product. Even Apple has a huge promotion budget.

Agreed, but not many people have push rights to the website, which is where I would start. I am not trying to say that LLVM or DMD or GDC is better for all situations, but that we need clear guidance as to which tools are best suited to which situations. For example, I find it incredibly hard to get a clean build of LDC on Linux (not even counting Windows) and found GDC's build process much easier to get working. However, I personally don't feel it is terribly wise for my business to be tied to the Stallmanology of GCC, and GDC on Windows is nigh hopeless (no MinGW support). For the work *I* do LLVM is best, but situations vary wildly, which is why this is important. I am talking here because I don't have merge rights on the website and pull requests are usually left languishing for many moons... -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 16:38:27 -0700, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 07/08/2012 01:28 AM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:15:11 -0700, Walter Bright
 <newshound2 digitalmars.com> wrote:

 [...]

 While you're right that the customer cares not how long it takes to
 build the compiler, the speed is important for the edit-compile-debug
 loop of developing the compiler. For me, it matters quite a bit.

 I imagine that it does, and honestly, I am not terribly concerned if DMD
 stays with its current backend because once LLVM gets SEH, I'm gone. But
 I do wonder if DMD will become increasingly irrelevant as backends like
 GCC and LLVM advance. And I am particularly troubled by what seems like
 a duplication of effort in the face of more widely tested backends...

 The DMD backend is very fast in comparison to other backends.
 LLVM is unlikely to catch up in speed, because it is well architectured
 and more general.

Oh, I agree that it is, but as I've been saying, raw compiler speed is rarely an important factor outside of small circles of developers; if it was, businesses would have given up on C++ LONG ago. It's nice to have, but the business case for it is weak comparatively.

 All that said, I understand the legal predicament. You can't do anything
 about it and I'm not trying to convince you to. I just want to see more
 promotion and support of the other options available.

--
Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, July 07, 2012 16:52:25 Adam Wilson wrote:
 Agreed, but not many people have push rights to the website, which is
 where I would start.

The lack of commit rights to d-programming-language.org doesn't stop you from submitting pull requests. It just stops you from putting your edits directly on the site without anyone else looking at them first. Granted, pull requests for d-programming-language.org aren't always handled quickly, but how quickly your changes get merged doesn't really affect your ability to make the changes in the first place. - Jonathan M Davis
Jul 07 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, July 07, 2012 16:54:48 Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:38:27 -0700, Timon Gehr <timon.gehr gmx.ch> wrote:
 The DMD backend is very fast in comparison to other backends.
 
 LLVM is unlikely to catch up in speed, because it is well architectured
 and more general.

Oh, I agree that it is, but as I've been saying, raw compiler speed is rarely an important factor outside of small circles of developers, if it was, businesses would have given up on C++ LONG ago. It's nice to have, but the business case for it is weak comparatively.

Just because one set of developers has priorities other than compilation speed which they consider to be more important doesn't mean that a lot of developers don't think that compilation speed is important. I've worked on projects that took over 3 hours to build, but that doesn't mean that I wouldn't have wanted them to be faster. I've known programmers who complained about builds which were over a minute! If you rate something else higher than compilation speed, that's fine, but that doesn't mean that compilation speed doesn't matter, because it does. And if the various D compilers are consistent enough, it arguably becomes a good course of action to build your ultimate release using gdc or ldc but to do most of the direct development on dmd so that you get a fast compile-test-rewrite cycle. - Jonathan M Davis
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 17:22:27 -0700, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 07/08/2012 01:54 AM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:38:27 -0700, Timon Gehr <timon.gehr gmx.ch>
 wrote:

 [...]

 The DMD backend is very fast in comparison to other backends.
 LLVM is unlikely to catch up in speed, because it is well architectured
 and more general.

 Oh, I agree that it is, but as I've been saying, raw compiler speed is
 rarely an important factor outside of small circles of developers; if it
 was, businesses would have given up on C++ LONG ago. It's nice to have,
 but the business case for it is weak comparatively.

 'raw compiler speed is rarely the most important factor' does not
 necessarily imply 'raw compiler speed is rarely an important factor'.

Correct, just less important than other things. --
Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 17:04:35 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Saturday, July 07, 2012 16:54:48 Adam Wilson wrote:
 On Sat, 07 Jul 2012 16:38:27 -0700, Timon Gehr <timon.gehr gmx.ch>  
 wrote:
 The DMD backend is very fast in comparison to other backends.

 LLVM is unlikely to catch up in speed, because it is well architectured
 and more general.

Oh, I agree that it is, but as I've been saying, raw compiler speed is rarely an important factor outside of small circles of developers, if it was, businesses would have given up on C++ LONG ago. It's nice to have, but the business case for it is weak comparatively.

Just because one set of developers has priorities other than compilation speed which they consider to be more important doesn't mean that a lot of developers don't think that compilation speed is important. I've worked on projects that took over 3 hours to build but that doesn't mean that I wouldn't have wanted them to be faster. I've known programmers who complained about builds which were over a minute!

Sure they complain, but they would complain harder if the generated code was sub-optimal or had bugs in it. And I imagine that multiple-hour build times are more the exception than the rule even in C++; my understanding is that all 50mloc of Windows can compile overnight using distributed compiling. Essentially, my argument is that for business, compilation time is something that can be attacked with money, whereas code generation and perf bugs are not.
 If you rate something else higher than compilation speed, that's fine,  
 but that
 doesn't mean that compilation speed doesn't matter, because it does.

That's been my whole point, we need ways to tell other people about the pro's and con's of each tool, so that they can choose the right tool, knowing it's capabilities and limitations. Right now, it's all DMD.
 And if the various D Compilers are consistent enough, it arguably  
 becomes a
 good course of action to build your ultimate release using gdc or ldc  
 but to
 do most of the direct development on dmd so that you get a fast  
 compile-test-
 rewrite cycle.

 - Jonathan M Davis

Except when you are working on architectures that DMD doesn't support, such as Win64, our primary arch here at work. We also want to get into ARM on Windows. Win32 is fast becoming irrelevant for new work as almost all Win7 machines shipped these days are x64. Essentially, I cannot justify DMD under any circumstance for production work here. However, I realize that for most people Win32 only is just fine. But we need a page that explains all these differences. As I said in my first post on the subject, I only know the differences because I've been here for many months; most decision makers aren't going to dedicate even a trivial fraction of that time. They'll see that DMD doesn't support Win64 and move on without doing the investigation to find out that there is even an option for LLVM, because, as near as I can tell, it's not even mentioned. They may try GDC only to find that it's not well supported on Windows at all (I've heard mixed reports of people getting builds working and it seems extremely fragile). If D is going to use the multiple-tool approach, which I agree it should, then we should do our best to promote all the tools and, more importantly, ensure the community is actively engaged in supporting and improving all of them. I am willing to do the website work, but I have no idea when I'll get to it, as I have other D projects cooking, including GSoC; and, more importantly, I don't know if I even understand the differences well enough myself to make accurate statements. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 17:32:28 -0700, Alex Rønne Petersen <alex lycus.org>
wrote:

 On 08-07-2012 01:57, Jonathan M Davis wrote:
 On Saturday, July 07, 2012 16:52:25 Adam Wilson wrote:
 Agreed, but not many people have push rights to the website, which is
 where I would start.

 The lack of commit rights to d-programming-language.org doesn't stop you
 from submitting pull requests. It just stops you from putting your edits
 directly on the site without anyone else looking at them first. Granted,
 pull requests for d-programming-language.org aren't always handled
 quickly, but how quickly your changes get merged doesn't really affect
 your ability to make the changes in the first place.

 - Jonathan M Davis

 No, but it does affect how long it takes to make them show up on the
 page that we present to users of/newcomers to D.

 The slow pull request review/accept/reject time can be very demotivating
 at times.

Actually, this is the reason I haven't fixed my DI Generation pull yet. I have a lot of projects going right now (including GSoC) and knowing that fixing my pull won't get it merged even in the next few months is incredibly demotivating. --
Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 08, 2012 02:32:28 Alex Rønne Petersen wrote:
 On 08-07-2012 01:57, Jonathan M Davis wrote:
 On Saturday, July 07, 2012 16:52:25 Adam Wilson wrote:
 Agreed, but not many people have push rights to the website, which is
 where I would start.

 The lack of commit rights to d-programming-language.org doesn't stop you
 from submitting pull requests. It just stops you from putting your edits
 directly on the site without anyone else looking at them first. Granted,
 pull requests for d-programming-language.org aren't always handled
 quickly, but how quickly your changes get merged doesn't really affect
 your ability to make the changes in the first place.

 - Jonathan M Davis

 No, but it does affect how long it takes to make them show up on the
 page that we present to users of/newcomers to D.

 The slow pull request review/accept/reject time can be very demotivating
 at times.

True, but saying that you can't make changes just because you don't have the permissions to commit directly to the main repository is patently false, and that's what Adam's post implied. - Jonathan M Davis
Jul 07 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 08/07/12 01:52, Adam Wilson wrote:
 I personally don't feel it is terribly wise for my business to be tied to the
 Stallmanology of GCC

What's the problem here? The licence of the compiler places no restrictions on the licence of the code you build with it.
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 17:39:49 -0700, Jonathan M Davis <jmdavisProg gmx.com>
wrote:

 On Sunday, July 08, 2012 02:32:28 Alex Rønne Petersen wrote:
 On 08-07-2012 01:57, Jonathan M Davis wrote:
 On Saturday, July 07, 2012 16:52:25 Adam Wilson wrote:
 Agreed, but not many people have push rights to the website, which is
 where I would start.

 [...]

 No, but it does affect how long it takes to make them show up on the
 page that we present to users of/newcomers to D.

 The slow pull request review/accept/reject time can be very demotivating
 at times.

 True, but saying that you can't make changes just because you don't have
 the permissions to commit directly to the main repository is patently
 false, and that's what Adam's post implied.

 - Jonathan M Davis

I apologize, I wasn't trying to imply that. I was trying to imply that without such privileges it requires far more effort on my part to advocate and cajole for someone to merge said changes than I have time for right now. I am quite exhausted after spending over a month trying to get some trivial changes merged into DRuntime. I wasn't trying to ask for merge rights. The merge process around here is glacial and that could easily kill D. --
Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 17:44:09 -0700, Joseph Rushton Wakeling  
<joseph.wakeling webdrake.net> wrote:

 On 08/07/12 01:52, Adam Wilson wrote:
 I personally don't feel it is terribly wise for  my business to by tied  
 to the
 Stallmanology of GCC

What's the problem here? The licence of the compiler places no restrictions on the licence of the code you build with it.

Not everyone agrees philosophically with Stallman, and I highly suggest that we be very careful not to cram ideology down potential users' throats; it's a good way to scare them off. Personally, I would use either, but getting GDC to work on Windows is a nightmare. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that? -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, July 07, 2012 20:26:56 Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu
 
 <SeeWebsiteForEmail erdani.org> wrote:
 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

Well, considering that the general trend over the last ten years has been to move to languages which focus on programmer productivity (including compilation speed) over those which focus on speed of execution, there's a definite argument that programmers generally prefer stuff that makes programming easier and faster over stuff that makes the program faster. There are obviously exceptions, and there are some signs of things shifting (due to mobile and whatnot), but that's the way that things have been trending for over a decade. - Jonathan M Davis
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 21:05:12 -0700, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 7/7/12 11:26 PM, Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated  
 code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight  
 using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

Compilation is a huge bottleneck for any major C++ code base, and adding hardware (distributing compilation etc) is survival, but definitely doesn't scale to make the problem negligible. In contrast, programmers have considerable control about generating fast code. Andrei

So work around backend bugs and slowness? I could see that, but the most widely used C++ compilers are based on GCC and LLVM and those have very few backend problems to begin with. I still see pretty heinous backend problems crop up in the bug reports for DMD. As to compile speed, is LDC really *THAT* much slower than DMD so as to cause C++ style speed issues? I thought one of the whole points of D is that it doesn't need the epic numbers of passes and preprocessor that C++ does precisely because that's what slows down C++ so much... -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 21:13:35 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Saturday, July 07, 2012 20:26:56 Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu

 <SeeWebsiteForEmail erdani.org> wrote:
 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

Well, considering that the general trend over the last ten years has been to move to languages which focus on programmer productivity (including compilation speed) over those which focus on speed of execution, there's a definite argument that programmers generally prefer stuff that makes programming easier and faster over stuff that makes the program faster. There are obviously exceptions, and there are some signs of things shifting (due to mobile and whatnot), but that's the way that things have been trending for over a decade.

- Jonathan M Davis

I won't argue with this at all, I use C# after all. But there we shuffle the "backend" off to the JIT, so compilation is really more a translation to IR. IIRC this is how most of the popular productivity languages did it (Java, .NET, etc.).

It'd be an interesting research project to modify LDC to output IR only and run the IR later on an LLVM-based VM and then see what kind of compile times you get...

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
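[The experiment described above can be sketched with LDC and the stock LLVM command-line tools. This is a rough sketch, not LDC's documented workflow: the exact flag spellings (`--output-ll`, etc.) vary across LDC and LLVM versions, and `hello.d` is a placeholder source file.]

```shell
# Sketch of the "emit IR now, run it later" pipeline: compile D to
# textual LLVM IR, optimize it offline, then JIT-execute the result.
ldc2 --output-ll hello.d      # emit textual LLVM IR (hello.ll) instead of an object file
opt -O2 hello.ll -o hello.bc  # run the LLVM optimizer pipeline, producing bitcode
lli hello.bc                  # JIT-execute the bitcode with LLVM's interpreter/JIT
```

[Because LLVM IR has a first-class textual form, each stage here is a separate process connected only by files, which is exactly the property the Dr. Dobb's article highlights.]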
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 21:58:04 -0700, Alex Rønne Petersen <alex lycus.org>  
wrote:

 On 08-07-2012 06:44, Adam Wilson wrote:
 On Sat, 07 Jul 2012 21:13:35 -0700, Jonathan M Davis
 <jmdavisProg gmx.com> wrote:

 On Saturday, July 07, 2012 20:26:56 Adam Wilson wrote:
 On Sat, 07 Jul 2012 19:33:22 -0700, Andrei Alexandrescu

 <SeeWebsiteForEmail erdani.org> wrote:
 On 7/7/12 8:29 PM, Adam Wilson wrote:
 Sure they complain, but they would complain harder if the generated code
 was sub-optimal or had bugs in it. And I imagine that multiple hour
 build times are more the exception than rule even in C++, my
 understanding is that all 50mloc of Windows can compile overnight using
 distributed compiling. Essentially, my argument is that for business
 compilation time is something that can be attacked with money, where
 code generation and perf bugs are not.

I'm sorry, but I think you got that precisely backwards. Andrei

Why is that?

 Well, considering that the general trend over the last ten years has
 been to move to languages which focus on programmer productivity
 (including compilation speed) over those which focus on speed of
 execution, there's a definite argument that programmers generally
 prefer stuff that makes programming easier and faster over stuff that
 makes the program faster. There are obviously exceptions, and there are
 some signs of things shifting (due to mobile and whatnot), but that's
 the way that things have been trending for over a decade.

 - Jonathan M Davis

 I won't argue with this at all, I use C# after all. But there we shuffle
 the "backend" off to the JIT, so compilation is really more a
 translation to IR. IIRC this is how most of the popular productivity
 languages did it (Java, .NET, etc.).

 It'd be an interesting research project to modify LDC to output IR only
 and run the IR later on an LLVM-based VM and then see what kind of
 compile times you get...

 It would be kind of useless in practice, unfortunately. LLVM IR is very
 unsuited for VM use:

 http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/043719.html

Ahh, well maybe we'll have to use MCI, since I hear that the D.NET project is quite dead.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 07 Jul 2012 23:47:45 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/7/2012 9:16 PM, Adam Wilson wrote:
 I still see pretty heinous backend problems crop up in
 the bug reports for DMD.

Come on, it's pretty stable. Do you watch the bug reports for gcc?

I remember a guy recently ran some exhaustive code gen tests over C compilers, and dmc (the same back end as dmd) was the only one that did them correctly.

http://news.ycombinator.com/item?id=4131508

I stand corrected. :) It is true that I don't watch the GCC/LLVM buglists.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 07 2012
prev sibling next sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Sun, 08 Jul 2012 13:26:53 +0200, Jacob Carlborg <doob me.com> wrote:

 On 2012-07-07 20:49, Walter Bright wrote:
 On 7/7/2012 3:46 AM, Jacob Carlborg wrote:
 Theoretically you should be able to just look at the documentation

HAHAHAHAHAHAHAHAHAHAHAAAAA!!!

Yeah, I know how you feel about documentation.

You mean there are actually people out there who believe documentation can be correct, not to mention understandable, comprehensive and giving the information you need?
Jul 09 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, July 09, 2012 22:43:19 Simen Kjaeraas wrote:
 On Sun, 08 Jul 2012 13:26:53 +0200, Jacob Carlborg <doob me.com> wrote:
 On 2012-07-07 20:49, Walter Bright wrote:
 On 7/7/2012 3:46 AM, Jacob Carlborg wrote:
 Theoretically you should be able to just look at the documentation

HAHAHAHAHAHAHAHAHAHAHAAAAA!!!

Yeah, I know how you feel about documentation.

You mean there are actually people out there who believe documentation can be correct, not to mention understandable, comprehensive and giving the information you need?

Of course, it _can_ be, but assuming that it _is_ is another thing entirely. Good, accurate, up-to-date documentation does exist. It just isn't the norm.

- Jonathan M Davis
Jul 09 2012
prev sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Tue, 10 Jul 2012 08:33:17 +0200, Jacob Carlborg <doob me.com> wrote:

 On 2012-07-09 22:43, Simen Kjaeraas wrote:

 You mean there are actually people out there who believe documentation
 can be correct, not to mention understandable, comprehensive and giving
 the information you need?

You do know there are closed source libraries where you don't have an option.

I know. I also know I have spent days on forums trying to find answers that were not covered by that documentation. But this is turning stupid. We both know documentation is not always 100%, and that it sometimes is good enough.

-- 
Simen
Jul 10 2012