
digitalmars.D - dmd codegen improvements

reply Walter Bright <newshound2 digitalmars.com> writes:
Martin ran some benchmarks recently that showed that ddmd compiled with dmd was 
about 30% slower than when compiled with gdc/ldc. This seems to be fairly
typical.

I'm interested in ways to reduce that gap.

There are 3 broad kinds of optimizations that compilers do:

1. source translations like rewriting x*2 into x<<1, and function inlining

2. instruction selection patterns, like whether one should generate:

     SETC AL
     MOVZ EAX,AL

or:
     SBB EAX,EAX
     NEG EAX

3. data flow analysis optimizations like constant propagation, dead code 
elimination, register allocation, loop invariants, etc.

Modern compilers (including dmd) do all three.

So if you're comparing code generated by dmd/gdc/ldc, and notice something that 
dmd could do better at (1, 2 or 3), please let me know. Often this sort of 
thing is low hanging fruit that is fairly easily inserted into the back end.

For example, recently I improved the usage of the SETcc instructions.

https://github.com/D-Programming-Language/dmd/pull/4901
https://github.com/D-Programming-Language/dmd/pull/4904

A while back I improved usage of BT instructions, improved the way switch 
statements were implemented, and implemented integer division by a constant 
as multiplication by its reciprocal.
Aug 18 2015
next sibling parent reply "Etienne Cimon" <etcimon gmail.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.
I think someone mentioned how other compilers unroll loops at more than 2 levels. Other than that, there was a recent Java vs D thread which showed Java orders of magnitude faster on vtable calls. So I think the most valuable feature would be to allow profiling & sampling: compile with samples and select which functions to inline, or do some magic around vtable pointers like what Java is doing.

Finally, I'm going to write this down here, though I haven't had time to look more into it: I've never been able to compile Botan with optimizations on DMD64 Win64 VS2013 (https://github.com/etcimon/botan). It's really strange having a crypto library that you can't optimize. Building with -O -g also gives me a ccog.c ICE error. I think it might be something about `asm pure` blocks that use some locals; does that eliminate the function call parameters?
Aug 18 2015
next sibling parent "Etienne Cimon" <etcimon gmail.com> writes:
On Tuesday, 18 August 2015 at 12:32:17 UTC, Etienne Cimon wrote:
 a crypto library that you can't optimize, building -O -g also 
 gives me a ccog.c ICE error. I think it might be something 
 about `asm pure` that uses some locals, does that eliminate the 
 function call parameters?
Sorry, that was cgcod.c:

Internal error: backend\cgcod.c 2311
FAIL .dub\build\__test__full__-unittest-windows-x86_64-dmd_2068-8073079C502FEB927744150233D4046\__test__full__ executable

I'll try and file a bugzilla about this. I think stability should be the first concern.
Aug 18 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 5:32 AM, Etienne Cimon wrote:
 Other than that, there was a recent Java vs D thread which showed it orders of
 magnitude faster on vtable calls. So I think the most amazing feature would be
 to allow profiling & sampling to compile with samples and select which
functions
 to inline or do some magic around vtable pointers like what Java is doing.
There is some potential there, but since a static compiler doesn't do runtime profiling, some sort of hinting scheme would have to be invented.
 Finally, I'm going to write this down here and haven't had time to look more
 into it but I've never been able to compile Botan with optimizations on DMD64
 Win64 VS2013 (https://github.com/etcimon/botan), it's really strange having a
 crypto library that you can't optimize, building -O -g also gives me a ccog.c
 ICE error. I think it might be something about `asm pure` that uses some
locals,
 does that eliminate the function call parameters?
Please file a bug report for that!
Aug 18 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-08-18 20:59, Walter Bright wrote:

 There is some potential there, but since a static compiler doesn't do
 runtime profiling, some sort of hinting scheme would have to be invented.
There's profile guided optimization, which LLVM supports. -- /Jacob Carlborg
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 1:33 PM, Jacob Carlborg wrote:
 There's profile guided optimization, which LLVM supports.
dmd does have that to some extent. If you run with -profile, the profiler will emit a trace.def file. This is a script which can be fed to the linker, which controls the layout of functions in the executable. The layout is organized so that strongly connected functions reside in the same page, minimizing swapping and maximizing cache hits.

Unfortunately, nobody makes use of it, which makes me reluctant to expend further effort on PGO.

http://www.digitalmars.com/ctg/trace.html

I wonder how many people actually use the llvm profile guided optimizations. I suspect very, very few.
Aug 18 2015
next sibling parent reply "welkam" <wwwelkam gmail.com> writes:
On Tuesday, 18 August 2015 at 21:43:44 UTC, Walter Bright wrote:
 On 8/18/2015 1:33 PM, Jacob Carlborg wrote:
 There's profile guided optimization, which LLVM supports.
dmd does have that to some extent. If you run with -profile, the profiler will emit a trace.def file. This is a script which can be fed to the linker, which controls the layout of functions in the executable. The layout is organized so that strongly connected functions reside in the same page, minimizing swapping and maximizing cache hits.

Unfortunately, nobody makes use of it, which makes me reluctant to expend further effort on PGO.

http://www.digitalmars.com/ctg/trace.html

I wonder how many people actually use the llvm profile guided optimizations. I suspect very, very few.
People are lazy, and if it takes more than one click people won't use it. Just like unit testing: everyone agrees that it's good to write tests, but nobody does. When you put unit testing in the compiler, more people write tests. PGO is awesome, but it needs to be made much simpler before people use it every day.
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 3:17 PM, welkam wrote:
 People are lazy, and if it takes more than one click people won't use
 it. Just like unit testing: everyone agrees that it's good to write
 tests, but nobody does. When you put unit testing in the compiler, more
 people write tests. PGO is awesome, but it needs to be made much simpler
 before people use it every day.
Exactly. That's why people just want to type "-O" and it optimizes.
Aug 18 2015
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-08-19 00:55, Walter Bright wrote:

 Exactly. That's why people just want to type "-O" and it optimizes.
So why not just a "-pgo" flag that does what you described above? -- /Jacob Carlborg
Aug 19 2015
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 15:53, Jacob Carlborg wrote:
 On 2015-08-19 00:55, Walter Bright wrote:

 Exactly. That's why people just want to type "-O" and it optimizes.
 So why not just a "-pgo" flag that does what you described above?
+1 for -pgo to use trace.log in the same folder; that way, running -profile followed by -pgo will just work (tm). -- Dmitry Olshansky
Aug 19 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-08-19 15:00, Dmitry Olshansky wrote:

 +1 for -pgo to use trace.log in the same folder; that way, running
 -profile followed by -pgo will just work (tm).
I was thinking of something where the compiler would handle everything automatically in one command when the -pgo flag is present. If necessary, one could pass arguments after the -pgo flag which would be used when running the application. -- /Jacob Carlborg
Aug 19 2015
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, 18 August 2015 at 22:55:06 UTC, Walter Bright wrote:
 On 8/18/2015 3:17 PM, welkam wrote:
 People are lazy, and if it takes more than one click people won't use
 it. Just like unit testing: everyone agrees that it's good to write
 tests, but nobody does. When you put unit testing in the compiler, more
 people write tests. PGO is awesome, but it needs to be made much simpler
 before people use it every day.
Exactly. That's why people just want to type "-O" and it optimizes.
At least without separate compilation, it probably wouldn't be that hard to add a compiler flag that made it so that the unit tests were run after the code was built, and then made the compiler rebuild the program with the profiling results. But that would base the optimizations off of the unit tests rather than the actual program, which probably wouldn't be a good idea in general.

Another possibility would be to build something into dub. If it handled it for you automatically, then that would make it comparable to just slapping on the -O flag.

- Jonathan M Davis
Aug 19 2015
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 00:43, Walter Bright wrote:
 On 8/18/2015 1:33 PM, Jacob Carlborg wrote:
 There's profile guided optimization, which LLVM supports.
dmd does have that to some extent. If you run with -profile, the profiler will emit a trace.def file. This is a script which can be fed to the linker, which controls the layout of functions in the executable. The layout is organized so that strongly connected functions reside in the same page, minimizing swapping and maximizing cache hits.

Unfortunately, nobody makes use of it, which makes me reluctant to expend further effort on PGO.

http://www.digitalmars.com/ctg/trace.html

I wonder how many people actually use the llvm profile guided optimizations. I suspect very, very few.
I guess this needs a prominent article to show some bang for the buck. -- Dmitry Olshansky
Aug 19 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-08-18 23:43, Walter Bright wrote:

 I wonder how many people actually use the llvm profile guided
 optimizations. I suspect very, very few.
In Xcode there's a checkbox for PGO in the build configuration. Should be just as easy to enable as any other build setting. -- /Jacob Carlborg
Aug 19 2015
prev sibling next sibling parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.
Hi,

From my experience reducing regressions, I have noticed that backend changes in general have a very high chance of introducing code generation regressions. Codegen bugs are nasty: they are occasionally difficult to reduce, and since software is rarely tested with its "release" build, they have a habit of sneaking into published releases of otherwise bug-free software. IIRC, I have had three releases affected by optimization/inlining DMD bugs (two of Digger and one of RABCDAsm). These do not speak well for D when end-users ask me what the cause of the bug is, and I have to say "Yeah, it's a bug in the official D compiler".

I think stability of the DMD backend is a goal of much higher value than the performance of the code it emits. DMD is never going to match the code generation quality of LLVM and GCC, which have had many, many man-years invested in them. Working on DMD optimizations is essentially duplicating this work, and IMHO it's not only a waste of time, but harmful to D because of the risk of regressions.

I suggest that we revamp the compiler download page again. The lead should be a "select your compiler" section which lists the advantages and disadvantages of each of DMD, LDC and GDC.
Aug 18 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
wrote:
 I think stability of the DMD backend is a goal of much higher 
 value than the performance of the code it emits. DMD is never 
 going to match the code generation quality of LLVM and GCC, 
 which have had many, many man-years invested in them. Working 
 on DMD optimizations is essentially duplicating this work, and 
 IMHO I think it's not only a waste of time, but harmful to D 
 because of the risk of regressions.
+1
Aug 18 2015
parent "ChangLong" <changlon gmail.com> writes:
On Tuesday, 18 August 2015 at 12:58:45 UTC, Dicebot wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
 wrote:
 I think stability of the DMD backend is a goal of much higher 
 value than the performance of the code it emits. DMD is never 
 going to match the code generation quality of LLVM and GCC, 
 which have had many, many man-years invested in them. Working 
 on DMD optimizations is essentially duplicating this work, and 
 IMHO I think it's not only a waste of time, but harmful to D 
 because of the risk of regressions.
+1
+1
Aug 19 2015
prev sibling next sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
wrote:
 I think stability of the DMD backend is a goal of much higher 
 value than the performance of the code it emits. DMD is never 
 going to match the code generation quality of LLVM and GCC, 
 which have had many, many man-years invested in them. Working 
 on DMD optimizations is essentially duplicating this work, and 
 IMHO I think it's not only a waste of time, but harmful to D 
 because of the risk of regressions.
Well, you have to admit that it's pretty impressive that dmd's backend gets within 30% of those monumental backends despite having pretty much only Walter working on it sporadically. If it's a waste of time to work on compiler optimizations because of existing work, you could have said the same to the llvm devs when they tried to take on gcc.

As ponce said, people are always going to use dmd because of its speed; no reason not to make its codegen better also. Also, soon the dmd compiler backend will be the only one written in D. :) No reason not to make it better, too.

Of course, Walter is the only one who can decide the best use of his time.
Aug 18 2015
next sibling parent reply "anonymous" <anonymous example.com> writes:
On Tuesday, 18 August 2015 at 15:22:15 UTC, Joakim wrote:
 Also, soon the dmd compiler backend will be the only one 
 written in D. :)
Soon the front end will be written in D. And the front end is shared among dmd, gdc, ldc. Walter has expressed a desire to port the back end to D, too [1]. But that's not going to happen "soon".

[1] http://forum.dlang.org/post/mohrrs$1pu7$1 digitalmars.com
Aug 18 2015
parent reply "Joakim" <dlang joakim.fea.st> writes:
On Tuesday, 18 August 2015 at 15:45:25 UTC, anonymous wrote:
 On Tuesday, 18 August 2015 at 15:22:15 UTC, Joakim wrote:
 Also, soon the dmd compiler backend will be the only one 
 written in D. :)
Soon the front end will be written in D. And the front end is shared among dmd, gdc, ldc. Walter has expressed a desire to port the back end to D, too [1]. But that's not going to happen "soon". [1] http://forum.dlang.org/post/mohrrs$1pu7$1 digitalmars.com
Yes, that's why I said the dmd _backend_ will be the only one written in D. Not sure how you know what the timeline for such a backend port is; it seems like he really wants to get everything into D soon.
Aug 18 2015
parent "Temtaime" <temtaime gmail.com> writes:
Soon? Are you sure?
Some people said that 2.068 frontend will be in D.
And it's not.

I think it's a really bad idea to optimize dmd's backend.
It will add new regressions and it will make it more complex and 
slower.

I think it's better to fix bugs and not to optimize that backend.
Aug 18 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-08-18 17:22, Joakim wrote:

 Well, you have to admit that it's pretty impressive that dmd's backend
 gets within 30% of those monumental backends despite having pretty much
 only Walter working on it sporadically.
DMD has only a very limited set of targets compared to LLVM and GCC. So they need more manpower to maintain and enhance the backends. -- /Jacob Carlborg
Aug 18 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 17:31:50 UTC, Jacob Carlborg wrote:
 On 2015-08-18 17:22, Joakim wrote:

 Well, you have to admit that it's pretty impressive that dmd's 
 backend
 gets within 30% of those monumental backends despite having 
 pretty much
 only Walter working on it sporadically.
DMD has only a very limited set of targets compared to LLVM and GCC. So they need more manpower to maintain and enhance the backends.
Targets are the tip of the iceberg. GCC and LLVM do most of their magic in the middle end, which is common across front ends and targets. And honestly, there is no way DMD can catch up.
Aug 18 2015
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 18, 2015 at 07:38:43PM +0000, deadalnix via Digitalmars-d wrote:
 On Tuesday, 18 August 2015 at 17:31:50 UTC, Jacob Carlborg wrote:
On 2015-08-18 17:22, Joakim wrote:

Well, you have to admit that it's pretty impressive that dmd's
backend gets within 30% of those monumental backends despite having
pretty much only Walter working on it sporadically.
DMD has only a very limited set of targets compared to LLVM and GCC. So they need more manpower to maintain and enhance the backends.
 Targets are the tip of the iceberg. GCC and LLVM do most of their magic
 in the middle end, which is common across front ends and targets. And
 honestly, there is no way DMD can catch up.
DMD's optimizer is far behind GDC/LDC. Every one of my own programs that I ran a profiler on shows a 30-50% decrease in performance when compiled (with all optimization flags on) with DMD, as opposed to GDC. For CPU-intensive programs, DMD's optimizer has a long way to go.

T

-- Why is it that all of the instruments seeking intelligent life in the universe are pointed away from Earth? -- Michael Beibl
Aug 18 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 12:38 PM, deadalnix wrote:
 And honestly, there is no way DMD can catch up.
I find your lack of faith disturbing. https://www.youtube.com/watch?v=Zzs-OvfG8tE&feature=player_detailpage#t=91
Aug 18 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 20:28:48 UTC, Walter Bright wrote:
 On 8/18/2015 12:38 PM, deadalnix wrote:
 And honestly, there is no way DMD can catch up.
I find your lack of faith disturbing. https://www.youtube.com/watch?v=Zzs-OvfG8tE&feature=player_detailpage#t=91
Let's say I have some patches in LLVM and a pretty good understanding of how it works. There are some big optimizations that DMD could benefit from, but a lot of it is getting heuristics just right and recognizing a sludge of patterns.

For instance, this: http://llvm.org/docs/doxygen/html/DAGCombiner_8cpp_source.html is what you get to recognize patterns created by legalization. For more general patterns: https://github.com/llvm-mirror/llvm/tree/master/lib/Transforms/InstCombine

And that is just the general-case pass. You then have a sludge of passes that do canonicalization (GVN for instance) in order to reduce the number of patterns other passes have to match, others looking for specialized things (SROA, LoadCombine, ...), and finally a ton of them looking for higher level things to change (SimplifyCFG, Inliner, ...).

All of them require a sheer amount of pure brute force, recognizing more and more patterns, while others require finely tuned heuristics. Realistically, D does not have the manpower required to reach the same level of optimization, and has many higher impact tasks to spend that manpower on.
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 1:47 PM, deadalnix wrote:
 Realistically, D does not have the manpower required to reach the same
 level of optimization, and has many higher impact tasks to spend that
 manpower on.
dmd also does a sludge of patterns. I'm just looking for a few that would significantly impact the result.
Aug 18 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 21:25:35 UTC, Walter Bright wrote:
 On 8/18/2015 1:47 PM, deadalnix wrote:
 Realistically, D does not have the man power required to reach 
 the same level of
 optimization, and have many higher impact task to spend that 
 manpower on.
dmd also does a sludge of patterns. I'm just looking for a few that would significantly impact the result.
There is none. There is a ton of 0.5% ones that add up to the 30% difference.

If I were to bet on what would impact DMD perfs the most, I'd go for SRAO, and an inliner in the middle end that works bottom up:
 - Explore the call graph top-down, optimizing functions along the way.
 - Backtrack bottom-up and check for inlining opportunities.
 - Rerun optimizations on the functions inlining was done in.

It requires a fair amount of tweaking and probably needs a way for the backends to provide a cost heuristic for various functions, but that would leverage the patterns already existing in the backend.
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:33 PM, deadalnix wrote:
 There is none. There is a ton of 0.5% ones that add up to the 30% difference.
I regard a simple pattern that nets 0.5% as quite a worthwhile win. That's only 60 of those to make up the difference. If you've got any that you know of that would net 0.5% for dmd, lay it on me!
 If I were to bet on what would impact DMD perfs the most, I'd go for
 SRAO, and an inliner in the middle end that works bottom up:
   - Explore the call graph top-down, optimizing functions along the way.
   - Backtrack bottom-up and check for inlining opportunities.
   - Rerun optimizations on the functions inlining was done in.
That's how the inliner already works. The data flow analysis optimizer also runs repeatedly as each optimization exposes more possibilities. The register allocator also runs repeatedly. (I am unfamiliar with the term SRAO. Google comes up with nothing for it.)
 It requires a fair amount of tweaking and probably needs a way for the
 backends to provide a cost heuristic for various functions,
A cost heuristic is already used for the inliner (it's in the front end, in 'inline.d'). A cost heuristic is also used for the register allocator.
Aug 18 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 21:55:26 UTC, Walter Bright wrote:
 On 8/18/2015 2:33 PM, deadalnix wrote:
 There is none. There is a ton of 0.5% one that adds up to the 
 30% difference.
I regard a simple pattern that nets 0.5% as quite a worthwhile win. That's only 60 of those to make up the difference. If you've got any that you know of that would net 0.5% for dmd, lay it on me!
 If I were to bet on what would impact DMD perfs the most, I'd go for
 SRAO, and an inliner in the middle end that works bottom up:
   - Explore the call graph top-down, optimizing functions along the way.
   - Backtrack bottom-up and check for inlining opportunities.
   - Rerun optimizations on the functions inlining was done in.
That's how the inliner already works. The data flow analysis optimizer also runs repeatedly as each optimization exposes more possibilities. The register allocator also runs repeatedly.
My understanding is that the inliner is in the front end. This definitely does not work the way I describe it here.
 (I am unfamiliar with the term SRAO. Google comes up with 
 nothing for it.)
That is because I made a typo, sorry. It stands for 'scalar replacement of aggregates', aka SROA (not SRAO). You can find literature on the subject.
 It requires a fair amount of tweaking and probably needs a way for the
 backends to provide a cost heuristic for various functions,
A cost heuristic is already used for the inliner (it's in the front end, in 'inline.d'). A cost heuristic is also used for the register allocator.
I'm not sure how this can be made to work the way I describe it in the frontend.
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 3:04 PM, deadalnix wrote:
 My understanding is that the inliner is in the front end. This
 definitely does not work the way I describe it here.
But it uses a cost function and runs repeatedly until there is no more inlining to be done.
 It stands for 'scalar replacement of aggregates', aka SROA (not SRAO).
 You can find literature on the subject.
I'm aware of the technique, though I didn't know the name for it (I always called it "horizontal slicing of aggregates"). It is one optimization that dmd could significantly benefit from, and wouldn't be too hard to implement. The ubiquitous use of ranges makes it a much more important optimization. I suspect it would net much more than 0.5%.
Aug 18 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 22:14:52 UTC, Walter Bright wrote:
 On 8/18/2015 3:04 PM, deadalnix wrote:
 My understanding is that the inliner is in the front end. This
 definitely does not work the way I describe it here.
But it uses a cost function and runs repeatedly until there is no more inlining to be done.
You need to have the optimization done on the way down or the cost function only tells you about the unoptimized cost, which doesn't really matter, especially after inlining is done as the code can become fairly redundant.
 It stands for 'scalar replacement of aggregates', aka SROA (not SRAO).
 You can find literature on the subject.
I'm aware of the technique, though I didn't know the name for it (I always called it "horizontal slicing of aggregates"). It is one optimization that dmd could significantly benefit from, and wouldn't be too hard to implement. The ubiquitous use of ranges makes it a much more important optimization. I suspect it would net much more than 0.5%.
Yes. It would make many thing apparent to other part of the optimizer.
Aug 18 2015
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 01:14, Walter Bright wrote:
 On 8/18/2015 3:04 PM, deadalnix wrote:
 My understanding is that the inliner is in the front end. This
 definitely does not work the way I describe it here.
But it uses a cost function and runs repeatedly until there is no more inlining to be done.
When looking at the AST there is no way to correctly estimate the cost function; the code generated may be huge with user-defined types/operators. -- Dmitry Olshansky
Aug 19 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 1:11 AM, Dmitry Olshansky wrote:
 When looking at the AST there is no way to correctly estimate the cost
 function; the code generated may be huge with user-defined types/operators.
Sure the cost function is fuzzy, but it tends to work well enough.
Aug 19 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 19 August 2015 at 08:29:05 UTC, Walter Bright wrote:
 On 8/19/2015 1:11 AM, Dmitry Olshansky wrote:
 When looking at the AST there is no way to correctly estimate the cost
 function; the code generated may be huge with user-defined types/operators.
Sure the cost function is fuzzy, but it tends to work well enough.
No, looking at what DMD generates, it is obviously not good at inlining.

Here is the issue: when you have A calling B calling C, once you have inlined C into B and run optimization, you often find that there are dramatic simplifications you can do (this tends to be especially true with templates), and that may make B eligible for inlining into A, because it became simpler instead of more complex.

Optimize top-down, inline bottom-up, and reoptimize as you inline. That's proven tech.
Aug 19 2015
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 18, 2015 at 02:25:34PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/18/2015 1:47 PM, deadalnix wrote:
Realistically, D does not have the man power required to reach the
same level of optimization, and have many higher impact task to spend
that manpower on.
dmd also does a sludge of patterns. I'm just looking for a few that would significantly impact the result.
From the little that I've seen of dmd's output, it seems that it's rather weak in the areas of inlining and loop unrolling / refactoring. Inner loops especially would benefit from much more aggressive inlining, which dmd seems to be unable to do once the loop body gets moderately complex. Unrolling in dmd seems to be very minimal or absent. In both cases, it seems that the optimizer gives up too quickly: an if-else function body will get inlined, but an if without an else doesn't, etc. This means the slightest bump in the road will cause the inliner to throw up its hands and not inline, which in turn causes missed opportunities for further refactoring / simplification in the calling function. Especially in range-based code, this can make a big difference.

There's also the more general optimizations, like eliminating redundant loads, eliding useless allocation of stack space in functions that return constant values, etc. While DMD does do some of this, it's not as thorough as GDC. While it may sound like only a small difference, if they happen to run inside an inner loop, they can add up to quite a significant difference.

DMD needs to be much more aggressive in eliminating useless / redundant code blocks; a lot of this comes not from people writing unusually redundant code, but from template expansions and inlined range-based code, which sometimes produce a lot of redundant operations if translated directly. Aggressively reducing these generated code blocks will often open up further optimization opportunities.

T

-- The computer is only a tool. Unfortunately, so is the user. -- Armaphine, K5
Aug 18 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:57 PM, H. S. Teoh via Digitalmars-d wrote:
 From the little that I've seen of dmd's output, it seems that it's
 rather weak in the areas of inlining and loop unrolling / refactoring.
DMD does not do loop unrolling. I've thought about it many times, but just never did it.
 In both cases, it seems that the optimizer gives up too quickly -- an
 if-else function body will get inlined, but an if without an else
 doesn't, etc..
It should do this. An example would be nice.
 There's also the more general optimizations, like eliminating redundant
 loads, eliding useless allocation of stack space in functions that
 return constant values, etc.. While DMD does do some of this, it's not
 as thorough as GDC. While it may sound like only a small difference, if
 they happen to run inside an inner loop, they can add up to quite a
 significant difference.
dmd has a full data flow analysis pass, which includes dead code elimination and dead store elimination. It goes as far as possible with the intermediate code. Any dead stores still generated are an artifact of the detailed code generation, which I agree is a problem.
 DMD needs to be much more aggressive in eliminating useless / redundant
 code blocks; a lot of this comes not from people writing unusually
 redundant code, but from template expansions and inlined range-based
 code, which sometimes produce a lot of redundant operations if
 translated directly.  Aggressively reducing these generated code blocks
 will often open up further optimization opportunities.
I'm not aware of any case of DMD generating dead code blocks. I'd like to see it if you have one.
Aug 18 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 18, 2015 at 03:25:38PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/18/2015 2:57 PM, H. S. Teoh via Digitalmars-d wrote:
From the little that I've seen of dmd's output, it seems that it's
rather weak in the areas of inlining and loop unrolling /
refactoring.
DMD does not do loop unrolling. I've thought about it many times, but just never did it.
What's the reason for it?
In both cases, it seems that the optimizer gives up too quickly -- an
if-else function body will get inlined, but an if without an else
doesn't, etc..
It should do this. An example would be nice.
Sorry, I wrote this from memory, so I don't have an example handy. But IIRC it was either a lambda or a function with a single-line body, where if the function has the form:

    auto f() { if (cond) return a; else return b; }

it would be inlined, but if it was written:

    auto f() { if (cond) return a; return b; }

it would remain as a function call. (I didn't test this, btw; like I said, I'm writing this from memory.)
There's also the more general optimizations, like eliminating
redundant loads, eliding useless allocation of stack space in
functions that return constant values, etc.. While DMD does do some
of this, it's not as thorough as GDC. While it may sound like only a
small difference, if they happen to run inside an inner loop, they
can add up to quite a significant difference.
dmd has a full data flow analysis pass, which includes dead code elimination and dead store elimination. It goes as far as possible with the intermediate code. Any dead stores still generated are an artifact of the detailed code generation, which I agree is a problem.
DMD needs to be much more aggressive in eliminating useless /
redundant code blocks; a lot of this comes not from people writing
unusually redundant code, but from template expansions and inlined
range-based code, which sometimes produce a lot of redundant
operations if translated directly.  Aggressively reducing these
generated code blocks will often open up further optimization
opportunities.
I'm not aware of any case of DMD generating dead code blocks. I'd like to see it if you have one.
Sorry, I didn't write it clearly. I meant dead or redundant loads/stores, caused either by detailed codegen or by template expansion or inlining(?). Overall, the assembly produced by GDC tends to be "cleaner" or "leaner", whereas the assembly produced by DMD tends to be more "frilly" (doing the same thing in more instructions than GDC needs). Maybe when I get some free time this week, I could look at the disassembly of one of my programs again to give some specific examples.

T

--
Doubtless it is a good thing to have an open mind, but a truly open mind should be open at both ends, like the food-pipe, with the capacity for excretion as well as absorption. -- Northrop Frye
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 4:05 PM, H. S. Teoh via Digitalmars-d wrote:
 Maybe when I get some free time this week, I could look at the
 disassembly of one of my programs again to give some specific examples.
Please do.
Aug 18 2015
next sibling parent reply "Ivan Kazmenko" <gassa mail.ru> writes:
On Tuesday, 18 August 2015 at 23:30:26 UTC, Walter Bright wrote:
 On 8/18/2015 4:05 PM, H. S. Teoh via Digitalmars-d wrote:
 Maybe when I get some free time this week, I could look at the
 disassembly of one of my programs again to give some specific 
 examples.
Please do.
Sorry to repeat myself, but isn't https://issues.dlang.org/show_bug.cgi?id=11821 such an example? Perhaps other examples can be generated by examining the assembly output of some simple range-based programs. So what I am suggesting is a kind of test-driven approach. Just throw some random range stuff together, like

-----
import std.algorithm, std.range;

int main()
{
    return [0,1,4,9,16] . take(3) . filter!(q{a&1}) . front;
}
-----

and look at the generated assembly. For me, the above example did not inline some FilterResult lambda call. Isn't it the optimizer's fault in the end, one which can be addressed? Besides, there are quite a few other hits in Bugzilla when searching for "backend" or "performance".
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 5:07 PM, Ivan Kazmenko wrote:
 Sorry to repeat myself, but isn't
https://issues.dlang.org/show_bug.cgi?id=11821
 such an example?
Yes, absolutely.
Aug 18 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 5:27 PM, Walter Bright wrote:
 On 8/18/2015 5:07 PM, Ivan Kazmenko wrote:
 Sorry to repeat myself, but isn't
https://issues.dlang.org/show_bug.cgi?id=11821
 such an example?
Yes, absolutely.
I've re-marked it as an enhancement request.
Aug 18 2015
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 18, 2015 at 04:30:26PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/18/2015 4:05 PM, H. S. Teoh via Digitalmars-d wrote:
Maybe when I get some free time this week, I could look at the
disassembly of one of my programs again to give some specific
examples.
Please do.
Rather than try to reduce one of my programs into a self-contained test case, I decided instead to write a contrived range-based program that exemplifies the limitations of dmd as I have observed. The program code follows; the comments at the end describe the results, analysis, and my conclusions. I didn't include the assembly listings, as they're rather long, but if people are interested I'll trim them down to the parts of interest and post them in a follow-up post.

----------------------snip-----------------------
/* Crude benchmark for comparing dmd/gdc output. */
import std.algorithm : map, filter, reduce;
import std.conv : to;
import std.range : iota;
import std.stdio : writeln;

auto fun(int n)
{
    return iota(n)
        .map!(a => a*7)
        .filter!(a => a % 2 == 0)
        .reduce!((a,b) => a/2 + b);
}

void main()
{
    writeln(fun(100_000_000));
}

/* RESULTS:

Compiled with:
    dmd -release -O -inline test.d -oftest.dmd
    gdc -frelease -finline -O3 test.d -o test.gdc
(dmd git HEAD, gdc 5.2.1)

Execution times:

    % time test.dmd ; time test.gdc
    1399999944

    real    0m0.628s
    user    0m0.627s
    sys     0m0.001s
    1399999944

    real    0m0.168s
    user    0m0.167s
    sys     0m0.000s
    %

As can be seen, the executable produced by gdc runs about 3.7 times
faster than the executable produced by dmd. Why?

Looking at the disassembly, the first thing that stands out is that gdc
has inlined the call to writeln, whereas dmd calls a separate function.
While this isn't the bottleneck, it gives a hint of the things to come.

We look next at fun(), which in both cases is a standalone function.

The dmd version of fun() is pretty straightforward: it calls iota() to
create the range, then map(), then filter(), and finally reduce() where
most of the work takes place. This is pretty much a direct translation
of the code.

The gdc version of fun() is markedly different. We immediately notice
that the only function call in it is a call to enforce(). Not only have
the first-level calls to iota, map, filter, and reduce been inlined;
pretty much their *entire* call trees have been inlined, except for the
call to enforce().

Here I'd like to highlight the fact that in the dmd version of the
code, the call to iota() contains another function call, to the ctor of
the returned range. So we see that gdc has inlined two levels of
function calls where dmd has inlined none, even though one would expect
that with -inline, at least the call from iota() to the ctor of the
range should have been inlined, since it's the only place where that
ctor would be called, iota() itself being merely a thin wrapper around
it. (The ctor itself is also pretty simple; I'm not sure why dmd fails
to inline it.) Similar remarks apply to the calls to map() and filter()
as well.

Now let's look at reduce(), which is where the actual action takes
place. The dmd version, of course, involves a separate function call,
which in the grand scheme of things isn't all that important, since
it's only a single function call. However, a look at the disassembly of
reduce() shows that dmd has not inlined the calls to .empty, .front,
and .popFront. In fact, the function calls yet another function --
reduceImpl -- where the main loop sits. Inside this main loop, .empty,
.front, and .popFront are again called with no inlining -- even though
.empty has a trivial body, .front involves only 1 multiplication, and
.popFront only 1 multiplication and a single odd/even test. On top of
this, each of these nested function calls involves a certain amount of
boilerplate: twiddling with the stack registers, shuffling call
arguments about, etc., which adds up to quite a large overhead in
reduceImpl's inner loop.

The gdc version, by contrast, inlines *everything*, except the call to
enforce(), which is outside the inner loop. This aggressive inlining
allowed gdc to trim the loop body down to only about 18 instructions
with no function calls. While the dmd inner loop itself has only 15
instructions, it involves 3 function calls, with .front having 8
instructions, .empty also 8 instructions, and .popFront 13
instructions, making a total of 44 instructions per iteration. A
significant percentage of these instructions are function call
boilerplate. The entire inner loop in the gdc version would fit in
about 4-5 CPU cache lines, whereas the dmd version would require a lot
more.

To dmd's credit, it did manage to inline the nested calls in .empty,
.front, and .popFront, which would have involved more function calls
when no inlining at all is done (each wrapper range forwards the calls
to the next). This probably helped to reduce the cost of running their
respective function bodies. However, this isn't quite enough, since the
overhead of 3 function calls in the inner loop is pretty expensive when
the cost could have been eliminated completely, as gdc had done.

*/
----------------------snip-----------------------

T

--
Political correctness: socially-sanctioned hypocrisy.
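To make the comparison concrete: the fully inlined inner loop gdc ends up with is roughly what you get by hand-fusing the whole pipeline into a single loop. A C++ transliteration of fun() in that fused form (a sketch of what "inlining everything" buys, not actual compiler output):

```cpp
#include <cassert>

// Hand-fused equivalent of
//   iota(n).map!(a => a*7).filter!(a => a % 2 == 0).reduce!((a,b) => a/2 + b)
// with all the .empty/.front/.popFront machinery collapsed away.
// D's seedless reduce uses the first surviving element as the seed,
// which the `seeded` flag reproduces here.
long fun_fused(int n) {
    long acc = 0;
    bool seeded = false;
    for (int i = 0; i < n; ++i) {
        long v = static_cast<long>(i) * 7;  // map
        if (v % 2 != 0)                     // filter
            continue;
        if (!seeded) {                      // reduce
            acc = v;
            seeded = true;
        } else {
            acc = acc / 2 + v;
        }
    }
    return acc;
}
```

For n = 100_000_000 this produces the same 1399999944 as the D program above, with no calls at all in the loop body.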
Aug 20 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2015 4:56 PM, H. S. Teoh via Digitalmars-d wrote:
 I didn't include the assembly listings, as it's rather long, but if
 people are interested I'll trim them down to the parts of interest and
 post them in a follow-up post.
Thank you. This belongs as an enhancement request in bugzilla, using the 'performance' keyword. I don't think adding the assembler listings is necessary for this one, what you've posted here is sufficient.
Aug 20 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Aug 20, 2015 at 05:30:25PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/20/2015 4:56 PM, H. S. Teoh via Digitalmars-d wrote:
I didn't include the assembly listings, as it's rather long, but if
people are interested I'll trim them down to the parts of interest
and post them in a follow-up post.
Thank you. This belongs as an enhancement request in bugzilla, using the 'performance' keyword. I don't think adding the assembler listings is necessary for this one, what you've posted here is sufficient.
https://issues.dlang.org/show_bug.cgi?id=14943 T -- People tell me that I'm skeptical, but I don't believe them.
Aug 20 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2015 6:06 PM, H. S. Teoh via Digitalmars-d wrote:
 https://issues.dlang.org/show_bug.cgi?id=14943
Thanks!
Aug 20 2015
prev sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
 The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
Aug 20 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 21, 2015 at 01:20:25AM +0000, jmh530 via Digitalmars-d wrote:
 On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
Very likely, I'd say. IME dmd tends to give up inlining rather easily. This is very much something that needs to improve, since ranges in D are supposed to be a big selling point. Wouldn't want them to perform poorly compared to hand-written loops. Have you tried using gdc -O3 (or ldc) to see if there's a big difference? T -- Prosperity breeds contempt, and poverty breeds consent. -- Suck.com
Aug 20 2015
next sibling parent reply "Kagamin" <spam here.lot> writes:
On Friday, 21 August 2015 at 01:29:12 UTC, H. S. Teoh wrote:
 Have you tried using gdc -O3 (or ldc) to see if there's a big
 difference?
How do -Os and -march=native change the picture?
Aug 21 2015
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 21 August 2015 at 10:49, Kagamin via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Friday, 21 August 2015 at 01:29:12 UTC, H. S. Teoh wrote:

 Have you tried using gdc -O3 (or ldc) to see if there's a big
 difference?
How -Os and -march=native change the picture?
There's a paper somewhere about optimisations on Intel processors that says that -O2 produces overall better results than -O3 (I'll have to dig it out). In any case, -Ofast may give you better benchmark results because it permits cutting corners on IEEE and other standards. Also, -march=native is cheating, as you'll have the most unportable binary created from it. ;-)
Aug 21 2015
parent "Kagamin" <spam here.lot> writes:
On Friday, 21 August 2015 at 09:17:28 UTC, Iain Buclaw wrote:
 There's a paper somewhere about optimisations on Intel 
 processors that says that -O2 produces overall better results 
 than -O3 (I'll have to dig it out).
That being said, recently I compared the performance of the datetime library using different algorithms. One function of interest computes the year from a raw time: D1 had an implementation based on a loop (it iterated over years until it matched the source raw time), while current Phobos has an implementation without a loop, which carefully reduces the time to a year. I wrote two tests iterating over days and calling date-time conversion functions. The test which invoked yearFromDays directly showed that the implementation without a loop is faster, but the bigger test that called the full conversion between date and time showed that the version with a loop is faster by 5%. Quite unintuitive. Could it be due to cache problems? The function with the loop is smaller, but the whole executable is only 15kb, so it should fit in the processor cache entirely.
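The two strategies being compared look roughly like this (a C++ sketch; the loop-free version follows the well-known civil-from-days arithmetic and is only an approximation of what Phobos actually does):

```cpp
#include <cassert>

// Strategy 1: loop over whole years until the day count is consumed
// (the shape of the old D1 implementation). Only handles days >= 0.
int year_by_loop(long days) {  // days since 1970-01-01
    int y = 1970;
    for (;;) {
        bool leap = (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
        long len = leap ? 366 : 365;
        if (days < len)
            return y;
        days -= len;
        ++y;
    }
}

// Strategy 2: branch-free arithmetic over 400-year eras
// (the standard "civil from days" derivation).
int year_direct(long z) {  // z = days since 1970-01-01
    z += 719468;           // shift epoch to 0000-03-01
    long era = (z >= 0 ? z : z - 146096) / 146097;
    long doe = z - era * 146097;                                 // [0, 146096]
    long yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365;  // [0, 399]
    long y   = yoe + era * 400;
    long doy = doe - (365*yoe + yoe/4 - yoe/100);                // [0, 365]
    long mp  = (5*doy + 2) / 153;
    long m   = mp + (mp < 10 ? 3 : -9);                          // [1, 12]
    return static_cast<int>(y + (m <= 2));  // Jan/Feb belong to the next civil year
}
```

The loop version executes more instructions but is tiny and branch-predictable; the direct version is fixed-cost but longer, which may be what makes the benchmark results so counterintuitive.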
Aug 21 2015
prev sibling parent reply "Ivan Kazmenko" <gassa mail.ru> writes:
On Friday, 21 August 2015 at 01:29:12 UTC, H. S. Teoh wrote:
 On Fri, Aug 21, 2015 at 01:20:25AM +0000, jmh530 via 
 Digitalmars-d wrote:
 On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
 The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
Very likely, I'd say. IME dmd tends to give up inlining rather easily. This is very much something that needs to improve, since ranges in D are supposed to be a big selling point. Wouldn't want them to perform poorly compared to hand-written loops.
Yeah, ranges should ideally be a zero-cost abstraction, at least in trivial cases.
Aug 21 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 21, 2015 at 03:09:42PM +0000, Ivan Kazmenko via Digitalmars-d wrote:
 On Friday, 21 August 2015 at 01:29:12 UTC, H. S. Teoh wrote:
On Fri, Aug 21, 2015 at 01:20:25AM +0000, jmh530 via Digitalmars-d wrote:
On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
Very likely, I'd say. IME dmd tends to give up inlining rather easily. This is very much something that needs to improve, since ranges in D are supposed to be a big selling point. Wouldn't want them to perform poorly compared to hand-written loops.
Yeah, ranges should ideally be a zero-cost abstraction, at least in trivial cases.
Definitely. Fortunately, gdc (and probably ldc) seems quite capable of achieving this. It's just dmd that needs some improvement in this area. This will quickly become a major issue once we switch to ddmd and start making use of range-based code in the compiler, esp. since compiler performance has been one of the selling points of D. T -- Democracy: The triumph of popularity over principle. -- C.Bond
Aug 21 2015
prev sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Friday, 21 August 2015 at 01:20:27 UTC, jmh530 wrote:
 On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
 The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
ldc and gdc typically produce output nearly the same as handwritten loops for ranges.
Aug 20 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 21 August 2015 at 02:02:57 UTC, rsw0x wrote:
 On Friday, 21 August 2015 at 01:20:27 UTC, jmh530 wrote:
 On Friday, 21 August 2015 at 00:00:09 UTC, H. S. Teoh wrote:
 The gdc version, by contrast, inlines *everything*,
This could be why I've observed performance differentials in dmd for doing some manual for loops rather than using the stuff in std.algorithms.
ldc and gdc typically produce output nearly the same as handwritten loops for ranges.
Which is really what we need to be happening with ranges. The fact that they make code so much more idiomatic helps a _lot_, making code faster to write and easier to understand and maintain, but if we're taking performance hits from it, then we start losing out to C++ code pretty quickly, which is _not_ what we want. - Jonathan M Davis
Aug 21 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:57 PM, H. S. Teoh via Digitalmars-d wrote:
 like eliminating redundant loads
Turns out you were right. https://github.com/D-Programming-Language/dmd/pull/4906
Aug 18 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 August 2015 at 23:25, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 8/18/2015 1:47 PM, deadalnix wrote:

 Realistically, D does not have the man power required to reach the same
 level of
 optimization, and have many higher impact task to spend that manpower on.
dmd also does a sludge of patterns. I'm just looking for a few that would significantly impact the result.
Speculative devirtualization? http://hubicka.blogspot.de/2014/02/devirtualization-in-c-part-4-analyzing.html
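The idea in that link, in sketch form: when profiling (or analysis) says one receiver type dominates, guard on it and replace the indirect call with a direct, inlinable one. Hypothetical C++ below; a real compiler guards with a vtable-pointer comparison rather than dynamic_cast, and all the type names here are invented:

```cpp
#include <cassert>

struct Shape {
    virtual ~Shape() {}
    virtual long area() const = 0;
};

struct Square : Shape {
    long side;
    explicit Square(long s) : side(s) {}
    long area() const override { return side * side; }
};

struct Rect : Shape {
    long w, h;
    Rect(long w_, long h_) : w(w_), h(h_) {}
    long area() const override { return w * h; }
};

// Ordinary virtual call: indirect, opaque to the inliner.
long area_virtual(const Shape& s) { return s.area(); }

// Speculatively devirtualized: assume Square is the common case.
long area_speculative(const Shape& s) {
    if (const Square* sq = dynamic_cast<const Square*>(&s))
        return sq->side * sq->side;  // fast path: direct, inlined body
    return s.area();                 // slow path: fall back to virtual call
}
```

When the guess holds, the branch predicts well and the virtual-call overhead (and the inlining barrier) disappears; when it doesn't, the slow path keeps the code correct.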
Aug 18 2015
prev sibling next sibling parent Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 08/18/2015 10:28 PM, Walter Bright wrote:
 On 8/18/2015 12:38 PM, deadalnix wrote:
 And honestly, there is no way DMD can catch up.
I find your lack of faith disturbing.
I don't doubt we can catch up, and it might be worth it for the very low-hanging fruit. But our real problem isn't the backend; it's the amount of bugs and the small ecosystem compared with other languages like Go, Scala, Swift, or even Rust. This is a simple matter of priorities. Whether or not D succeeds won't depend on SETcc vs. SBB, but it does depend on us delivering the most useful ecosystem. It's so much more important to move forward with https://trello.com/c/YoAFvV5n/6-313-314 or https://trello.com/c/1dQh4gxm/35-drop-property, or to improve backtraces https://trello.com/c/FbuWfpVE/54-backtraces-with-line-numbers, no matter how much these codegen inefficiencies tickle your ambition.

-Martin
Aug 23 2015
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 18/08/2015 21:28, Walter Bright wrote:
 On 8/18/2015 12:38 PM, deadalnix wrote:
 And honestly, there is no way DMD can catch up.
I find your lack of faith disturbing. https://www.youtube.com/watch?v=Zzs-OvfG8tE&feature=player_detailpage#t=91
My instinct also tells me it's extremely unlikely that DMD will be able to catch up. But regardless of that, let's suppose it does catch up, that you (and/or others) are eventually able to make the DMD backend as good as LLVM/GCC. At what cost (development-time wise) will that come? How many big chunks of development effort will be spent on that task, that could be spent on improving other areas of D, just so that DMD could be about as good (not better, just *as good*) as LDC/GDC?...

--
Bruno Medeiros
https://twitter.com/brunodomedeiros
Aug 27 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 27 August 2015 at 14:12:01 UTC, Bruno Medeiros wrote:
 On 18/08/2015 21:28, Walter Bright wrote:
 On 8/18/2015 12:38 PM, deadalnix wrote:
 And honestly, there is no way DMD can catch up.
I find your lack of faith disturbing. https://www.youtube.com/watch?v=Zzs-OvfG8tE&feature=player_detailpage#t=91
My instinct also tells me it's extremely unlikely that DMD will be able to catch. But regardless of that, let's suppose it does catch up, that you (and/or others) are eventually able to make the DMD backend as good as LLVM/GCC. At what cost (development time wise) will that come? How much big chunks of development effort will be spent on that task, that could be spent on improving other areas of D, just so that DMD could be about as good (not better, just *as good*), as LDC/GDC?...
Honestly, while I don't see why dmd couldn't catch up to gdc and ldc if enough development time were sunk into it, I seriously question that dmd can catch up without way too much development time being sunk into it. And if ldc and gdc are ultimately the compilers that folks should be using if they want the best performance, then so be it. But if dmd can be sped up so that it's closer and there's less need to worry about the speed difference for most folks, then I think that that's a big win. Every little bit of performance improvement that we can get out of dmd is an improvement, especially when those improvements come at minimal cost, and I see no reason to not improve dmd's performance where it's not going to be a huge timesink to do so and where the appropriate precautions are taken to avoid regressions. - Jonathan M Davis
Aug 27 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 27 August 2015 at 17:59:48 UTC, Jonathan M Davis 
wrote:
 that's a big win. Every little bit of performance improvement 
 that we can get out of dmd is an improvement, especially when 
 those improvements come at minimal cost, and I see no reason to 
 not improve dmd's performance where it's not going to be a huge 
 timesink to do so and where the appropriate precautions are 
 taken to avoid regressions.
Actually, if you think about PR, it does not matter. It might even be better to perform 50% worse and call it a development mode than to perform 25% worse in release mode... How about putting effort into making it easier to hook multiple backends into the same binary instead? People won't complain if they can toss in "-O" and get LLVM backend output from the same compiler.
Aug 27 2015
prev sibling next sibling parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
wrote:
 I suggest that we revamp the compiler download page again. The 
 lead should be a "select your compiler" which lists the 
 advantages and disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Aug 18 2015
parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev 
wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
 wrote:
 I suggest that we revamp the compiler download page again. The 
 lead should be a "select your compiler" which lists the 
 advantages and disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Aug 22 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Aug 23, 2015 at 12:03:25AM +0000, Vladimir Panteleev via Digitalmars-d
wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote:
I suggest that we revamp the compiler download page again. The lead
should be a "select your compiler" which lists the advantages and
disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Um... why is the GNU icon twice the size of the others? T -- The best way to destroy a cause is to defend it poorly.
Aug 22 2015
parent "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Sunday, 23 August 2015 at 03:37:17 UTC, H. S. Teoh wrote:
 On Sun, Aug 23, 2015 at 12:03:25AM +0000, Vladimir Panteleev 
 via Digitalmars-d wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev 
 wrote:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir 
Panteleev wrote:
I suggest that we revamp the compiler download page again. 
The lead should be a "select your compiler" which lists the 
advantages and disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Um... why is the GNU icon twice the size of the others?
Refresh, your browser cached style.css.
Aug 23 2015
prev sibling next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Aug 23, 2015 at 12:03:25AM +0000, Vladimir Panteleev via Digitalmars-d
wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote:
I suggest that we revamp the compiler download page again. The lead
should be a "select your compiler" which lists the advantages and
disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
What about one of the figures from: http://eusebeia.dyndns.org/~hsteoh/tmp/mascot.png for the DMD icon? (Or any other pose that you might suggest -- I still have the povray files and can do a render in a different post.) T -- It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
Aug 22 2015
prev sibling next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 August 2015 at 05:33, H. S. Teoh via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Sun, Aug 23, 2015 at 12:03:25AM +0000, Vladimir Panteleev via
 Digitalmars-d wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote:
On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote:
I suggest that we revamp the compiler download page again. The lead
should be a "select your compiler" which lists the advantages and
disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Um... why is the GNU icon twice the size of the others?
[mutter, mutter] ... Lord of Mordor ... [mutter, mutter]
Aug 22 2015
prev sibling next sibling parent "Joakim" <dlang joakim.fea.st> writes:
On Sunday, 23 August 2015 at 00:03:27 UTC, Vladimir Panteleev 
wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev 
 wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev 
 wrote:
 I suggest that we revamp the compiler download page again. 
 The lead should be a "select your compiler" which lists the 
 advantages and disadvantages of each of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Nice, this is much better, with actual info on which compiler is best for the user.
Aug 23 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/22/2015 5:03 PM, Vladimir Panteleev wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote:
 I suggest that we revamp the compiler download page again. The lead should be
 a "select your compiler" which lists the advantages and disadvantages of each
 of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Aren't i386 and x32 the same platforms?
Aug 23 2015
next sibling parent reply "Liam McSherry" <mcsherry.liam gmail.com> writes:
On Sunday, 23 August 2015 at 08:01:27 UTC, Walter Bright wrote:
 On 8/22/2015 5:03 PM, Vladimir Panteleev wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev 
 wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir 
 Panteleev wrote:
 I suggest that we revamp the compiler download page again. 
 The lead should be
 a "select your compiler" which lists the advantages and 
 disadvantages of each
 of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Aren't i386 and x32 the same platforms?
Might be referring to this: https://en.wikipedia.org/wiki/X32_ABI
Aug 23 2015
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 August 2015 at 10:11, Liam McSherry via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Sunday, 23 August 2015 at 08:01:27 UTC, Walter Bright wrote:

 On 8/22/2015 5:03 PM, Vladimir Panteleev wrote:

 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote:

 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote:

 I suggest that we revamp the compiler download page again. The lead
 should be
 a "select your compiler" which lists the advantages and disadvantages
 of each
 of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Aren't i386 and x32 the same platforms?
Might be referring to this: https://en.wikipedia.org/wiki/X32_ABI
It is referring to the X32 ABI.
Aug 23 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 23 August 2015 at 08:01:27 UTC, Walter Bright wrote:
 On 8/22/2015 5:03 PM, Vladimir Panteleev wrote:
 On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev 
 wrote:
 On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir 
 Panteleev wrote:
 I suggest that we revamp the compiler download page again. 
 The lead should be
 a "select your compiler" which lists the advantages and 
 disadvantages of each
 of DMD, LDC and GDC.
https://github.com/D-Programming-Language/dlang.org/pull/1067
Now live on http://dlang.org/download.html Better artwork welcome :)
Aren't i386 and x32 the same platforms?
No, x32 is basically amd64 with 32-bit pointers. You can use all the other features of amd64, like the extended number of registers.
Aug 23 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/23/2015 11:42 AM, deadalnix wrote:
 No, x32 is basically amd64 with 32 bits pointers. You can use all other
 functions of amd64, like the extended number of registers.
Makes sense.
Aug 23 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 23/08/15 22:12, Walter Bright wrote:
 On 8/23/2015 11:42 AM, deadalnix wrote:
 No, x32 is basically amd64 with 32 bits pointers. You can use all other
 functions of amd64, like the extended number of registers.
Makes sense.
At least right now, it sounds more useful than it actually is. Since this is a different ABI, it actually requires its own version of the libraries of everything, kernel support, etc. As of now, no one seriously uses it, and the cost of carrying two ABIs makes it implausible, in my opinion, that this will change in the future. To add insult to injury, Debian switched it off in their default kernel compilation, as they claim they don't want the extra exposure to security exploits through code that few test (this has saved them from at least one kernel security vulnerability since that decision was taken). Also, I haven't seen any hard numbers on how much, if at all, this saves in space (likely a little) and in runtime efficiency (likely even less). If those numbers are as unimpressive as I expect them to be, expect this platform to die without gaining much traction. Shachar The difference between theory and practice is that, in theory, there is no difference between theory and practice. -- Yogi Berra
Aug 28 2015
parent reply "Temtaime" <temtaime gmail.com> writes:
So comparing to LLVM, the idea of an optimizing backend comes with 
these problems:

1) LLVM optimizes code much better than DMD for now. And it's in 
active development, so it will always be far ahead of DMD.
2) LLVM has over 120k commits; it has much financial investment 
from Google and Apple. Are you sure that ONE Walter can achieve 
what they have done?
3) LLVM supports many platforms, while DMD will never support 
anything other than x86.
4) LLVM has free licensing, while DMD's backend does not.
5) Changing the backend often causes weird regressions to appear, 
which makes users afraid.

I think it's really better to fix current bugs than to waste the 
time.
Aug 28 2015
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 28-Aug-2015 14:18, Temtaime wrote:
 So comparing to llvm the idea of optimizing backend comes with:

 1) LLVM optimizes code much better than DMD for now. And it's in active
 development, so it always will be far from DMD.
In producing better code, it very well may be. Faster - I don't think so.
 2) LLVM has over 120k commits, it has many financial investments from
 Google and Apple. Are you sure that ONE Walter can achieve what they done ?
 3) LLVM supports many platforms while DMD will never support anything
 different from x86.
We do not need to do the new LLVM thing. That destroys the rest of the argument.
 I think it's really better to fix current bugs than waste the time.
Fixing bugs is certainly of higher priority, but we need them found and reported first. -- Dmitry Olshansky
Aug 28 2015
next sibling parent reply "Temtaime" <temtaime gmail.com> writes:
Do we need the fastest backend, or the most optimizing one? 
Making it optimize better means it will become more complicated 
and slower. Don't touch it, then.

There are TONS of bugs. For example, this one from 2008:
https://issues.dlang.org/show_bug.cgi?id=2043
Aug 28 2015
parent reply "Temtaime" <temtaime gmail.com> writes:
Currently there are 3948 issues reported.
There are 24K threads and 83K posts in « Issues », while « 
Learn » has only 11K and 74K.

I can say that the backend is not a problem at all for now.
Many bugs are ugly.
Aug 28 2015
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 28-Aug-2015 15:37, Temtaime wrote:
 Currently there are 3948 issues reported.
Specifically for DMD, including enhancements - 2569; without enhancements - 1574. Having around 1K open issues is quite typical for a product that is actually being used.
 I can say than backend is not a problem at all for now.
 Many bugs are ugly.
Fixing one will make that list shorter. -- Dmitry Olshansky
Aug 28 2015
prev sibling parent reply "Jack Stouffer" <jack jackstouffer.com> writes:
On Friday, 28 August 2015 at 11:53:20 UTC, Dmitry Olshansky wrote:
 On 28-Aug-2015 14:18, Temtaime wrote:
 So comparing to llvm the idea of optimizing backend comes with:

 1) LLVM optimizes code much better than DMD for now. And it's 
 in active
 development, so it always will be far from DMD.
In producing better code, it very well may be. Faster - I don't think so.
 2) LLVM has over 120k commits, it has many financial 
 investments from
 Google and Apple. Are you sure that ONE Walter can achieve 
 what they done ?
 3) LLVM supports many platforms while DMD will never support 
 anything
 different from x86.
We do not need to do the new LLVM thing. That destroys the rest of the argument.
You ignored probably the most important point in that post. Getting DMD to work on ARM would be a huge undertaking, probably so large that I don't think it will ever happen. This is a huge bummer because this essentially means getting D on phones or cheap computers is a pipe dream. LDC's ARM branch shows that it can be done with LLVM as a back end. It doesn't support 64 bit and I believe that exception handling still doesn't work, but if there was more man power dedicated to it, it would be possible.
Aug 28 2015
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 28 August 2015 at 15:01:32 UTC, Jack Stouffer wrote:
 This is a huge bummer because this essentially means getting D 
 on phones or cheap computers is a pipe dream.
D has worked on ARM for a long time. gdc supports it well, and I'm skeptical that it would be all that hard for dmd to do it too.... but since gdc already works, I'm meh at spending the time on it.
Aug 28 2015
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 28-Aug-2015 18:01, Jack Stouffer wrote:
 On Friday, 28 August 2015 at 11:53:20 UTC, Dmitry Olshansky wrote:
 On 28-Aug-2015 14:18, Temtaime wrote:
 So comparing to llvm the idea of optimizing backend comes with:

 1) LLVM optimizes code much better than DMD for now. And it's in active
 development, so it always will be far from DMD.
In producing better code, it very well may be. Faster - I don't think so.
 2) LLVM has over 120k commits, it has many financial investments from
 Google and Apple. Are you sure that ONE Walter can achieve what they
 done ?
 3) LLVM supports many platforms while DMD will never support anything
 different from x86.
We do not need to do the new LLVM thing. That destroys the rest of the argument.
You ignored probably the most important point in that post. Getting DMD to work on ARM would be a huge undertaking, probably so large that I don't think it will ever happen. This is a huge bummer because this essentially means getting D on phones or cheap computers is a pipe dream.
Have you ever written a backend? What is the evidence? Consider that x86-64 support was done in about a year and a half by Walter single-handedly, and that without freezing the other activity on DMD, of course. Aside from emitting different sequences of instructions, most IR-based optimizations stay the same. -- Dmitry Olshansky
Aug 28 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/28/2015 9:49 AM, Dmitry Olshansky wrote:
 Have you ever written a backend? What is the evidance?

 Consider that x86 x64 bit support was done in about one year and a half by
 Walter single-handedly that is without freezing the other activity on DMD, of
 course. Aside from emitting different sequences of instructions most IR-based
 optimizations stay the same.
Doing an ARM back end wouldn't be that hard. It's much less complex than x86. Most of the work would be deleting about half of the x86 code generator :-)
Aug 28 2015
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 29-Aug-2015 01:05, Walter Bright wrote:
 On 8/28/2015 9:49 AM, Dmitry Olshansky wrote:
 Have you ever written a backend? What is the evidance?

 Consider that x86 x64 bit support was done in about one year and a
 half by
 Walter single-handedly that is without freezing the other activity on
 DMD, of
 course. Aside from emitting different sequences of instructions most
 IR-based
 optimizations stay the same.
Doing an ARM back end wouldn't be that hard. It's much less complex than x86. Most of the work would be deleting about half of the x86 code generator :-)
Yeah, I guess the things to provision for are register count + some peculiarities of allowed moves/stores, etc. Doing the first 64-bit codegen was a difficult task, compared to doing another 32-bit one. -- Dmitry Olshansky
Aug 29 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/29/2015 12:30 AM, Dmitry Olshansky wrote:
 Doing the first 64-bit codegen was a difficult task, compared to doing another
 32-bit one.
The main problem with the 64 bit x86 was the endlessly confusing non-orthogonality of it. The Win64 port was bad because of the bizarro calling convention they invented.
Aug 29 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 29 Aug 2015 12:10 am, "Walter Bright via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 8/28/2015 9:49 AM, Dmitry Olshansky wrote:
 Have you ever written a backend? What is the evidance?

 Consider that x86 x64 bit support was done in about one year and a half
by
 Walter single-handedly that is without freezing the other activity on
DMD, of
 course. Aside from emitting different sequences of instructions most
IR-based
 optimizations stay the same.
Doing an ARM back end wouldn't be that hard. It's much less complex than
x86. Most of the work would be deleting about half of the x86 code generator :-)

Don't forget you have about 100_000_000 distinct ABIs, with about
100_000_000 distinct CPUs/boards to target.

;-)
Aug 29 2015
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 28 August 2015 at 11:18:57 UTC, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
Yes. A LOT of significant projects are primarily one-person jobs. Even the hard parts can be replicated by one person after a team blazes the path, since they can just observe the successes and failures of the others and skip the research steps. Even on a team, there's often one person who does the bulk of the work for any section of it; a really large project might just be a bunch of basically independent pieces that happen to fit together.
Aug 28 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/28/2015 8:10 AM, Adam D. Ruppe wrote:
 On Friday, 28 August 2015 at 11:18:57 UTC, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
Yes. A LOT of significant projects are primarily one-person jobs. Even the hard parts can be replicated by one person after a team blazes the path since they can just observe the successes and failures out of the others and skip the research steps. Even on a team, there's often one person who does the bulk of the work for any section of it; a really large project might just be a bunch of basically independent pieces that happen to fit together.
Sometimes, adding more manpower just makes progress slower.
Aug 28 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 28, 2015 at 03:07:19PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/28/2015 8:10 AM, Adam D. Ruppe wrote:
On Friday, 28 August 2015 at 11:18:57 UTC, Temtaime wrote:
Are you sure that ONE Walter can achieve what they done ?
Yes. A LOT of significant projects are primarily one-person jobs. Even the hard parts can be replicated by one person after a team blazes the path since they can just observe the successes and failures out of the others and skip the research steps. Even on a team, there's often one person who does the bulk of the work for any section of it; a really large project might just be a bunch of basically independent pieces that happen to fit together.
Sometimes, adding more manpower just makes progress slower.
https://en.wikipedia.org/wiki/The_Mythical_Man-Month T -- Famous last words: I wonder what will happen if I do *this*...
Aug 28 2015
prev sibling next sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Friday, 28 August 2015 at 11:18:57 UTC, Temtaime wrote:
 4) LLVM has free licensing, while DMD's backend is not.
This is probably the biggest: you can't even legally redistribute dmd, therefore Linux distros can't include it in their repos.
Aug 28 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 28 August 2015 at 19:10:56 UTC, rsw0x wrote:
 On Friday, 28 August 2015 at 11:18:57 UTC, Temtaime wrote:
 4) LLVM has free licensing, while DMD's backend is not.
this is probably the biggest, you can't even legally redistribute dmd therefore linux distros can't include it in their repos
That is by far one of the biggest problems D has.
Aug 28 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
Aug 28 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 28 August 2015 at 21:59:57 UTC, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
Ok, let's be very clear. It CAN be done (I even told you what would be the most impactful thing to do to move in that direction). The question is, is it worth it? I mean, D has various issues, and speed is not one of them.
Aug 28 2015
parent "rsw0x" <anonymous anonymous.com> writes:
On Friday, 28 August 2015 at 23:38:47 UTC, deadalnix wrote:
 On Friday, 28 August 2015 at 21:59:57 UTC, Walter Bright wrote:
 [...]
Ok, let's be very clear. It CAN be done (I even told you what would be the most impactful thing to do to move in that direction). The question is, is it worth it? I mean, D has various issues, and speed is not one of them.
+1. D is probably the only language that rivals C++ speeds while using it completely idiomatically.
Aug 28 2015
prev sibling next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 28 August 2015 at 21:59:57 UTC, Walter Bright wrote:
 My whole career is built on stepping over people who told me I 
 couldn't do anything and wouldn't amount to anything.
You should feel proud. There's something to be said for going forward regardless of what others say. I feel like I would have been fired years ago if I just disregarded what everyone said and did what I thought was right.
Aug 28 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/28/2015 8:49 PM, jmh530 wrote:
 You should feel proud. There's something to be said for going forward
regardless
 of what others say. I feel like I would have been fired years ago if I just
 disregarded what everyone said and did what I thought was right.
If management won't back you up, you're working for the wrong outfit anyway.
Aug 29 2015
prev sibling next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 29/08/15 9:59 AM, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
Humm Walter, wanna start up trash talking banter? I also work better when people say I can't do X.
Aug 28 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 29 August 2015 at 04:17:30 UTC, Rikki Cattermole 
wrote:
 On 29/08/15 9:59 AM, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
Humm Walter, wanna start up trash talking banter? I also work better when people say I can't do X.
I bet none of you can implement SROA in DMD.
Aug 28 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 29/08/15 4:19 PM, deadalnix wrote:
 On Saturday, 29 August 2015 at 04:17:30 UTC, Rikki Cattermole wrote:
 On 29/08/15 9:59 AM, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
Humm Walter, wanna start up trash talking banter? I also work better when people say I can't do X.
I bet none of you can implement SROA in DMD.
I'll see what I can do. Remind me in 10 years please.
Aug 28 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/28/2015 9:19 PM, deadalnix wrote:
 I bet none of you can implement SROA in DMD.
I know it's supposed to be advanced technology, but it's pretty simple. Just look for aggregate instances which are only accessed on register boundaries, and don't have the address taken. Then slice them up into separate register-sized variables, and re-run the optimizer. Voila!
Aug 29 2015
prev sibling next sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Friday, 28 August 2015 at 21:59:57 UTC, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
I really like your attitude! That's, IMHO, one of the strongest selling points for D itself! --- Paolo
Aug 29 2015
prev sibling next sibling parent reply "Casual D user" <none none.com> writes:
On Friday, 28 August 2015 at 21:59:57 UTC, Walter Bright wrote:
 On 8/28/2015 4:18 AM, Temtaime wrote:
 Are you sure that ONE Walter can achieve what they done ?
People told me I couldn't write a C compiler, then told me I couldn't write a C++ compiler. I'm still the only person who has ever implemented a complete C++ compiler (C++98). Then they all (100%) laughed at me for starting D, saying nobody would ever use it. My whole career is built on stepping over people who told me I couldn't do anything and wouldn't amount to anything. LLVM is a fine compiler, but there's nothing magical about it. Besides, we have a secret productivity enhancing weapon that LLVM doesn't have - D! Now, if I can only tear myself away from the internet for a while...
The problem is that you're pretty much the face of D along with Andrei. Andrei announcing he was quitting Facebook to work on D full-time was one of the most popular articles on Reddit's programming subreddit in the past month.

Someone picks up D, and realizes that out of the box it has a full stop-the-world 1960s-style garbage collector completely wrapped in a mutex, can't inline constructors/destructors, basically non-functioning RTTI, no safe way to manage resources, a type system with massive holes in it, type qualifiers that are mere suggestions, and the non-proprietary compilers that generate faster code lag a year+ behind. Even more than this, D has no real IDE integration like C++ or Java, and none is even being worked on as far as I'm aware. D is advertised as a systems language, but most of the built-in language features require the GC, so you might as well just use C if you can't use the GC. There are other things I can't remember right now.

Then they come to the forums and see the head people of D working on ... DMD codegen improvements. That inspires a lot of confidence that these issues will get fixed beyond fixing them yourself - because that's what everyone adopting a new language wants to do.

Do you know what the most common complaints about D in the reddit thread were? D's incredibly old garbage collector, a complete lack of a good IDE, and a lack of good manual memory management utilities.

I'm not blaming you, I'm just not sure if you're aware of what this looks like. If you intend for D to be a hobby project, then continue on.
Aug 29 2015
next sibling parent reply "welkam" <wwwelkam gmail.com> writes:
On Saturday, 29 August 2015 at 14:44:01 UTC, Casual D user wrote:

 Then they come to the forums and see the head people of D 
 working on ... DMD codegen improvements. That inspires a lot of 
 confidence that these issues will get fixed beyond fixing them 
 yourself - because that's what everyone adopting a new language 
 wants to do.

 Do you know what the most complaints about D in the reddit 
 thread were? D's incredibly old garbage collector, a complete 
 lack of a good IDE, and a lack of good manual memory management 
 utilities.

 I'm not blaming you, I'm just not sure if you're aware of what 
 this looks like. If you intend for D to be a hobby project, 
 then continue on.
I just want to make sure that you understand that he was asking for low-hanging optimization opportunities that could be implemented in a few hours of work.
Aug 29 2015
parent "cym13" <cpicard openmailbox.org> writes:
On Saturday, 29 August 2015 at 18:10:33 UTC, welkam wrote:
 On Saturday, 29 August 2015 at 14:44:01 UTC, Casual D user 
 wrote:

 Then they come to the forums and see the head people of D 
 working on ... DMD codegen improvements. That inspires a lot 
 of confidence that these issues will get fixed beyond fixing 
 them yourself - because that's what everyone adopting a new 
 language wants to do.

 Do you know what the most complaints about D in the reddit 
 thread were? D's incredibly old garbage collector, a complete 
 lack of a good IDE, and a lack of good manual memory 
 management utilities.

 I'm not blaming you, I'm just not sure if you're aware of what 
 this looks like. If you intend for D to be a hobby project, 
 then continue on.
I just want to make sure that you understand that he was asking for low hanging optimization opportunities that could be implemented in few hours of work?
All the more so if he doesn't; that's the trick with impressions: they don't have to be true to have an effect. If that's what it looks like to a newcomer, then it is a problem.
Aug 29 2015
prev sibling next sibling parent reply "Laeeth Isharc" <spamnolaeeth nospamlaeeth.com> writes:
On Saturday, 29 August 2015 at 14:44:01 UTC, Casual D user wrote:
 Someone picks up D, and realizes that out of the box it has a 
 full stop the world 1960s-style garbage collector completely 
 wrapped in a mutex, can't inline constructors/destructors, 
 basically non-functioning RTTI, no safe way to manage 
 resources, a type system with massive holes in it, type 
 qualifiers being suggestions, the non-proprietary compilers 
 that generate faster code lag a year+ behind.
Seems a bit harsh. As a 'casual D user', you're criticizing Walter for not having dmd inline constructors and destructors at the same time as criticizing him for working on codegen. And of course it seems like LDC and GDC do inline them, so if it matters to you, you can use them. Bear in mind that many people seem to be happy enough with languages that are significantly slower than dmd. Of course one will hear disproportionately from people who aren't happy (and fair enough), because if you're happy you let things be. It's not like LDC and GDC are unusable or missing some super-critical features just because it takes some time for them to be kept up to date. And if that matters, then a little help for those teams might go a long way, as they have a tough job to accomplish with limited resources. You might also be more rhetorically effective if you acknowledged the very real improvements that have taken place just in the past year. Sending a rocket isn't always the best way to achieve one's ends. http://www.artofmanliness.com/2010/12/21/classical-rhetoric-101-the-three-means-of-persuasion/
 Even more than this, D has no real IDE integration like C++ or 
 Java
One needs an IDE a bit less than for Java, I suppose. Since there are people working on IDEs and on IDE integration here, some constructive criticism of what you would like to see might again be more helpful than just pretending nobody is trying.
 is even being worked on as far as I'm aware. D is advertised as 
 a system's language, but most of the built-in language features 
 require the GC so you might as well just use C if you can't use 
 the GC
Strange then that people who don't depend on the GC seem to like D's features anyway. I wonder why that is.
 Do you know what the most complaints about D in the reddit 
 thread were? D's incredibly old garbage collector, a complete 
 lack of a good IDE, and a lack of good manual memory management 
 utilities.
One needs to pay some attention to critics, because good advice is hard to come by. But it's a mistake to take what they say too seriously, because quite often it's more of an excuse than the real reason. In my experience you can deliver everything people say they want, and then find it isn't that at all. And the lesson of The Innovator's Dilemma by Christensen is that it may be better to develop what one does really well than to focus all one's energy on fixing perceived weaknesses. It's not like what the crowd says on reddit must be taken as gospel truth, really.
Aug 29 2015
next sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 29 August 2015 at 19:37:33 UTC, Laeeth Isharc wrote:
 it takes some time for them to be kept up to date.  And if that 
 matters, then a little help for those teams might go a long way 
 as they have a tough job to accomplish with limited resources.
If it is a toy language with a hobby development process, then it does not warrant more resources. Walter is entitled to do what is fun and rewarding to him, obviously. If working on his proprietary backend is important to him, then he should do so. Nobody has a right to question that. But the net effect of maintaining 3 different backends is sending signals that the project lacks direction and priorities. Why would anyone commit resources to a project that lacks direction?
Aug 29 2015
next sibling parent reply "Laeeth Isharc" <Laeeth.nospam nospam-laeeth.com> writes:
On Saturday, 29 August 2015 at 20:13:57 UTC, Ola Fosheim Grostad 
wrote:
 On Saturday, 29 August 2015 at 19:37:33 UTC, Laeeth Isharc 
 wrote:
 it takes some time for them to be kept up to date.  And if 
 that matters, then a little help for those teams might go a 
 long way as they have a tough job to accomplish with limited 
 resources.
If it is a toy language with a hobby development process then it does not warrant more resources.
 Walter is entitled to do what is fun and rewarding to him, 
 obviously. If working on his propriatory backend is important 
 to him, then he should do so. Nobody has a right to question 
 that.

 But the net effect of maintaining 3 different backends is 
 sending signals that the project lacks direction and priorities.

 Why would anyone commit resources to a project that lacks 
 direction?
We are all entitled to our opinion. It's my experience that people tend to listen more to those who show themselves to be generally friendly and encouraging than those in whose eyes one doesn't seem to be able to do anything right. D doesn't strike me as a language, project or community lacking in direction, particularly given recent developments. I suspect resources of all sorts will come in time. Toy languages aren't used by the sorts of people that have built their businesses around D. I don't think you do yourself any favours by adopting the tone you do. It's disrespectful and unconstructive. In any case, I don't wish to divert attention from what's important, so I won't say anything more on this topic.
Aug 29 2015
next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Saturday, 29 August 2015 at 20:36:33 UTC, Laeeth Isharc wrote:
 Toy languages aren't used by the sorts of people that have 
 built their businesses around D.  I don't think you do yourself 
 any favours by adopting the tone you do.  It's disrespectful 
 and unconstructive.
I've only been following this forum for a few months. I see lots of comments about the same things over and over, at various levels of constructiveness. At this point, I just sort of tune them all out. I imagine it's more frustrating for the core development team... Maybe it would be helpful to put up a sticky acknowledging some of these perennial topics/criticisms and noting that everyone working on the language already knows about them? You could at least point people in the direction of the sticky this way. Pretty much every other forum I go to has some kind of forum-rules sticky anyway.
Aug 29 2015
prev sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 29 August 2015 at 20:36:33 UTC, Laeeth Isharc wrote:
 doesn't seem to be able to do anything right.  D doesn't strike 
 me as a language, project or community lacking in direction, 
 particularly given recent developments.  I suspect resources of 
 all sorts will come in time.
When people work on FOUR compilers then you cannot complain about lack of resources. You then need to see if you can do something to unite efforts.
 Toy languages aren't used by the sorts of people that have 
 built their businesses around D.  I don't think you do yourself 
 any favours by adopting the tone you do.  It's disrespectful 
 and unconstructive.
I am not calling D a toy language; other people do, and you have to come to terms with that. D has rough edges and is in an incomplete state. This has to be acknowledged rather than glossed over; glossing over it is dishonest and gives developers too high expectations. That is disrespectful to potential adopters. And that is why these complaints resurface with a high-pitched delivery.
 In any case, I don't wish to divert attention from what's 
 important, so I won't say anything more on this topic.
Having a proprietary backend in the core product is a liability in terms of creating a foundation. That actually is important. It actually would be better to reimplement a new D backend from scratch.
Aug 29 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 30 August 2015 at 01:48:06 UTC, Ola Fosheim Grostad 
wrote:
 When people work on FOUR compilers then you cannot complain 
 about lack of resources. You then need to see if you can do 
 something to unite efforts.
They aren't really four compilers. It is more like 2.1. sdc is a separate project, but dmd, ldc, and gdc share like 90% of the effort, so it is more realistic to call them 1.1 compilers rather than 3...
Aug 29 2015
next sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Sunday, 30 August 2015 at 02:16:13 UTC, Adam D. Ruppe wrote:
 They aren't really four compilers. It is more like 2.1. sdc is 
 a separate project, but dmd, ldc, and gdc share like 90% of the 
 effort, so it is more realistic to call them 1.1 compilers 
 rather than 3...
Then why are they trailing the main compiler if they represent an insignificant effort? Even at 10% it would be a lot for a 10+ year old project. I also think it is higher than that, accumulated over time (sans Phobos). Having a non-free, undocumented backend discourages people who think backend work is fun from contributing to the main compiler. Doing some open source backend work on a more lightweight, well documented backend could be fun, I would think. If you as a leader feel you lack resources, you essentially have to give up some of your own for the greater good. If you end up having to do all the work yourself, then you need to rethink your strategy, because that strategy is not sustainable.
Aug 29 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 30 August 2015 at 02:34:46 UTC, Ola Fosheim Grostad 
wrote:
 Then why are they trailing the main compiler if they represent 
 an insignificant effort?
In some areas, they are ahead. gdc is directly responsible for the D debugging experience on Linux, for example. But they also have fewer than 10% of the contributors.
Aug 29 2015
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Sunday, 30 August 2015 at 03:04:28 UTC, Adam D. Ruppe wrote:
 On Sunday, 30 August 2015 at 02:34:46 UTC, Ola Fosheim Grostad 
 wrote:
 Then why are they trailing the main compiler if they represent 
 an insignificant effort?
In some areas, they are ahead. gdc is directly responsible for the D debugging experience on Linux, for example. But they also have fewer than 10% of the contributors.
Number of contributors does not say all that much. It is competence and time that matter. For a project that is in perpetual beta, leaders need to show their priorities. Here is a good list:

1. Complete the language specification (define semantics).
2. Implement and polish semantics.
3. Clean up syntax.
4. Tooling.
5. Performance.

As a leader you should create a frame for others to fill in. That means you cannot afford to focus your effort on point 5; doing so essentially means you resign the role of project lead. Enabling others to work on point 5 would be completely ok...
Aug 29 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/29/2015 9:16 PM, Ola Fosheim Grostad wrote:
 Here is a good list:
 [...]
 5. Performance.
Ironically, you guys complained in this thread when that gets worked on.
Sep 02 2015
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 2 September 2015 at 19:05:32 UTC, Walter Bright 
wrote:
 On 8/29/2015 9:16 PM, Ola Fosheim Grostad wrote:
 Here is a good list:
 [...]
 5. Performance.
Ironically, you guys complained in this thread when that gets worked on.
I agree that having a native D backend is a good thing. In fact, I'd very much like to see WebAssembly/asm.js codegen built around a backend that creates compact builds, since download size is an issue. Which is a different kind of "performance". I just don't see how I could use the current backend to achieve it. Maybe with your experience you could at some point in the future lay the foundation for a new, free backend that is more minimalistic than LLVM, but that also could be used for the web?
Sep 02 2015
parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Wednesday, 2 September 2015 at 20:50:03 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 2 September 2015 at 19:05:32 UTC, Walter Bright 
 wrote:
 On 8/29/2015 9:16 PM, Ola Fosheim Grostad wrote:
 Here is a good list:
 [...]
 5. Performance.
Ironically, you guys complained in this thread when that gets worked on.
I agree that having a native D backend is a good thing. In fact, I'd very much like to see WebAssembly/asm.js codegen built around a backend that create compact builds since download size is an issue. Which is a different kind of "performance". I just don't see how I could use the current backend to achieve it. Maybe with your experience you could at some point in the future lay the foundation for a new a free backend, that is more minimalistic than LLVM, but that also could be used for the web?
Adam Ruppe already wrote a JavaScript backend - it's not maintained, I guess because there wasn't so much interest. Maybe not the fastest, but it's already been done. Again, JS, not asm.js, but I don't see why a man of your evident ability should find this an infeasible project were you to be serious about completing it.
Sep 02 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 3 September 2015 at 02:02:06 UTC, Laeeth Isharc 
wrote:
 Adam Ruppe already wrote a javascript backend - it's not 
 maintained as I guess not so much interest.
It is several years old now; the compiler has been completely refactored since then, so it wouldn't work anyway. Besides, I wasn't particularly happy with my approach and found it pretty useless in actual practice. (In fact, I think ALL JS converters are useless in practice right now, not adding enough benefit over plain javascript to warrant the extra pains the converter brings. But maybe browsers will change that by supporting better debugging features, etc., to bridge the gap.) I think ldc can output javascript if you compile it yourself using an LLVM thing, too. BTW, those refactorings in the compiler should make it quite a bit easier to do than it was then; if we were to start over, we could probably have it more-or-less working in like a full-time work week. But getting all the semantics right, and then any runtime library, etc., would be tricky. (One thing I wanted with mine was to generate compact code, so I tried to match JS semantics fairly closely rather than D, and offered bindings to native JS functions to prefer over using Phobos (though a decent chunk of Phobos actually did work, notably most of std.algorithm). Array.sort is like a few dozen bytes. std.algorithm.sort is hundreds of kilobytes of generated JS code.) But still, I'm meh on the practical usefulness of such things. I guess if you target a canvas and run your code in it that makes more sense, but my preferred style is a progressive enhancement webpage where you want to know the browser platform and work with it rather than around it.
Sep 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. I guess if you
 target a canvas and run your code in it that makes more sense but my preferred
 style is a progressive enhancement webpage where you want to know the browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another language. You can't do anything more than JS can do, and you're likely to be doing less.
Sep 02 2015
next sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 03:57:39 UTC, Walter Bright 
wrote:
 On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. 
 I guess if you
 target a canvas and run your code in it that makes more sense 
 but my preferred
 style is a progressive enhancement webpage where you want to 
 know the browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another language. You can't do anything more than JS can do, and you're likely to be doing less.
That is silly. asm.js is a very restricted, typed subset with strict rules that allows generation of pure assembly in a contiguous memory sandbox. It is a completely different setup. If you move outside those rules, the compiler gives up and switches to a regular JIT with fewer restrictions. WebAssembly aims to go beyond what you can do otherwise (like multithreading).
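To make the "restricted typed subset" concrete, here is a minimal illustrative sketch of asm.js-style code (my own toy example, not from any D compiler): every value is explicitly coerced to int32 with `|0`, so a conforming engine can validate and compile the module ahead of time instead of sending it through the regular JIT.

```javascript
// Minimal asm.js-style module: "use asm" plus mandatory type coercions.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter typed as int32
    b = b | 0;
    return (a + b) | 0; // result coerced back to int32
  }
  return { add: add };
}

// Crucially, the module is still plain JavaScript: engines without asm.js
// support just run it as ordinary JS with the same results.
var m = AsmModule(this, {}, new ArrayBuffer(0x10000));
console.log(m.add(2, 3)); // 5
```

If any statement breaks the typing rules, validation fails and the engine silently falls back to normal JIT execution, which is the "gives up" behavior described above.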
Sep 02 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 3 September 2015 at 04:30:04 UTC, Ola Fosheim 
Grostad wrote:
 On Thursday, 3 September 2015 at 03:57:39 UTC, Walter Bright 
 wrote:
 On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. 
 I guess if you
 target a canvas and run your code in it that makes more sense 
 but my preferred
 style is a progressive enhancement webpage where you want to 
 know the browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another language. You can't do anything more than JS can do, and you're likely to be doing less.
That is silly. asm.js is a very restricted typed subset with strict rules that allows generation of pure assembly in a contiguous memory sandbox. It is a completely different setup. If you move outside those rules the compiler give up and switch to regular JIT with less restrictions. WebAssembly aims to go beyond what you can do otherwise (like multithreading).
It is twice as slow as native. That's far from allowing generation of pure assembly.
Sep 02 2015
next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 3 Sep 2015 8:20 am, "deadalnix via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Thursday, 3 September 2015 at 04:30:04 UTC, Ola Fosheim Grostad wrote:
 On Thursday, 3 September 2015 at 03:57:39 UTC, Walter Bright wrote:
 On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. I guess
if you
 target a canvas and run your code in it that makes more sense but my
preferred
 style is a progressive enhancement webpage where you want to know the
browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another
language. You can't do anything more than JS can do, and you're likely to be doing less.
 That is silly. asm.js is a very restricted typed subset with strict
rules that allows generation of pure assembly in a contiguous memory sandbox. It is a completely different setup. If you move outside those rules the compiler give up and switch to regular JIT with less restrictions. WebAssembly aims to go beyond what you can do otherwise (like multithreading).
 It is twice as slow as native. That's far from allowing generation of
pure assembly. I have far more faith in gccjit. But it's more like the orange to a conventional jit's apples.
Sep 02 2015
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix wrote:
 It is twice as slow as native. That's far from allowing 
 generation of pure assembly.
It is translatable to pure assembly; addressing is modulo heap size. Performance is a different issue, since it does not provide SIMD yet.
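The "addressing is modulo heap size" point can be sketched as follows. This is an assumption about how a translator could enforce the sandbox (all memory is one buffer, and every access is masked to stay inside it), not any particular engine's implementation:

```javascript
// One contiguous heap; addresses wrap modulo the heap size.
var HEAP_SIZE = 0x10000;                 // must be a power of two for masking
var buffer = new ArrayBuffer(HEAP_SIZE);
var HEAPU8 = new Uint8Array(buffer);     // byte view over the heap
var MASK = HEAP_SIZE - 1;

// Every load/store masks the pointer, so no access can escape the sandbox.
function store8(ptr, value) { HEAPU8[ptr & MASK] = value; }
function load8(ptr) { return HEAPU8[ptr & MASK]; }

store8(42, 7);
console.log(load8(42));             // 7
console.log(load8(42 + HEAP_SIZE)); // 7 again: the address wrapped around
```

Because the mask is a single AND per access, this translates to cheap straight-line machine code, which is why the sandbox need not cost much by itself.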
Sep 03 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 3 September 2015 at 09:56:55 UTC, Ola Fosheim 
Grøstad wrote:
 On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix wrote:
 It is twice as slow as native. That's far from allowing 
 generation of pure assembly.
It is translatable to pure assembly, addressing is modulo heap size. Performance is a different issue since it does not provide SIMD yet.
SIMD is not even remotely close to explaining the perf difference.
Sep 03 2015
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 10:04:58 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 09:56:55 UTC, Ola Fosheim 
 Grøstad wrote:
 On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix wrote:
 It is twice as slow as native. That's far from allowing 
 generation of pure assembly.
It is translatable to pure assembly, addressing is modulo heap size. Performance is a different issue since it does not provide SIMD yet.
SIMD is not even remotely close to explaining the perf difference.
What browser? Only FF supports it. Chrome just JITs it, IIRC.
Sep 03 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 3 September 2015 at 21:08:51 UTC, Ola Fosheim 
Grostad wrote:
 On Thursday, 3 September 2015 at 10:04:58 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 09:56:55 UTC, Ola Fosheim 
 Grøstad wrote:
 On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix 
 wrote:
 It is twice as slow as native. That's far from allowing 
 generation of pure assembly.
It is translatable to pure assembly, addressing is modulo heap size. Performance is a different issue since it does not provide SIMD yet.
SIMD is not even remotely close to explaining the perf difference.
What browser? Only FF supports it. Chrome just JIT it IIRC.
asm.js typically runs at half the speed of natively compiled code. pNaCl runs about 20% slower, typically. The gap is way too big for vectorization to be a reasonable explanation. In fact, a large body of code just does not vectorize at all. You seem to be fixated on that vectorization thing, when it is not even remotely close to the problem at hand.
Sep 03 2015
next sibling parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 22:53:01 UTC, deadalnix wrote:
 asm.js typically runs half the speed of natively compiled code. 
 pNaCl run about 20% slower typically.
Browser version, benchmark, reference? Without information loss there is no inherent overhead outside sandboxing, if you avoid the sandbox boundaries. Other people report 1-1.5x.
Sep 03 2015
prev sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Thursday, 3 September 2015 at 22:53:01 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 21:08:51 UTC, Ola Fosheim 
 Grostad wrote:
 On Thursday, 3 September 2015 at 10:04:58 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 09:56:55 UTC, Ola Fosheim 
 Grøstad wrote:
 On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix 
 wrote:
 [...]
It is translatable to pure assembly, addressing is modulo heap size. Performance is a different issue since it does not provide SIMD yet.
SIMD is not even remotely close to explaining the perf difference.
What browser? Only FF supports it. Chrome just JIT it IIRC.
asm.js typically runs half the speed of natively compiled code. pNaCl run about 20% slower typically. The gap is way to big for vectorization to be a reasonable explanation. In fact a large body of code just do not vectorize at all. You seems to be fixated on that vectorization thing, when it is not even remotely close to the problem at hand.
All of this could have been avoided by all browser vendors agreeing to implement pNaCl. Maybe we'll be lucky and Firefox will fade into obscurity with the way they've been handling things lately.
Sep 04 2015
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 Maybe we'll be lucky and Firefox will fade into obscurity with 
 the way they've been handling things lately.
No. TurboFan is in Chrome with asm.js support.
Sep 04 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Friday, 4 September 2015 at 14:34:52 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 Maybe we'll be lucky and Firefox will fade into obscurity with 
 the way they've been handling things lately.
No. TurboFan is in Chrome with asm.js support.
I'd rather not advocate the adoption of inferior technology.
Sep 04 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 14:40:32 UTC, rsw0x wrote:
 On Friday, 4 September 2015 at 14:34:52 UTC, Ola Fosheim 
 Grøstad wrote:
 On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 Maybe we'll be lucky and Firefox will fade into obscurity 
 with the way they've been handling things lately.
No. TurboFan is in Chrome with asm.js support.
I'd rather not advocate the adoption of inferior technology.
It has already been adopted by Microsoft, Google and Mozilla...
Sep 04 2015
next sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Friday, 4 September 2015 at 14:43:43 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 4 September 2015 at 14:40:32 UTC, rsw0x wrote:
 On Friday, 4 September 2015 at 14:34:52 UTC, Ola Fosheim 
 Grøstad wrote:
 On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 Maybe we'll be lucky and Firefox will fade into obscurity 
 with the way they've been handling things lately.
No. TurboFan is in Chrome with asm.js support.
I'd rather not advocate the adoption of inferior technology.
It has already been adopted by Microsoft, Google and Mozilla...
Because it is the path of least resistance. It's still a poor technology that is just treating the symptoms.
Sep 04 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 14:45:39 UTC, rsw0x wrote:
 Because it has the path of least resistance. It's still a poor 
 technology that is just treating the symptoms.
pnacl/pepper is not good either; they are both poor technologies. But vendors are moving in the same direction, which is important, and compilers are improving each release. What matters most is getting something that is 3x+ faster than javascript when you need it, cross browser. Fortunately, Apple seems to take it seriously too, which is important; iOS Safari is a critical platform.
Sep 04 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 4 September 2015 at 14:56:48 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 4 September 2015 at 14:45:39 UTC, rsw0x wrote:
 Because it has the path of least resistance. It's still a poor 
 technology that is just treating the symptoms.
pnacl/pepper is not good either, they are both poor technologies.
Statements do not make arguments. pNaCl is portable, takes only a 20% hit compared to pure native, and is compact to send over the wire.
Sep 04 2015
next sibling parent "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 17:05:41 UTC, deadalnix wrote:
 Statement do not makes arguements. pNaCl is portable, take only 
 a 20% hit compared to pure native and is compact to send 
 through the wire.
You think pnacl is compact and compresses well? That's an unusual position. Both asm.js and pnacl fail to be ideal for various reasons. One is that you lose too much information going from a high level language to enable full target optimization. The "p" for portable is questionable. A BIG advantage with asm.js is that you can generate and compile it from code running in the browser, so you can choose your own transport format, run JITs in the browser, etc. You could essentially create a (simple) D development environment in the browser. Anyway, what we think does not matter. What matters is what is supported by the least common denominator, which for many developers happens to be browsers on weak mobile ARM devices. Apple has no interest in undermining their Apps market, so… who knows where this will go. Let's hope they don't sabotage it.
Sep 04 2015
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
Actually, in the point cloud on the web demo I've linked before, which is
EXTREMELY compute intensive code, we experience a barely measurable loss in
performance between pnacl and native code. 20% loss would be huge, but we
see nothing like that; probably within 5% is closer to our experience.
On 5 Sep 2015 3:11 am, "deadalnix via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:

 On Friday, 4 September 2015 at 14:56:48 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 4 September 2015 at 14:45:39 UTC, rsw0x wrote:

 Because it has the path of least resistance. It's still a poor
 technology that is just treating the symptoms.
pnacl/pepper is not good either, they are both poor technologies.
Statement do not makes arguements. pNaCl is portable, take only a 20% hit compared to pure native and is compact to send through the wire.
Sep 04 2015
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 4 September 2015 at 14:43:43 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 4 September 2015 at 14:40:32 UTC, rsw0x wrote:
 On Friday, 4 September 2015 at 14:34:52 UTC, Ola Fosheim 
 Grøstad wrote:
 On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 Maybe we'll be lucky and Firefox will fade into obscurity 
 with the way they've been handling things lately.
No. TurboFan is in Chrome with asm.js support.
I'd rather not advocate the adoption of inferior technology.
It has already been adopted by Microsoft, Google and Mozilla...
Doesn't mean it is not inferior. In fact, it is bad enough on its own that there is the whole WebAssembly thing going on.
Sep 04 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 5 Sep 2015 12:32 am, "rsw0x via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Thursday, 3 September 2015 at 22:53:01 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 21:08:51 UTC, Ola Fosheim Grostad wrote:
 On Thursday, 3 September 2015 at 10:04:58 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 09:56:55 UTC, Ola Fosheim Grøstad wrote:
 On Thursday, 3 September 2015 at 06:18:54 UTC, deadalnix wrote:
 [...]
It is translatable to pure assembly, addressing is modulo heap size.
Performance is a different issue since it does not provide SIMD yet.
 SIMD is not even remotely close to explaining the perf difference.
What browser? Only FF supports it. Chrome just JIT it IIRC.
asm.js typically runs half the speed of natively compiled code. pNaCl
run about 20% slower typically.
 The gap is way to big for vectorization to be a reasonable explanation.
In fact a large body of code just do not vectorize at all.
 You seems to be fixated on that vectorization thing, when it is not even
remotely close to the problem at hand.
 All of this could have been avoided by all browser vendors agreeing to
implement pNaCl.
 Maybe we'll be lucky and Firefox will fade into obscurity with the way
they've been handling things lately. What I don't get is: Firefox and IE support plugins... Why isn't there a pnacl plugin for other browsers? Surely it could be added with the existing plugin interfaces?
Sep 04 2015
next sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Friday, 4 September 2015 at 14:53:06 UTC, Manu wrote:
 On 5 Sep 2015 12:32 am, "rsw0x via Digitalmars-d" < 
 digitalmars-d puremagic.com> wrote:
 [...]
wrote:
 [...]
Performance is a different issue since it does not provide SIMD yet.
 [...]
run about 20% slower typically.
 [...]
In fact a large body of code just do not vectorize at all.
 [...]
remotely close to the problem at hand.
 [...]
implement pNaCl.
 [...]
they've been handling things lately. What I don't get is, Firefox and ie support plugins... Why isn't there a pnacl plugin for other browsers? Surely it could be added with the existing plugin interfaces?
Mozilla flat out stated they have no intention of supporting pNaCl. I'm sure a third party could make a plugin to support it.
Sep 04 2015
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 14:53:06 UTC, Manu wrote:
 What I don't get is, Firefox and ie support plugins... Why 
 isn't there a pnacl plugin for other browsers? Surely it could 
 be added with the existing plugin interfaces?
Actually, browsers are deprecating NPAPI plugins. Flash is so dead…
Sep 04 2015
parent reply "Luís Marques" <luis luismarques.eu> writes:
On Friday, 4 September 2015 at 14:59:12 UTC, Ola Fosheim Grøstad 
wrote:
 Actually, browsers are deprecating NPAPI plugins. Flash is so 
 dead…
Could, in principle, Flash be supported through an extension, instead of a media / NPAPI plugin?
Sep 04 2015
next sibling parent "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 15:45:35 UTC, Luís Marques wrote:
 On Friday, 4 September 2015 at 14:59:12 UTC, Ola Fosheim 
 Grøstad wrote:
 Actually, browsers are deprecating NPAPI plugins. Flash is so 
 dead…
Could, in principle, Flash be supported through an extension, instead of a media / NPAPI plugin?
I don't think they will kill Flash outright even after removing plugins. It was more wishful thinking on my part. Adobe will emulate everything Flash does within HTML5 before it is killed, don't you think?
Sep 04 2015
prev sibling parent "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 15:45:35 UTC, Luís Marques wrote:
 On Friday, 4 September 2015 at 14:59:12 UTC, Ola Fosheim 
 Grøstad wrote:
 Actually, browsers are deprecating NPAPI plugins. Flash is so 
 dead…
Could, in principle, Flash be supported through an extension, instead of a media / NPAPI plugin?
Btw, come across this flash emulator, in case you are interested: https://github.com/mozilla/shumway
Sep 06 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/4/2015 7:52 AM, Manu via Digitalmars-d wrote:
 [...]
Sadly, your newsgroup software is back to doing double posts - once in plaintext, once in html.
Sep 04 2015
next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 5 September 2015 at 14:14, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/4/2015 7:52 AM, Manu via Digitalmars-d wrote:
 [...]
Sadly, your newsgroup software is back to doing double posts - once in plaintext, once in html.
My software in this case was the mail client on Android. I'm astonished this isn't a rampant problem; mail clients seem to do this by default. I have not configured it in any special way, not changed a single setting... this is the default behavior of the world's most popular mail client (Gmail). How can I possibly be the only offender here?
Sep 04 2015
prev sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 5 Sep 2015 6:31 am, "Manu via Digitalmars-d" <digitalmars-d puremagic.com>
wrote:
 On 5 September 2015 at 14:14, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 9/4/2015 7:52 AM, Manu via Digitalmars-d wrote:
 [...]
Sadly, your newsgroup software is back to doing double posts - once in plaintext, once in html.
My software in this case was the mail client on Android. I'm astonished this isn't a rampant problem; mail clients seem to do this by default. I have not configured it in any special way, not changed a single setting... this is default settings for the worlds most popular mail client (gmail). How can I possibly be the only offender here?
That is a good question (this is a reply from Android Gmail).
Sep 05 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/5/2015 1:00 AM, Iain Buclaw via Digitalmars-d wrote:
 On 5 Sep 2015 6:31 am, "Manu via Digitalmars-d" <digitalmars-d puremagic.com
 <mailto:digitalmars-d puremagic.com>> wrote:
  >
  > On 5 September 2015 at 14:14, Walter Bright via Digitalmars-d
  > <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:
  > > On 9/4/2015 7:52 AM, Manu via Digitalmars-d wrote:
  > >>
  > >> [...]
  > >
  > >
  > > Sadly, your newsgroup software is back to doing double posts - once in
  > > plaintext, once in html.
  >
  > My software in this case was the mail client on Android.
  > I'm astonished this isn't a rampant problem; mail clients seem to do
  > this by default. I have not configured it in any special way, not
  > changed a single setting... this is default settings for the worlds
  > most popular mail client (gmail).
  > How can I possibly be the only offender here?

 That is a good question (this is a reply from Android Gmail).
And your post did it too. If you're using the Thunderbird news reader, typing Ctrl-U will show the full source of the message.
Sep 05 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 5 September 2015 at 08:15:06 UTC, Walter Bright 
wrote:
 And your post did it too.

 If you're using the Thunderbird news reader, typing Cntl-U will 
 show the full source of the message.
This is perfectly normal for emails and such. They are multipart/alternative MIME messages, which pack different versions of the same message together; your client picks its preferred one to show you. It is kinda useless because the html version adds zero value, but the text version is still there too, so your client should just ignore it.
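To show the duplication being described, here is an illustrative sketch of what a multipart/alternative payload looks like on the wire (a hand-built example message, not output from any particular mail client):

```javascript
// A multipart/alternative body carries the SAME message twice: once as
// text/plain and once as text/html, separated by a boundary marker.
var text = "Just one line of reply.";
var html = '<div dir="ltr">Just one line of reply.</div>';
var boundary = "----boundary42"; // illustrative boundary string

var message =
  'Content-Type: multipart/alternative; boundary="' + boundary + '"\r\n' +
  "\r\n" +
  "--" + boundary + "\r\n" +
  "Content-Type: text/plain; charset=UTF-8\r\n\r\n" +
  text + "\r\n" +
  "--" + boundary + "\r\n" +
  "Content-Type: text/html; charset=UTF-8\r\n\r\n" +
  html + "\r\n" +
  "--" + boundary + "--\r\n"; // trailing "--" closes the multipart body

// Both alternatives are present, which is why a one-line reply can more
// than double in size before it ever reaches the newsgroup.
console.log(message.includes("text/plain") && message.includes("text/html"));
```

A client renders whichever part it prefers; the HTML part usually restates the plain text with markup wrapped around it, so for a plain-text newsgroup it is pure overhead.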
Sep 05 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/5/2015 5:54 AM, Adam D. Ruppe wrote:
 On Saturday, 5 September 2015 at 08:15:06 UTC, Walter Bright wrote:
 And your post did it too.

 If you're using the Thunderbird news reader, typing Cntl-U will show the full
 source of the message.
This is perfectly normal for emails and such. They are multipart/alternative MIME messages which pack different versions of the same message together and your client picks its preferred one to show you. It is kinda useless because the html version adds zero value, but the text version is still there to so your client should just ignore it.
I know, and my client does, but given the size of the n.g. message database, doubling its size for no added value makes it slower.
Sep 05 2015
next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 6 September 2015 at 07:20, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/5/2015 5:54 AM, Adam D. Ruppe wrote:
 On Saturday, 5 September 2015 at 08:15:06 UTC, Walter Bright wrote:
 And your post did it too.

 If you're using the Thunderbird news reader, typing Cntl-U will show the
 full
 source of the message.
This is perfectly normal for emails and such. They are multipart/alternative MIME messages which pack different versions of the same message together and your client picks its preferred one to show you. It is kinda useless because the html version adds zero value, but the text version is still there to so your client should just ignore it.
I know, and my client does, but given the size of the n.g. message database, doubling its size for no added value makes it slower.
Perhaps the NG server should make an effort to trim the unwanted message content then? I'm still astonished I'm the only one that uses Gmail... this should be a rampant problem.
Sep 05 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/5/2015 4:32 PM, Manu via Digitalmars-d wrote:
 Perhaps the NG server should make an effort to trim the unwanted message
 content then?
I'd rather work with NNTP as it is.
 I'm still astonished I'm the only one that uses Gmail... this should
 be a rampant problem.
It probably is a rampant problem. I notice it with you because Thunderbird gives a line count for a message, and yours are usually in the hundreds of lines while others are like 10 to 20. Your last one was 109 lines long, although you only wrote 1 original line of text. It also isn't helpful to include a quote of the whole previous message - just what you are specifically replying to.
Sep 05 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-06 02:54, Walter Bright wrote:

 It probably is a rampant problem. I notice it with you because
 Thunderbird gives a line count for a message, and yours are usually in
 the hundreds of lines while others are like 10 to 20.
Usually Thunderbird highlights the quoted part in blue and turns the '>' characters into a solid line. I've noticed that for some messages that doesn't happen, recently for Iain's messages, but not for Manu's. -- /Jacob Carlborg
Sep 06 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 6 September 2015 at 18:57, Jacob Carlborg via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2015-09-06 02:54, Walter Bright wrote:

 It probably is a rampant problem. I notice it with you because
 Thunderbird gives a line count for a message, and yours are usually in
 the hundreds of lines while others are like 10 to 20.
Usually Thunderbird highlights the quoted part in blue and makes the '<' characters in to a solid line. I've noticed that for some messages that don't happen, recently for Iain's messages, but not for Manu's.
It didn't happen for me because I changed my gmail settings, after Walter requested some time back, to only include plain text. My NG experience is much less enjoyable as a result of the change; I prefer the blue quote line, but now I just have a sea of '>' characters after turning it off. I preferred it before I changed my settings, but apparently I was invisibly spamming everyone.
Sep 06 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/6/2015 4:39 PM, Manu via Digitalmars-d wrote:
 It didn't happen for me because I changed my gmail settings after
 Walter requested some time back to only include plain text. My NG
 experience is much less enjoyable as a result of the change; I prefer
 the blue quote line, but now I just have a sea of '>' characters after
 turning it off. I preferred it before I changed my settings, but
 apparently I am invisible spamming.
It is doing the right thing now, yay! :-) BTW, Thunderbird's n.g. reader will transform the > into the blue line.
Sep 06 2015
prev sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 5 Sep 2015 11:25 pm, "Walter Bright via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 9/5/2015 5:54 AM, Adam D. Ruppe wrote:
 On Saturday, 5 September 2015 at 08:15:06 UTC, Walter Bright wrote:
 And your post did it too.

 If you're using the Thunderbird news reader, typing Cntl-U will show
the full
 source of the message.
 This is perfectly normal for emails and such. They are multipart/alternative
 MIME messages which pack different versions of the same message together and
 your client picks its preferred one to show you.

 It is kinda useless because the html version adds zero value, but the text
 version is still there to so your client should just ignore it.
 I know, and my client does, but given the size of the n.g. message
 database, doubling its size for no added value makes it slower.

There's no way to change the Gmail client behaviour. And I'm assuming that it isn't a recent feature either.
Sep 05 2015
parent Tobias Müller <troplin bluewin.ch> writes:
Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 On 5 Sep 2015 11:25 pm, "Walter Bright via Digitalmars-d" <
 digitalmars-d puremagic.com> wrote:
 
 On 9/5/2015 5:54 AM, Adam D. Ruppe wrote:
 
 On Saturday, 5 September 2015 at 08:15:06 UTC, Walter Bright wrote:
 
 And your post did it too.
 
 If you're using the Thunderbird news reader, typing Cntl-U will show
the full
 source of the message.
 This is perfectly normal for emails and such. They are multipart/alternative
 MIME messages which pack different versions of the same message together and
 your client picks its preferred one to show you.

 It is kinda useless because the html version adds zero value, but the text
 version is still there to so your client should just ignore it.
 I know, and my client does, but given the size of the n.g. message
 database, doubling its size for no added value makes it slower.

 There's no way to change the Gmail client behaviour. And I'm assuming that
 it isn't a recent feature either.
Considering the messy quoting in your posts I'd actually prefer HTML messages.
Oct 20 2015
prev sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 4 September 2015 at 14:26:49 UTC, rsw0x wrote:
 All of this could have been avoided by all browser vendors 
 agreeing to implement pNaCl.
 Maybe we'll be lucky and Firefox will fade into obscurity with 
 the way they've been handling things lately.
I thought even the PNaCl people were working on wasm.
Sep 04 2015
prev sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Thursday, 3 September 2015 at 04:30:04 UTC, Ola Fosheim 
Grostad wrote:
"I don't have enough time to figure out all the ins-and-outs of 
the current compiler."

"Unless you sell DMD, how about providing a definition of 
"customer"?
If you don't pay attention to evaluations, then surely the 
competition
will steal your thunder and you'll end up as Cobol; on a long tail
in maintenance mode."

Sometimes it's really worth putting energy into ensuring crisp
definitions.  This isn't one of those cases.  Your own language is
exploiting polysemy in an unstraightforward manner - mixing up
different meanings to achieve a rhetorical effect.  Quite clearly
Walter+Andrei listen to what people say, but it doesn't thereby
follow that they should listen to people who think D should go
in a completely different direction based on a very theoretical
view of things and who have no evident experience in writing
a compiler used on a large scale, or in developing a language
community.

It's Business 101 that you shouldn't listen to what most people
tell you, because whilst often well-meaning it's based on an
understanding of things different from the practical situation one
faces and that doesn't always understand what one is trying to
achieve.  It's hard to do that whilst not falling into the other
extreme of not attending to the smallest signs from the right
people when you should, but difficulty is in the nature of such
endeavours.

"Oh, I would love for D to follow the trajectory of C. C 
development can follow a near-waterfall development model:

1. specification-ratification-cycle
2. specification based gamma-beta-alpha production implementation
3. release
4. non-semantic improvements

D is nowhere near C's conservative fully spec'ed 
ISO-standard-based process."

Thank the Lord!  I really don't see how anyone can seriously 
believe
that this is an appropriate model for D for the foreseeable 
future.
The essential reason for why it is an attractive language is that 
it
was based on the genius (I mean this in the etymologically-rooted 
sense)
of one, or a few minds and thereby is greatly superior to anything
designed by a committee.  Maybe at some stage this will be an 
appropriate
process to establish, but I do note that even some of those 
involved
in such processes would readily admit that waterfalls are evil, if
necessary at that stage.

 Andrei talked about documentation and presentation a little 
 while
 back, and we're now in a much better state.  Allocators soon 
 here.
 People have got D to run on embedded systems with low 
 resources.

"What people can do isn't really all that interesting. What is interesting is what you can do with little effort and better than the alternatives."

This is why you come across as a bit academic and not always constructive. You suggested things weren't developing, and I pointed out that they are, and gave a few concrete examples of how. You then said that the first attempt wasn't perfect. But you know, as well as I do, that it's in the nature of things that beginnings never are. It's really tough to be the first to do something (another reason I think you could be a little more respectful towards Walter), but every person that follows makes it somehow easier. There's a slightly mysterious aspect to this, but nonetheless it's true.

D is running on embedded systems, and it sounds like it was a pain because of the runtime, but not because of any horrible flaw in the language that means it cannot be considered a systems language if you want it to be. I'd be really surprised if it doesn't become easier from here with time. I like the old-fashioned English virtue of being willing in a discussion to give credit where credit is due.
I'd honestly bet that a little more effort to communicate the 
practical commercial benefits of D would make much more of a 
difference than this abstract stuff.  But who am I to say.

"I think you underestimate the expectations people with a major in compsci or a significant amount of development experience have. They care about the actual qualities, not the presentation. Those are the CTOs you need to convince, not the CEOs."

I shared a flat for some time with a pretty smart friend who studied computer science at Trinity, Cambridge (I didn't study computing). He wrote this - it wasn't a success businesswise, but that's not for technical reasons, and timing had a lot to do with it:

https://github.com/pythonanywhere/dirigible-spreadsheet

And I have other friends with similar backgrounds, and I have hired and worked with developers, and I am not sure that in the enterprise world a computer science credential has any more incantatory standing than any other degree - and that's probably as it should be.

My point wasn't that D needs to fool gullible people into adopting it, but rather the opposite. I always thought smart people should just be able to see what's obvious, but one sometimes has to learn the hard way that ideas and products don't sell themselves of their own accord. You have to make it as easy as you can for your potential customers to appreciate what the value is for them in adopting what it is you would like them to explore. Part of that is that people naturally think differently, and also that what's obvious from the inside simply isn't from the outside. CTOs are not gifted with some magic ability to see through appearances to the Platonic essences of things. They are human and have a tough job. So one needs to make an effort if they are to easily appreciate the value.

This is a point John Colvin made with regards to scientists in his dconf talk. And, on balance, I rather think that scientists are a little less concerned about appearances than enterprise people. And if it's true for them - which it is - it's all the more true for the enterprise.

"I am not calling D a toy language"

"As it stands today both Rust and D falls into the category toy-languages."

Make up your mind. Or better, think for a bit about whether a different approach would be more effective. Of course you know that you're using a pejorative term (depersonalizing it as if it is something objective doesn't change that), and asserting it to apply without any kind of argument for it. A toy is something that is suitable for entertaining children, but not something to be used by a serious craftsman or businessman. Since serious people have built their businesses around it (Sociomantic alone being $200mm EV), you must be fully aware that it can hardly be a toy.

I gather you think market share is critical - it's reasonable for you to think that, but not to suggest that it's the only reasonable way of looking at the world. Myself, I side with Thiel, who at least must be considered a serious thinker, and to have some insight regarding the diffusion of technologies and the creation of new enterprises.
 When you get fracturing related to code base quality, licensing
 or language semantics, you have development strategy issues that
 cause fracturing (you also have private forks like ketmar's).
You're really reaching now. In a community that draws creative and curious people something would be wrong if people didn't fork the compiler and experiment. Now if there's mass adoption of different forks and warring tribes, perhaps that's different. (War is generative, too, but the costs are too high for this to be sustained for long). But this simply isn't the case, and it strikes me that you're manufacturing words to suit how you feel, rather than basing how you feel on things as they are.
 If you scaled up C++ to D based on resources and usage then C++
 should have 100s of free compilers. You have 2 free C++14 
 compilers.
If you scaled up the institutions and mores of Norway to the size of and conditions applying to the US you would have a very odd country indeed. That gedankenexperiment may indeed help stimulate the imagination to understand what one has before one. But the conditions of D are different from those of C++, since the two non-dmd compilers were written, as you know, to benefit from the code generation capabilities of other projects. One may legitimately have the view that the projects should be consolidated, but one would have to actually make an argument for it in the Russian style of close reasoning. As someone said, a statement isn't an argument, and neither is an analogy to something different. Also, one's arguments would need to cohere in this domain, rather than arguing like a lawyer: "1. my client didn't take, let alone steal the vase; 2. it was broken already when he took it; 3. he returned it in good shape."

"End user back pressure helps. If it does not help, then the development process is FUBAR."

My sincere view is that if you adopted a different approach you would be more effective in your influence. And there might be broader beneficial effects too.

"However it does help, Walter is an intelligent man, that enjoys defending his position"

Patient as he is, I have the impression that the enjoyment is not saliently on his side of things!
Sep 04 2015
parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Friday, 4 September 2015 at 18:27:14 UTC, Laeeth Isharc wrote:

 Sometimes it's really worth putting energy into ensuring crisp
 definitions.  This isn't one of those cases.  Your own language 
 is
 exploiting polysemy in an unstraightforward manner - mixing up
 different meanings to achieve a rhetorical effect.  Quite 
 clearly
 Walter+Andrei listen to what people say, but it doesn't thereby
 follow that they should listen to people who think D should go
 in a completely different direction based on a very theoretical
 view of things and who have no evident experience in writing
 a compiler used on a large scale, or in developing a language
 community.
Define your target before designing, that is the essence here. This is critical to gain wide adoption. D has been defined to be a cross-platform system-level programming language. That is what D should be evaluated as. "customer" is a nonsensical term that essentially means what?
 It's Business 101 that you shouldn't listen to what most people
 tell you, because whilst often well-meaning it's based on an
 understanding of things different from the practical situation 
 one
 faces and that doesn't always understand what one is trying to
 achieve.
Defining who your target is does communicate what you are trying to achieve! This is critical if you want to evaluate and drive the systems development process forward. You cannot fix the process if you don't know how you measure progress.
 Thank the Lord!  I really don't see how anyone can seriously 
 believe
 that this is an appropriate model for D for the foreseeable 
 future.
You compared D to C, not me. I know many appropriate system development models...
 process to establish, but I do note that even some of those 
 involved
 in such processes would readily admit that waterfalls are evil, 
 if
 necessary at that stage.
You are taking this too far; all non-chaotic models have a design-implement-evaluate pattern in them, the difference is in how the iterations go. A language specification is a critical work document that exposes qualities of the language. Bypassing that affects quality. At this stage D needs a specification.
 This is why you come across as a bit academic and not always 
 constructive.
 You suggested things weren't developing,
I said that adding performance features during refactoring communicates a lack of direction. Macro-level refactoring is needed and challenging, and takes leadership.
 nonetheless it's true.  D is running on embedded systems, and it
 sounds like it was a pain because of the runtime but not 
 because of
 any horrible flaw in the language that means it cannot be 
 considered
 a systems language if you want it to be.
It does not matter if it runs on embedded or not, unless your "customers" are defined as that market. Nobody serious about development cares if D runs on embedded if there are cheaper and more suitable alternatives. It only matters if you are the best alternative. Engineers don't look for the next-best solution. They look for the best. So to gain ground you need to define your key goals with your design.
 obvious from the inside simply isn't from the outside.  CTOs are
 not gifted with some magic ability to see through appearances to
 the Platonic essences of things.  They are human and have a 
 tough
 job.  So one needs to make an effort if they are to easily 
 appreciate
 the value.
There are so few system-level programming languages that the landscape is easy to grasp. People are waiting for mature solutions that are better than what they have... Marketing does not affect this. Focus and improved process does.
 "I am not calling D a toy language"
 "As it stands today both Rust and D falls into the category 
 toy-languages."

 Make up your mind.
I said "if D is a toy language". That is not calling it anything. But it is, like Rust, a toy language by academic use of the phrase, which is not a pejorative term but an affectionate term in my book. The pejorative term is to call a language a "hack". C++ is a hack. String mixins are a hack. Etc.
 But this simply isn't the case, and it strikes me that you're
 manufacturing words to suit how you feel, rather than basing how
 you feel on things as they are.
No. There are plenty of visible artifacts from the process. No need to manufacture anything.
 If you scaled up C++ to D based on resources and usage then C++
 should have 100s of free compilers. You have 2 free C++14 
 compilers.
If you scaled up the institutions and mores of Norway to the size of and conditions applying to the US you would have a very odd country indeed.
There is only one community driven C++14 compiler, g++. The other ones are primarily commercial.

 One may legitimately have the view
 that the projects should be consolidated, but one would have to
 actually make an argument for it in the Russian style of
 close-reasoning.

You don't have to consolidate anything. You need to look at the causes that have led to fragmentation in the past, so you can improve the process today. Then you need to see if you are using resources effectively or if you accumulate costs you can remove.
 My sincere view is that if you adopted a different approach you
 would be more effective in your influence.  And there might be
 broader beneficial effects too.
And it is wrong: you cannot have process improvement without leadership support, consensus, or, more likely, a high level of fuss (end-user pressure).
Sep 04 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-03 05:57, Walter Bright wrote:

 I don't see a whole lot of point to generating JS from another language.
 You can't do anything more than JS can do, and you're likely to be doing
 less.
There's a lot of stuff other languages can do that JS can't. For example, classes, which a lot of developers prefer to use in favor of the weird object system in JS. Although now, with ECMAScript 6, classes are supported. But it will still be a long time before enough web browsers support ES6. Therefore it can be useful to have an ES6 compiler that translates to ES5- or ES3-compatible JS. -- /Jacob Carlborg
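As a sketch of what such a translation does (a hypothetical `Point` class, not an example from the thread), here is an ES6 class and the ES5-compatible output a transpiler might emit:

```javascript
// ES6 input: what the developer writes.
//
//   class Point {
//     constructor(x, y) { this.x = x; this.y = y; }
//     norm() { return Math.sqrt(this.x * this.x + this.y * this.y); }
//   }

// ES5 output a transpiler might emit: a constructor function
// plus methods attached to its prototype.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

var p = new Point(3, 4);
console.log(p.norm()); // 5
```

Real transpilers also emit helpers for inheritance, `super`, and enumerability details, but this constructor-plus-prototype shape is the core of the mapping.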
Sep 02 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 3 September 2015 at 06:45:16 UTC, Jacob Carlborg 
wrote:
 There's a lot of stuff other languages can do that JS can't. 
 For example, classes, which a lot of developers prefer to use 
 in favor of the weird object system in JS.
You can kinda do classes in JS, it just isn't pretty syntax. In the D to JS toy I did, I just did an array of function pointers to handle the virtual functions, similar to how D is compiled to machine code. It'd be fairly ugly to write by hand but when converting languages, it works well enough.
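A minimal sketch of that scheme, with hypothetical names (Adam's actual generated code isn't shown in the thread): each class gets an array of function pointers, and a virtual call becomes an indexed load plus an indirect call, much like D's compiled vtable dispatch.

```javascript
// Hypothetical translator output: per-class vtable arrays.
// Slot 0 holds the overridable "speak" method.
var Animal_vtbl = [function () { return "animal"; }];
var Dog_vtbl    = [function () { return "dog"; }]; // overrides slot 0

function Animal() { this.vtbl = Animal_vtbl; }
function Dog()    { this.vtbl = Dog_vtbl; }

// A virtual call site: obj.vtbl[0](...) instead of obj.speak(...).
function describe(obj) {
  return obj.vtbl[0].call(obj);
}

console.log(describe(new Animal())); // "animal"
console.log(describe(new Dog()));    // "dog"
```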
Sep 03 2015
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 13:12:07 UTC, Adam D. Ruppe 
wrote:
 On Thursday, 3 September 2015 at 06:45:16 UTC, Jacob Carlborg 
 wrote:
 There's a lot of stuff other languages can do that JS can't. 
 For example, classes, which a lot of developers prefer to use 
 in favor of the weird object system in JS.
If you don't change the prototype object, then it is mostly similar, but more flexible. Functions are constructors and prototype objects are class definitions. You could also use TypeScript; the TypeScript playground is quite fun. It allows you to explore the JS output in real time.
 You can kinda do classes in JS, it just isn't pretty syntax. In 
 the D to JS toy I did, I just did an array of function pointers 
 to handle the virtual functions, similar to how D is compiled 
 to machine code.

 It'd be fairly ugly to write by hand but when converting 
 languages, it works well enough.
Huh? Dynamic languages have dynamic lookup, how is that different from virtual functions?
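The earlier point about constructors and prototypes can be illustrated with a small hypothetical example: the prototype object plays the role of the class definition, prototype lookup is the dynamic dispatch being referred to, and (the "more flexible" part) the prototype can be extended after instances already exist.

```javascript
// A constructor function plus a prototype object acting as the
// "class definition".
function Counter() { this.n = 0; }
Counter.prototype.inc = function () { this.n += 1; };

var c = new Counter();
c.inc(); // looked up dynamically on Counter.prototype

// The flexible part: add a method to the prototype later;
// existing instances see it immediately.
Counter.prototype.reset = function () { this.n = 0; };
c.reset();

console.log(c.n); // 0
```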
Sep 03 2015
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 3 September 2015 at 21:01:10 UTC, Ola Fosheim 
Grostad wrote:
 Huh? Dynamic languages have dynamic lookup, how is that 
 different from virtual functions?
The specific implementation I used was like what D compiles to: index into an array. So it is a bit clunky to do obj.vtbl[2](args) rather than obj.foo(args).
Sep 03 2015
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 3 September 2015 at 21:01:10 UTC, Ola Fosheim 
Grostad wrote:
 On Thursday, 3 September 2015 at 13:12:07 UTC, Adam D. Ruppe 
 wrote:
 On Thursday, 3 September 2015 at 06:45:16 UTC, Jacob Carlborg 
 wrote:
 There's a lot of stuff other languages can do that JS can't. 
 For example, classes, which a lot of developers prefer to use 
 in favor of the weird object system in JS.
If you don't change the prototype object, then it is mostly similar, but more flexible. Functions are constructors and prototype objects are class definitions. You could also use typescript, typescript playground is quite fun. It allows you to explore the JS output in realtime.
 You can kinda do classes in JS, it just isn't pretty syntax. 
 In the D to JS toy I did, I just did an array of function 
 pointers to handle the virtual functions, similar to how D is 
 compiled to machine code.

 It'd be fairly ugly to write by hand but when converting 
 languages, it works well enough.
Huh? Dynamic languages have dynamic lookup, how is that different from virtual functions?
Maybe because you need 2 map lookups + 1 indirection instead of an array lookup, in addition to the indirect call. But who knows, with a vectorized SSA, it surely will be faster than light.
Sep 03 2015
parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 22:48:47 UTC, deadalnix wrote:
 On Thursday, 3 September 2015 at 21:01:10 UTC, Ola Fosheim 
 Grostad wrote:
 Huh? Dynamic languages have dynamic lookup, how is that 
 different from virtual functions?
Maybe because you need 2 map lookups + 1 indirection instead of an array lookup in addition of the indirect call. But who know, with a vectorized SSA, it surely will be faster than light.
obj.f is not slower than obj.v[5]; on the contrary, it is faster.
Sep 03 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-03 23:01, Ola Fosheim Grostad wrote:

 If you don't change the prototype object, then it is mostly similar, but
 more flexible. Functions are constructors and prototype objects are
 class definitions.
Looking at how many languages compile to JS, how many JS libraries out there try to invent some kind of new syntax for declaring classes, and now ES6, I would say a lot of people are not satisfied with the current state of JS.
 You could also use typescript, typescript playground
 is quite fun. It allows you to explore the JS output in realtime.
Yeah, that's one those languages I was thinking about ;) -- /Jacob Carlborg
Sep 04 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 07:44:23 UTC, Jacob Carlborg wrote:
 Looking how many languages compile to JS and how many JS 
 libraries there are out there that try to invent some kind of 
 new syntax for declaring classes and now E6, I would say a lot 
 of people are not satisfied with the current state of JS.
Yes, JS gets painful once you get above 1000 lines of code. Fortunately we can now use typed ES6 also on older platforms. There are also polyfills for things like Promises (async) and Fetch (dead-easy ajax HTTP requests).
 You could also use typescript, typescript playground
 is quite fun. It allows you to explore the JS output in 
 realtime.
Yeah, that's one those languages I was thinking about ;)
I have no problem recommending TypeScript with WebStorm (or some other editor) for business-type applications. For more demanding apps I think we need a more traditional language that generates high-quality asm.js-style code as well as regular ES6, plus some clean bridging. So either a new language or a modified language (like a D++ or something).
Sep 04 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 07:52:30 UTC, Ola Fosheim Grøstad 
wrote:
 I have no problem recommending TypeScript with WebStorm (or 
 some other editor) for business like applications.
Err... avoid WebStorm. Just noticed JetBrains have decided to rip off their customers with a subscription model and increase their pricing 100%. Damn, I'm going back to OpenSource IDEs…
Sep 04 2015
next sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Friday, 4 September 2015 at 13:39:45 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 4 September 2015 at 07:52:30 UTC, Ola Fosheim 
 Grøstad wrote:
 I have no problem recommending TypeScript with WebStorm (or 
 some other editor) for business like applications.
Err... avoid WebStorm. Just noticed JetBrains have decided to rip off their customers with a subscription model and increase their pricing 100%. Damn, I'm going back to OpenSource IDEs…
I believe the FOSS version of IntelliJ can install the JavaScript plugin, which also adds support for TypeScript. May be wrong.
Sep 04 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-04 15:39, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 Err... avoid WebStorm. Just noticed JetBrains have decided to rip off
 their customers with a subscription model and increase their pricing
 100%. Damn, I'm going back to OpenSource IDEs…
I heard the TypeScript support for Visual Studio Code is really good. -- /Jacob Carlborg
Sep 04 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 4 September 2015 at 14:25:11 UTC, rsw0x wrote:
 I believe the FOSS version of Intellij can install the 
 Javascript plugin which also adds support for Typescript.
 May be wrong.
Hm. I bought WebStorm to do Dart, but have kinda put Dart on hold, so maybe not a bad idea. I assume it would then work with PyCharm CE.

On Friday, 4 September 2015 at 14:44:46 UTC, Jacob Carlborg wrote:
 I heard the TypeScript support for Visual Studio Code is really 
 good.
I'm crossing my fingers for an OS-X or Linux version of VS. ;)
Sep 05 2015
parent reply "Kagamin" <spam here.lot> writes:
On Saturday, 5 September 2015 at 14:57:58 UTC, Ola Fosheim 
Grøstad wrote:
 On Friday, 4 September 2015 at 14:44:46 UTC, Jacob Carlborg 
 wrote:
 I heard the TypeScript support for Visual Studio Code is 
 really good.
I'm crossing my fingers for an OS-X or Linux version of VS. ;)
You mean Visual Studio Code doesn't run on Linux?
Sep 06 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 6 September 2015 at 12:31:27 UTC, Kagamin wrote:
 On Saturday, 5 September 2015 at 14:57:58 UTC, Ola Fosheim 
 Grøstad wrote:
 On Friday, 4 September 2015 at 14:44:46 UTC, Jacob Carlborg 
 wrote:
 I heard the TypeScript support for Visual Studio Code is 
 really good.
I'm crossing my fingers for an OS-X or Linux version of VS. ;)
You mean Visual Studio Code doesn't run on Linux?
Oh, actually it appears to run on both OS-X and Linux. I didn't know that. Looks very promising, thanks!
Sep 06 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-06 15:24, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 Oh, actually it appears to run on both OS-X and Linux. I didn't know
 that. Looks very promising, thanks!
Yeah, it's built on the same framework as Atom. Or were you hoping for Visual Studio, sans Code, on OS X and Linux? -- /Jacob Carlborg
Sep 07 2015
parent "Ola Fosheim Grøstad" writes:
On Monday, 7 September 2015 at 13:41:31 UTC, Jacob Carlborg wrote:
 On 2015-09-06 15:24, Ola Fosheim Grøstad
 <ola.fosheim.grostad+dlang gmail.com> wrote:

 Oh, actually it appears to run on both OS-X and Linux. I 
 didn't know
 that. Looks very promising, thanks!
Yeah, it's built on the same framework as Atom. Or were you hoping for Visual Studio, sans Code, on OS X and Linux?
I knew there was an Atom-based TypeScript editor (or several?), but didn't know that Microsoft was behind it and that "VS Code" was different from "VS"/"VS Express" ;). TypeScript seems to have a lot of momentum. I'm thinking about the possibility of a transpiler TypeScript->D (and other languages).
Sep 07 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 3 September 2015 at 13:57, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. I guess if
 you
 target a canvas and run your code in it that makes more sense but my
 preferred
 style is a progressive enhancement webpage where you want to know the
 browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another language. You can't do anything more than JS can do, and you're likely to be doing less.
You have a pile of existing code, you need to run it on a webpage, and don't have time/budget to rewrite that code. Emscripten is an opportunity, it is an enabling technology. Something that you can do and brings a nice business opportunity that you just wouldn't do otherwise, as in our case: http://udserver.euclideon.com/demo/ It would have been great if this were written in D, but we reverted to C++ because LDC doesn't support Emscripten (yet?). Our major active project at work also now depends on Emscripten and PNaCl; 2 exotic LDC targets which would get my office onto D quicksmart! I've never suffered C++ so violently.
Sep 03 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/3/2015 3:27 AM, Manu via Digitalmars-d wrote:
 Our major active project at work also now depends on Emscripten and
 PNaCl; 2 exotic LDC targets which would get my office onto D
 quicksmart!
ping Adam Ruppe?
 I've never suffered C++ so violently.
I feel your pain!
Sep 03 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 09/02/2015 11:57 PM, Walter Bright wrote:
 On 9/2/2015 7:48 PM, Adam D. Ruppe wrote:
 but still i'm meh on the practical usefulness of such things. I guess
 if you
 target a canvas and run your code in it that makes more sense but my
 preferred
 style is a progressive enhancement webpage where you want to know the
 browser
 platform and work with it rather than around it.
I don't see a whole lot of point to generating JS from another language. You can't do anything more than JS can do, and you're likely to be doing less.
The premise here is that Javascript is a lock-in for code that can run in the browser. So to get D code to run in the browser you'd need to generate Javascript. This concern is orthogonal to relative capabilities of languages etc. -- Andrei
Sep 04 2015
prev sibling parent reply "Laeeth Isharc" <Laeeth.nospam nospam-laeeth.com> writes:
On Sunday, 30 August 2015 at 02:16:13 UTC, Adam D. Ruppe wrote:
 On Sunday, 30 August 2015 at 01:48:06 UTC, Ola Fosheim Grostad 
 wrote:
 When people work on FOUR compilers then you cannot complain 
 about lack of resources. You then need to see if you can do 
 something to unite efforts.
They aren't really four compilers. It is more like 2.1. sdc is a separate project, but dmd, ldc, and gdc share like 90% of the effort, so it is more realistic to call them 1.1 compilers rather than 3...
And it's not like Walter could order the vast and well compensated GDC team to stop and go and work on DMD, even if that made sense. To make the observation that someone unhappy with a state of affairs has the option to contribute the time to help move the world in the direction they think good is not quite the same thing as complaining about a lack of resources. Morale is important in long term projects that don't pay off very quickly, and constant nagging and grumbling doesn't tend to help, even in the case when it is entirely well founded.
Aug 29 2015
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 30 August 2015 at 06:07:25 UTC, Laeeth Isharc wrote:
 To make the observation that someone unhappy with a state of 
 affairs has the option to contribute the time to help move the 
 world in the direction they think good is not quite the same 
 thing as complaining about a lack of resources.   Morale is 
 important in long term projects that don't pay off very 
 quickly, and constant nagging and grumbling doesn't tend to 
 help, even in the case when it is entirely well founded.
Actually, it does help. There have been changes over time. Right now the most sensible thing to do is to focus on stability, refactor the codebase, document the codebase and track regressions. If you actually want others to contribute you need to lead by example... E.g. you need to get the boring stuff done first. There is absolutely no point in helping out with a project that adds features/optimizations faster than they are finished. Such projects are never finished. Even if you got 10 more people to work on it, the outcome would not be that it would be finished; you would end up getting more features, not more polish.
Aug 30 2015
parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Sunday, 30 August 2015 at 16:17:49 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 30 August 2015 at 06:07:25 UTC, Laeeth Isharc wrote:
 Morale is important in long term projects that don't pay off 
 very quickly, and constant nagging and grumbling doesn't tend 
 to help, even in the case when it is entirely well founded.
Actually, it does help.
There's a big difference between saying "dmd generates code that's slow by the standards of modern C++ compilers. it's an impressive accomplishment for a small group, but we ought to do better. I know C++ and asm, and only have a little time, but I would like to help. what would be most useful to look at ?" and something else. In my experience grumbling without action doesn't tend to lead to the change you want in the world. In theory maybe it should, and someone will listen. But human beings are funny creatures.
 There have been changes over time. Right now the most sensible 
 thing to do is to focus on stability, refactor the codebase, 
 document the codebase and track regressions. If you actually 
 want others to contribute you need to lead by example... E.g. 
 you need to get the boring stuff done first.
Well how would a dictator of D accomplish that? Probably porting the compiler to D would be a pretty good start, for a variety of reasons. That will help with stability, refactoring, and documentation I should have thought. Not everyone knows C++, and of those who do, not everyone wants to write in it. By the way, the dmd source code doesn't seem horribly obscure to read at first glance.
 If you actually want others to contribute you need to lead by 
 example
alors ? as you point out, an asm.js backend would be rather nice to have, and you are by all accounts a low-level guy, so it shouldn't be hard to do, no ?
 There is absolutely no point in helping out with a project that 
 add features/optimizations faster than they are finished. Such 
 projects are never finished. Even if you got 10 more people to 
 work on it, the outcome would not be that it would be finished, 
 you would end up getting more features, not more polish.
Given that C compilers are still developing, I doubt D will ever be finished in my useful career. So what ? The facts speak against your claim, since D is clearly becoming more polished with every release - just look at the improvements to the GC within the past year. (Runtime/compiler, whatever).
 For a project that is in perpetual beta leaders need to show 
 their priorities.
Andrei talked about documentation and presentation a little while back, and we're now in a much better state. Allocators soon here. People have got D to run on embedded systems with low resources. GC is getting better - maybe not yet good enough for some, but I don't know that it's an easy problem and realistic to expect things to change in the space of a year. etcimon has written something interesting - maybe a niche application or not yet ready for prime time - I haven't had time to see. Walter talked about the need to get across that D is being used for serious business by serious people. He made the Andy Smith talk happen, and that certainly increases the language credibility amongst a group that spends a lot of money on technology. Maybe D will have another decent-sized hedge fund user soon. I tried with a bank, but they had bigger problems.
 1. Complete the language specification (define semantics).
 2. Implement and polish semantics.
 3. Clean up syntax.
 4. Tooling.
 5. Performance.
As some wit once said, what is true is not original, and what is original is not necessarily true. Ie syntax/tooling/performance is already a focus here. Semantics, I don't know. I'd bet on good taste over excessive formalisation, but I am no computer science expert. It's good enough for my modest needs. I'd honestly bet that a little more effort to communicate the practical commercial benefits of D would make much more of a difference than this abstract stuff. But who am I to say. Laeeth.
Sep 02 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 03, 2015 at 02:30:39AM +0000, Laeeth Isharc via Digitalmars-d wrote:
 On Sunday, 30 August 2015 at 16:17:49 UTC, Ola Fosheim Grøstad wrote:
On Sunday, 30 August 2015 at 06:07:25 UTC, Laeeth Isharc wrote:
Morale is important in long term projects that don't pay off very
quickly, and constant nagging and grumbling doesn't tend to help,
even in the case when it is entirely well founded.
Actually, it does help.
There's a big difference between saying "dmd generates code that's slow by the standards of modern C++ compilers. it's an impressive accomplishment for a small group, but we ought to do better. I know C++ and asm, and only have a little time, but I would like to help. what would be most useful to look at ?" and something else. In my experience grumbling without action doesn't tend to lead to the change you want in the world. In theory maybe it should, and someone will listen. But human beings are funny creatures.
[...] Especially in this forum, where large quantities of discussions, complaints, grandiose ideas, etc., are produced every day, yet disappointingly little of it actually results in anything. Submitting PRs on Github, or even just submitting bug reports / enhancement requests, OTOH, produces much more tangible value, in spite of frequently not even being mentioned here on the forum. As Walter once said, "Be the change you wish to see." Somebody else once said, "Talk is cheap; whining is actually free", which seems especially pertinent to this forum. --T
Sep 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/2/2015 9:09 PM, H. S. Teoh via Digitalmars-d wrote:
 As Walter once said, "Be the change you wish to see."
I think that was Andrei. But I do agree with it.
Sep 02 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 3 September 2015 at 06:54:05 UTC, Walter Bright 
wrote:
 On 9/2/2015 9:09 PM, H. S. Teoh via Digitalmars-d wrote:
 As Walter once said, "Be the change you wish to see."
I think that was Andrei. But I do agree with it.
It's Gandhi.
Sep 03 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/3/2015 1:31 AM, deadalnix wrote:
 On Thursday, 3 September 2015 at 06:54:05 UTC, Walter Bright wrote:
 On 9/2/2015 9:09 PM, H. S. Teoh via Digitalmars-d wrote:
 As Walter once said, "Be the change you wish to see."
I think that was Andrei. But I do agree with it.
It's Gandhi.
Ah, makes sense. Thanks for the correction.
Sep 03 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 09/03/2015 02:54 AM, Walter Bright wrote:
 On 9/2/2015 9:09 PM, H. S. Teoh via Digitalmars-d wrote:
 As Walter once said, "Be the change you wish to see."
I think that was Andrei. But I do agree with it.
I think it was Gandhi :o). -- Andrei
Sep 04 2015
prev sibling parent "Ola Fosheim Grøstad" writes:
On Thursday, 3 September 2015 at 02:30:41 UTC, Laeeth Isharc 
wrote:
 In my experience grumbling without action doesn't tend to lead 
 to the change you want in the world.  In theory maybe it 
 should, and someone will listen.  But human beings are funny 
 creatures.
End-user back pressure helps. If it does not help, then the development process is FUBAR. However, it does help: Walter is an intelligent man who enjoys defending his position, but that does not mean he does not listen to arguments and internalize them over time.
 Well how would a dictator of D accomplish that?  Probably 
 porting the compiler to D would be a pretty good start, for a 
 variety of reasons.  That will help with stability, 
 refactoring, and documentation I should have thought.  Not 
 everyone knows C++, and of those who do, not everyone wants to 
 write in it.
Dedicating one cycle to porting, another to refactoring and documentation is a very good start. IFF you focus on that and stick with it and avoid adding more features as code at the same time. (Add them to the plan/spec instead.)
 By the way, the dmd source code doesn't seem horribly obscure 
 to read at first glance.
Reading is one thing, making it do what you want another. For that you need either documentation or "reverse-engineering the underlying model".
 alors ?  as you point out, an asm.js backend would be rather 
 nice to have, and you are by all accounts a low-level guy, so 
 it shouldn't be hard to do, no ?
I don't have enough time to figure out all the ins-and-outs of the current compiler. To do that, in reasonable time, I would need a refactored and documented compiler. (I am also not primarily a low level person, my background is broad, but my major was in systems development).
 Given that C compilers are still developing, I doubt D will 
 ever be finished in my useful career.  So what ?  The facts 
 speak against your claim, since D is clearly becoming more 
 polished with every release - just look at the improvements to 
 the GC within the past year.  (Runtime/compiler, whatever).
Oh, I would love for D to follow the trajectory of C. C development can follow a near-waterfall development model:

1. specification-ratification cycle
2. specification-based gamma-beta-alpha production implementation
3. release
4. non-semantic improvements

D is nowhere near C's conservative, fully spec'ed, ISO-standard-based process. Semantically C does not have many advantages over D, except VLAs (which I find rather useful and think D should adopt).
 Andrei talked about documentation and presentation a little 
 while back, and we're now in a much better state.  Allocators 
 soon here.
 People have got D to run on embedded systems with low resources.
What people can do isn't really all that interesting. What is interesting is what you can do with little effort and better than the alternatives. BASIC ran on machines with just a few K's of RAM, that does not make it a reasonable choice.
 enough for my modest needs.  I'd honestly bet that a little 
 more effort to communicate the practical commercial benefits of 
 D would make much more of a difference than this abstract 
 stuff.  But who am I to say.
I think you underestimate the expectations of people with a major in compsci or a significant amount of development experience. They care about the actual qualities, not the presentation. Those are the CTOs you need to convince, not the CEOs. As it stands today, both Rust and D fall into the category of toy languages. "Toy language" is the term academics use to describe languages that explore interesting ideas but do not have the polish, tooling or commercial backing to take a significant market share.
Sep 02 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/29/2015 1:13 PM, Ola Fosheim Grostad wrote:
 But the net effect of maintaining 3 different backends is sending signals that
 the project lacks direction and priorities.
Back when there was only 1 compiler, people complained about that, saying it signaled lack of reliable support. Having 3 D compilers is a big positive. Each has their strengths and weaknesses. It's all good. People can and do interpret anything and everything about D as a negative. Or get involved and do something positive.
Sep 02 2015
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 2 Sep 2015 9:05 pm, "Walter Bright via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 8/29/2015 1:13 PM, Ola Fosheim Grostad wrote:
 But the net effect of maintaining 3 different backends is sending signals that the project lacks direction and priorities.
Back when there was only 1 compiler, people complained about that, saying it signaled lack of reliable support.

Is this argument still being used?

This is the best example of double standards that outside reviewers give
about the core D maintainers.

In any other language, you'd call it freedom of choice (devil's advocate:
the fact that there are dozens of C++ compilers has a negative impact on
usage and adoption).

Iain.
Sep 02 2015
next sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Wednesday, 2 September 2015 at 21:51:58 UTC, Iain Buclaw wrote:
 On 2 Sep 2015 9:05 pm, "Walter Bright via Digitalmars-d" < 
 digitalmars-d puremagic.com> wrote:
 On 8/29/2015 1:13 PM, Ola Fosheim Grostad wrote:
 But the net effect of maintaining 3 different backends is 
 sending signals that the project lacks direction and priorities.
Back when there was only 1 compiler, people complained about that, saying it signaled lack of reliable support.

 Is this argument still being used?

 This is the best example of double standards that outside 
 reviewers give about the core D maintainers.

 In any other language, you'd call it freedom of choice (devil's 
 advocate: the fact that there are dozens of C++ compilers has a 
 negative impact on usage and adoption).

 Iain.
A very interesting phenomenon, and one tends to refine one's understanding of it when thinking about developments in financial markets, because then it's serious and one has one's career at stake in learning to understand this phenomenon better. There's a great deal of insight in Dr Iain McGilchrist's The Master and His Emissary. What I think is happening is the gestalt perception system in the brain has an emotional reaction to some entity (in this case D) and the part specialised in articulation of ideas (that is also prone to confabulation) comes up with a reason to explain why. But the feeling is primary, and the words just justify the feeling (since cognitive dissonance is uncomfortable). The best recent example of this phenomenon was the hysteria over how the dollar was going to collapse, and anyone with any brain simply had to own precious metals and real assets + emerging market currencies. It didn't quite play out that way, and it wasn't hard to figure that out at the time. (Timing was the hard part): http://www.slideshare.net/Laeeth/is-the-us-dollar-bottoming Anyway, once you start to understand this phenomenon you see it everywhere. I do believe that's in play with regards to D. And that's why it really doesn't matter what the naysayers believe - the ones you want to focus on pleasing are those who are favourably disposed towards D anyway and just need to understand the case for it better, or have one or two missing things completed. Laeeth.
Sep 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/2/2015 7:08 PM, Laeeth Isharc wrote:
 And that's why it really doesn't
 matter what the naysayers believe - the ones you want to focus on pleasing are
 those who are favourably disposed towards D anyway and just need to understand
 the case for it better, or have one or two missing things completed.
That's right. I've heard "I'd use your product if only you added Feature X" for 35 years. Every time I come back with "Here's X, now you can use it!" they just come back with "That's great, but I actually need Feature Y". The truth is, those people will never use it. They just come up with endless reasonable sounding excuses. They'll wear you out chasing rainbows. Those kinds of feature requests should be responded to politely, but with a healthy amount of skepticism. Of much more realistic interest are those who are already heavily using the product, but are blocked by this or that.
Sep 02 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 02, 2015 at 09:09:43PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/2/2015 7:08 PM, Laeeth Isharc wrote:
And that's why it really doesn't matter what the naysayers believe -
the ones you want to focus on pleasing are those who are favourably
disposed towards D anyway and just need to understand the case for it
better, or have one or two missing things completed.
That's right. I've heard "I'd use your product if only you added Feature X" for 35 years. Every time I come back with "Here's X, now you can use it!" they just come back with "That's great, but I actually need Feature Y".
Too true!

    auto featuresRequested = [ x ];
    while (featuresRequested.length > 0)
    {
        refuse(product, new Reason(featuresRequested));
        auto newFeatures = waitForNextRelease();
        foreach (x; newFeatures)
        {
            featuresRequested.remove(x);
            auto y = new Excuse();
            featuresRequested ~= y;
        }
    }
    adopt(product);
 The truth is, those people will never use it. They just come up with
 endless reasonable sounding excuses. They'll wear you out chasing
 rainbows.
 
 Those kinds of feature requests should be responded to politely, but
 with a healthy amount of skepticism.
 
 Of much more realistic interest are those who are already heavily
 using the product, but are blocked by this or that.
Yes, serve existing customers well, and they will spread the word for you, leading to more customers. Divert your energy to please non-customers in hopes of winning them over, and you may end up driving away what customers you do have. T -- Live a century, learn a century; you'll still die a fool. (Russian proverb)
Sep 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/2/2015 9:28 PM, H. S. Teoh via Digitalmars-d wrote:
 Yes, serve existing customers well, and they will spread the word for
 you, leading to more customers. Divert your energy to please
 non-customers in hopes of winning them over, and you may end up driving
 away what customers you do have.
That's a good description of the approach I prefer.
Sep 02 2015
parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 06:56:16 UTC, Walter Bright 
wrote:
 On 9/2/2015 9:28 PM, H. S. Teoh via Digitalmars-d wrote:
 Yes, serve existing customers well, and they will spread the 
 word for
 you, leading to more customers. Divert your energy to please
 non-customers in hopes of winning them over, and you may end 
 up driving
 away what customers you do have.
That's a good description of the approach I prefer.
Unless you sell DMD, how about providing a definition of "customer"? If you don't pay attention to evaluations, then surely the competition will steal your thunder and you'll end up like Cobol: on a long tail in maintenance mode.
Sep 03 2015
prev sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Wednesday, 2 September 2015 at 21:51:58 UTC, Iain Buclaw wrote:
 This is the best example of double standards that outside 
 reviewers give about the core D maintainers.
Hogwash. AFAIK, they complained about the backend, not a lack of compilers. When you get fracturing related to codebase quality, licensing or language semantics, you have development-strategy issues that cause fracturing (you also have private forks like ketmar's). Besides, different people having different opinions is not double standards.
 In any other language, you'd call it freedom of choice (devil's 
 advocate: the fact that there are dozens of C++ compilers has a 
 negative impact on usage and adoption).
Most C++ compilers are dead. If C++ had as many compilers per unit of resources and usage as D does, it would have 100s of free compilers; as it is, you have 2 free C++14 compilers. By your argument emacs and xemacs should never have been merged; the fact is that they were better off uniting efforts.
Sep 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/2/2015 9:55 PM, Ola Fosheim Grostad wrote:
 Most C++ compilers are dead.
Actually, only a tiny handful of original C++ compilers were ever created. The rest are just evolved versions of them. To list them (from memory):

Cfront (Bjarne Stroustrup)
Zortech C++ (Me)
G++ (Michael Tiemann)
Clang
Edison Design Group (Daveed Vandevoorde)
Taumetric (Michael Ball)
Microsoft

There were a lot of original C compilers developed, but they pretty much all failed to make the transition to C++.
Sep 03 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 3 September 2015 at 07:04:11 UTC, Walter Bright 
wrote:
 Actually, only a tiny handful of original C++ compilers were 
 ever created. The rest are just evolved versions of them.
what about Borland's compiler?
Sep 03 2015
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 03-Sep-2015 16:09, Adam D. Ruppe wrote:
 On Thursday, 3 September 2015 at 07:04:11 UTC, Walter Bright wrote:
 Actually, only a tiny handful of original C++ compilers were ever
 created. The rest are just evolved versions of them.
what about Borland's compiler?
Seconded, it was horrible but still was there since MS-DOS. -- Dmitry Olshansky
Sep 03 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-09-03 15:09, Adam D. Ruppe wrote:

 what about Borland's compiler?
That would be Taumetric in Walter's list [1][2]. [1] https://en.wikipedia.org/wiki/Borland_C%2B%2B [2] https://en.wikipedia.org/wiki/Turbo_C%2B%2B -- /Jacob Carlborg
Sep 03 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/3/2015 7:30 AM, Jacob Carlborg wrote:
 On 2015-09-03 15:09, Adam D. Ruppe wrote:

 what about Borland's compiler?
That would be Taumetric in Walter's list [1][2]. [1] https://en.wikipedia.org/wiki/Borland_C%2B%2B [2] https://en.wikipedia.org/wiki/Turbo_C%2B%2B
Apple had licensed Symantec C++ at one point. I sometimes wonder what influence it had on clang.
Sep 03 2015
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Thursday, 3 September 2015 at 18:14:45 UTC, Walter Bright 
wrote:
 I sometimes wonder what influence it had on clang.
In terms of design, not more than any other C++ compiler would have as far as I can tell. Might be interesting for you to have a closer look at it at some point though for comparison (it's non-copyleft, so no need to be afraid of lawyers there). — David
Sep 03 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/3/2015 2:37 PM, David Nadlinger wrote:
 On Thursday, 3 September 2015 at 18:14:45 UTC, Walter Bright wrote:
 I sometimes wonder what influence it had on clang.
In terms of design, not more than any other C++ compiler would have as far as I can tell. Might be interesting for you to have a closer look at it at some point though for comparison (it's non-copyleft, so no need to be afraid of lawyers there).
Yeah, I'd have to read through the source code. But it's still copyrighted, and I prefer not to be subject to taint. For years, many people did not believe I could have created a C++ compiler on my own, and they were quick to believe any possibility that I didn't. Refusing to look at other compilers helped a lot with this. I'm still the only person to have ever created a C++ compiler from front to back. :-) Of course, DMC++ is a C++98 compiler, and C++ has moved on.
Sep 03 2015
parent "David Nadlinger" <code klickverbot.at> writes:
On Thursday, 3 September 2015 at 21:58:18 UTC, Walter Bright 
wrote:
 Yeah, I'd have to read through the source code. But it's still 
 copyrighted, and I prefer not to be subject to taint. For 
 years, many people did not believe I could have created a C++ 
 compiler on my own, and they were quick to believe any 
 possibility that I didn't. Refusing to look at other compilers 
 helped a lot with this.
Yeah, sure, that's your choice. It would have been interesting to hear your take on it, though. — David
Sep 03 2015
prev sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 3 September 2015 at 07:04:11 UTC, Walter Bright 
wrote:
 On 9/2/2015 9:55 PM, Ola Fosheim Grostad wrote:
 Most C++ compilers are dead.
Actually, only a tiny handful of original C++ compilers were ever created. The rest are just evolved versions of them. To list them (from memory): Cfront (Bjarne Stroustrup) Zortech C++ (Me) G++ (Michael Tiemann) Clang Edison Design Group (Daveed Vandevorde) Taumetric (Michael Ball) Microsoft There were a lot of original C compilers developed, but they pretty much all failed to make the transition to C++.
I expected the list to be longer. Which one represents the non-cfront SGI compilers? SGI was quite heavily into C++ unlike most of the Unix world.
Sep 03 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/3/2015 2:28 PM, Ola Fosheim Grostad wrote:
 I expected the list to be longer.
I don't. It takes 10 years to write a C++ compiler, and most companies wanting to get into the business found it far more practical to buy one as a starting point.
 Which one represents the non-cfront SGI
 compilers? SGI was quite heavily into C++ unlike most of the Unix world.
I don't know, but many companies tended to hide where they got their starting point. I know about a few of them simply from being in the business and knowing the players. It's like VCRs. There were only a couple makers of VCR guts, but a lot of VCR boxes with different brand names on them that repackaged the same old guts. The same goes for dishwashers, SD cards, DVD blanks, etc. EDG licensed their front end to a lot of companies who made their own branded C++ compilers, such as Intel C++.
Sep 03 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/29/2015 12:37 PM, Laeeth Isharc wrote:
 In my experience you can deliver
 everything people say they want, and then find it isn't that at all.
That's so true. My favorite anecdote on that was back in the 1990's. A friend of mine said that what he and the world really needed was a Java native compiler. It'd be worth a fortune! I told him that I had that idea a while back, and had implemented one for Symantec. I could get him a copy that day. He changed the subject. I have many, many similar stories. I also have many complementary stories - implementing things that people laugh at me for doing, that turn out to be crucial. We can start with the laundry list of D features that C++ is rushing to adopt :-)
Sep 02 2015
parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 02/09/2015 19:58, Walter Bright wrote:
 On 8/29/2015 12:37 PM, Laeeth Isharc wrote:
 In my experience you can deliver
 everything people say they want, and then find it isn't that at all.
That's so true. My favorite anecdote on that was back in the 1990's. A friend of mine said that what he and the world really needs was a Java native compiler. It'd be worth a fortune! I told him that I had that idea a while back, and had implemented one for Symantec. I could get him a copy that day. He changed the subject. I have many, many similar stories. I also have many complementary stories - implementing things that people laugh at me for doing, that turn out to be crucial. We can start with the laundry list of D features that C++ is rushing to adopt :-)
Yes, and this I think is demonstrative of a very important consideration: if someone says they want X (and they are not paying upfront for it), then it is crucial for *you* to be able to figure out whether that person or group actually wants X or not. If someone spends time building a product or feature that turns out people don't want... the failure is on that someone. And on this aspect I think the development of D does very poorly. Often people clamored for a feature or change (whether people in the D community, or the C++ one), and Walter, you went ahead and did it, regardless of whether it would actually increase D usage in the long run. You are prone to this, given your nature to please people who ask for things, or to prove people wrong (as you yourself admitted). I apologize for not remembering any example at the moment, but I know there were quite a few, especially many years back. It usually went like this:

C++ community guy: "D is crap, it's not gonna be used without X"
*some time later*
Walter: "Ok, I've now implemented X in D!"
the same C++ community guy: either finds another feature or change to complain about (repeat), or goes silent, or goes "meh, D is still not good"
Me and other people from the D community: "ok... now we have a new half-baked functionality in D, adding complexity for little value, and put here only to please people who are extremely unlikely to ever use D in any case"...

-- Bruno Medeiros https://twitter.com/brunodomedeiros
Sep 16 2015
next sibling parent Ola Fosheim Grøstad writes:
On Wednesday, 16 September 2015 at 14:40:26 UTC, Bruno Medeiros 
wrote:
 Me and other people from D community: "ok... now we have a new 
 half-baked functionality in D, adding complexity for little 
 value, and put here only to please people that are extremely 
 unlikely to ever be using D whatever any case"...
D is fun for prototyping ideas, so yes half-baked and not stable, but still useful. I'm waiting for Rust to head down the same lane of adding features and obfuscating the syntax (and their starting point is even more complex than D's was)...
Sep 16 2015
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On Wednesday, 16 September 2015 at 14:40:26 UTC, Bruno Medeiros 
wrote:
 And on this aspect I think the development of D does very 
 poorly. Often people clamored for a feature or change (whether 
 people in the D community, or the C++ one), and Walter you went 
 ahead and did it, regardless of whether it will actually 
 increase D usage in the long run. You are prone to this, given 
 your nature to please people who ask for things, or to prove 
 people wrong (as you yourself admitted).

 I apologize for not remembering any example at the moment, but 
 I know there was quite a few, especially many years back. It 
 usually went like this:

 C++ community guy: "D is crap, it's not gonna be used without X"
 *some time later*
 Walter: "Ok, I've now implemented X in D!"
 the same C++ community guy: either finds another feature or 
 change to complain about (repeat), or goes silent, or goes 
 "meh, D is still not good"
 Me and other people from D community: "ok... now we have a new 
 half-baked functionality in D, adding complexity for little 
 value, and put here only to please people that are extremely 
 unlikely to ever be using D whatever any case"...
I find this assessment inaccurate. In my own experience, I have come to see Walter as Dr. No (in a good sense!) in that he has said no to a great many feature requests over the years. The instances where a feature was implemented that took the community by surprise have been rare indeed. And even then, we are not privy to the support requests and other discussions that Walter has with the businesses using D. I'm confident that what goes on in his head when deciding to pursue a change or enhancement has little to do with willy-nilly complaints by C++ users.
Sep 17 2015
parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 17/09/2015 09:06, Mike Parker wrote:
 On Wednesday, 16 September 2015 at 14:40:26 UTC, Bruno Medeiros wrote:
 And on this aspect I think the development of D does very poorly.
 Often people clamored for a feature or change (whether people in the D
 community, or the C++ one), and Walter you went ahead and did it,
 regardless of whether it will actually increase D usage in the long
 run. You are prone to this, given your nature to please people who ask
 for things, or to prove people wrong (as you yourself admitted).

 I apologize for not remembering any example at the moment, but I know
 there was quite a few, especially many years back. It usually went
 like this:

 C++ community guy: "D is crap, it's not gonna be used without X"
 *some time later*
 Walter: "Ok, I've now implemented X in D!"
 the same C++ community guy: either finds another feature or change to
 complain about (repeat), or goes silent, or goes "meh, D is still not
 good"
 Me and other people from D community: "ok... now we have a new
 half-baked functionality in D, adding complexity for little value, and
 put here only to please people that are extremely unlikely to ever be
 using D whatever any case"...
I find this assessment inaccurate. In my own experience, I have come to see Walter as Dr. No (in a good sense!) in that he has said no to a great many feature requests over the years. The instances where a feature was implemented that took the community by surprise have been rare indeed. And even then, we are not privy to the support requests and other discussions that Walter has with the businesses using D. I'm confident that what goes on in his head when deciding to pursue a change or enhancement has little to do with willy-nilly complaints by C++ users.
Dr. No for the D community. If someone from the D community said "D won't succeed without X", or "D can't be made to work without X", that wouldn't have much clout with Walter (unless that someone is behind a company using D commercially, or considering it). But if people from the C++ community said it, OMG, then Walter goes "let's add it to D!", just to prove a point or something.

*Mind you*: all this I'm saying is pre-TDPL-book stuff. After the book was out, things stabilized. But way back, even more so before D2, it would happen quite often. Again, apologies for no references or examples, but this is all stuff from 4-7 years ago so it's hard to remember exact cases.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Sep 17 2015
next sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 17/09/2015 12:57, Bruno Medeiros wrote:
 But if people from the C++ community said it, OMG, then Walter goes
 "let's add it to D!", just to prove a point or something. *Mind you*:
 all this I'm saying is pre TDPL book stuff. After the book was out,
 things stabilized. But way back, even more so before D2, it would happen
 quite often. Again apologies for no references or examples, but this is
 all stuff from 4-7 years ago so it's hard to remember exact cases.
I do remember, though, that the turning point for this was when Andrei joined the D team.

Before that it was more or less like this: Walter was the master compiler writer, wohoo, and if someone challenged him to add a feature, it went in. In most cases maybe Walter wasn't challenged directly, but someone in the C++ community would say "Ha, wouldn't it be great if C++ had X!", and then if D didn't have X already, it would get added, so Walter would go "Oh, you know what, D has X!!" Little consideration was given to whether X was worthwhile or not in the big picture.

After Andrei came on board, things improved and became more like: Andrei: "Hold on, first let's see if the use case for X is actually a real-world need or not. If it is, let's see if it is not satisfactory to use existing D language features to solve that use case. And if it's not, only then we'll consider adding it. But even so, let's try to add X in a more generic way, so that it is easier to implement and/or so that it could solve other use cases as well."

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Sep 17 2015
parent Kagamin <spam here.lot> writes:
Template metaprogramming is probably the only notable feature 
borrowed from C++ (more like a redesign?), the rest looks more 
like borrowed from Java. This actually turns C++ programmers away 
when they see so many things are done differently from C++.
Sep 17 2015
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 17 September 2015 at 11:57:29 UTC, Bruno Medeiros 
wrote:
 *Mind you*: all this I'm saying is pre TDPL book stuff. After 
 the book was out, things stabilized.
Can I speak for the people who only became familiar with D after TDPL and say I don't really care about what you're talking about?
Sep 17 2015
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Saturday, 29 August 2015 at 14:44:01 UTC, Casual D user wrote:
 D is advertised as a system's language, but most of the 
 built-in language features require the GC so you might as well 
 just use C if you can't use the GC.
Are you sure about C? https://news.ycombinator.com/item?id=10139423
Sep 01 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, 1 September 2015 at 16:05:43 UTC, Kagamin wrote:
 On Saturday, 29 August 2015 at 14:44:01 UTC, Casual D user 
 wrote:
 D is advertised as a system's language, but most of the 
 built-in language features require the GC so you might as well 
 just use C if you can't use the GC.
Are you sure about C? https://news.ycombinator.com/item?id=10139423
It's been mentioned before that there really isn't much point in using C when you can use D. Even if you completely avoid the GC and the standard library, you're _still_ ahead of where you'd be with C, and you can call C functions trivially. So, you can definitely use D as a better C; you just lose out on a lot of cool stuff that D has to offer beyond that. But D has a lot to offer over C even without using any of that stuff. One of the first projects I used D for was back in college a number of years ago where I got sick of some of the issues I was having with C++ and went with D because it gave me stuff like array bounds checking. I was using very few of D's features (heck, D2 was quite young at that point, and I don't think that ranges had been introduced to Phobos yet at that point, so the standard library was seriously lacking anyway), but it was still easier to use D. - Jonathan M Davis
Sep 01 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 1 September 2015 at 17:14:44 UTC, Jonathan M Davis 
wrote:
 One of the first projects I used D for was back in college a 
 number of years ago where I got sick of some of the issues I 
 was having with C++ and went with D because it gave me stuff 
 like array bounds checking. I was using very few of D's
http://en.cppreference.com/w/cpp/container/array/at http://en.cppreference.com/w/cpp/container/vector/at https://github.com/google/sanitizers
Sep 01 2015
prev sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Tuesday, 1 September 2015 at 17:14:44 UTC, Jonathan M Davis 
wrote:
 It's been mentioned before that there really isn't much point 
 in using C when you can use D. Even if you completely avoid the 
 GC and the standard library, you're _still_ ahead of where 
 you'd be with C, and you can call C functions trivially. So, 
 you can definitely use D as a better C; you just lose out on a 
 lot of cool stuff that D has to offer beyond that. But D has a 
 lot to offer over C even without using any of that stuff.

 One of the first projects I used D for was back in college a 
 number of years ago where I got sick of some of the issues I 
 was having with C++ and went with D because it gave me stuff 
 like array bounds checking. I was using very few of D's 
 features (heck, D2 was quite young at that point, and I don't 
 think that ranges had been introduced to Phobos yet at that 
 point, so the standard library was seriously lacking anyway), 
 but it was still easier to use D.

 - Jonathan M Davis
worthy of a quick blogpost sometime? Laeeth.
Sep 02 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 3 September 2015 at 01:54:45 UTC, Laeeth Isharc 
wrote:
 On Tuesday, 1 September 2015 at 17:14:44 UTC, Jonathan M Davis 
 wrote:
 It's been mentioned before that there really isn't much point 
 in using C when you can use D. Even if you completely avoid 
 the GC and the standard library, you're _still_ ahead of where 
 you'd be with C, and you can call C functions trivially. So, 
 you can definitely use D as a better C; you just lose out on a 
 lot of cool stuff that D has to offer beyond that. But D has a 
 lot to offer over C even without using any of that stuff.

 One of the first projects I used D for was back in college a 
 number of years ago where I got sick of some of the issues I 
 was having with C++ and went with D because it gave me stuff 
 like array bounds checking. I was using very few of D's 
 features (heck, D2 was quite young at that point, and I don't 
 think that ranges had been introduced to Phobos yet at that 
 point, so the standard library was seriously lacking anyway), 
 but it was still easier to use D.

 - Jonathan M Davis
worthy of a quick blogpost sometime? Laeeth.
My memory would be pretty sketchy on it at this point. I remember what the project was (it had to do with randomly generating 3D fractals in opengl for a graphics course), but that was back in 2008, I think, and I couldn't really say much interesting about it beyond the fact that I was annoyed enough with C++ at the time to use D for the project. The only thing notable about it is that it was the first thing that I did in D that was actually supposed to do something rather than just messing around with the language. - Jonathan M Davis
Sep 04 2015
parent "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Friday, 4 September 2015 at 14:22:17 UTC, Jonathan M Davis 
wrote:
 On Thursday, 3 September 2015 at 01:54:45 UTC, Laeeth Isharc 
 wrote:
 On Tuesday, 1 September 2015 at 17:14:44 UTC, Jonathan M Davis 
 wrote:
 One of the first projects I used D for was back in college a 
 number of years ago where I got sick of some of the issues I 
 was having with C++ and went with D because it gave me stuff 
 like array bounds checking. I was using very few of D's 
 features (heck, D2 was quite young at that point, and I don't 
 think that ranges had been introduced to Phobos yet at that 
 point, so the standard library was seriously lacking anyway), 
 but it was still easier to use D.

 - Jonathan M Davis
worthy of a quick blogpost sometime? Laeeth.
My memory would be pretty sketchy on it at this point. I remember what the project was (it had to do with randomly generating 3D fractals in opengl for a graphics course), but that was back in 2008, I think, and I couldn't really say much interesting about it beyond the fact that I was annoyed enough with C++ at the time to use D for the project. The only thing notable about it is that it was the first thing that I did in D that was actually supposed to do something rather than just messing around with the language. - Jonathan M Davis
Tku for colour.
Sep 04 2015
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 28/08/2015 22:59, Walter Bright wrote:
 People told me I couldn't write a C compiler, then told me I couldn't
 write a C++ compiler. I'm still the only person who has ever implemented
 a complete C++ compiler (C++98). Then they all (100%) laughed at me for
 starting D, saying nobody would ever use it.

 My whole career is built on stepping over people who told me I couldn't
 do anything and wouldn't amount to anything.
So your whole career is fundamentally based not on bringing value to the software world, but rather merely on proving people wrong? That amounts to living your professional life in thrall to other people's validation, and it's not commendable at all. It's a waste of your potential.

It is only worthwhile to prove people wrong when it brings you a considerable amount of either monetary resources or clout - and more so than you would have got doing something else with your time. It's not clear to me that was always the case throughout your career... was it?

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Sep 16 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/16/2015 7:16 AM, Bruno Medeiros wrote:
 On 28/08/2015 22:59, Walter Bright wrote:
 People told me I couldn't write a C compiler, then told me I couldn't
 write a C++ compiler. I'm still the only person who has ever implemented
 a complete C++ compiler (C++98). Then they all (100%) laughed at me for
 starting D, saying nobody would ever use it.

 My whole career is built on stepping over people who told me I couldn't
 do anything and wouldn't amount to anything.
So your whole career is fundamentally based not on bringing value to the software world, but rather merely proving people wrong? That amounts to living your professional life in thrall of other people's validation, and it's not commendable at all. It's a waste of your potential. It is only worthwhile to prove people wrong when it brings you a considerable amount of either monetary resources or clout - and more so than you would have got doing something else with your time. It's not clear to me that was always the case throughout your career... was it?
Wow, such an interpretation never occurred to me. I will reiterate that I worked on things that I believed had value and nobody else did. I.e. I did not need validation from others.
Sep 16 2015
parent reply Joakim <dlang joakim.fea.st> writes:
On Wednesday, 16 September 2015 at 20:44:00 UTC, Walter Bright 
wrote:
 On 9/16/2015 7:16 AM, Bruno Medeiros wrote:
 On 28/08/2015 22:59, Walter Bright wrote:
 People told me I couldn't write a C compiler, then told me I 
 couldn't
 write a C++ compiler. I'm still the only person who has ever 
 implemented
 a complete C++ compiler (C++98). Then they all (100%) laughed 
 at me for
 starting D, saying nobody would ever use it.

 My whole career is built on stepping over people who told me 
 I couldn't
 do anything and wouldn't amount to anything.
So your whole career is fundamentally based not on bringing value to the software world, but rather merely proving people wrong? That amounts to living your professional life in thrall of other people's validation, and it's not commendable at all. It's a waste of your potential. It is only worthwhile to prove people wrong when it brings you a considerable amount of either monetary resources or clout - and more so than you would have got doing something else with your time. It's not clear to me that was always the case throughout your career... was it?
Wow, such an interpretation never occurred to me. I will reiterate that I worked on things that I believed had value and nobody else did. I.e. I did not need validation from others.
Yeah, I was a bit stunned that that is what Bruno took from your post. I don't think anybody would question that writing a C or C++ compiler in the '80s and '90s had value, and I'm sure you did pretty well off them, considering you retired at 42 (http://www.drdobbs.com/architecture-and-design/how-i-came-to-write-d/240165322). Your point is that nobody thought _you_ or you _alone_ could do these valuable things, and you repeatedly proved them wrong. Those doubting you in this thread, about improving the dmd backend so it's competitive with llvm/gcc while still having time to work on the frontend, may or may not turn out to be right, but you certainly seem to have a good track record at proving such doubters wrong.
Sep 17 2015
parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 17/09/2015 08:10, Joakim wrote:
 Yeah, I was a bit stunned that that is what Bruno took from your post.
 I don't think anybody would question that writing a C or C++ compiler in
 the '80s and '90s had value, and I'm sure you did pretty well off them,
 considering you retired at 42
 (http://www.drdobbs.com/architecture-and-design/how-i-came-to-write-d/240165322).
I didn't say that Walter's previous work didn't bring *any* value to the software world. It's not like people challenged him to write efficient lolcode or brainfuck(*) compilers, or some other silly challenge, which if he had done would have been a massive waste of time - even if it was technically a very admirable feat. (*) - Yeah, those languages weren't around at the time, but that's just an example.

My point was that one would certainly bring *more* value to the software world if that were the primary focus of one's career, instead of merely proving people wrong. That doesn't mean either one has to be an emotionless robot that never does something just for the sake of ego-boosting (which is really the only reward of proving people wrong - unless there are some monetary or other resources at stake). But Walter has so many stories of "I did this [massive project] to prove people wrong", which is what makes me wonder if there isn't too much focus on ego validation.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Sep 17 2015
parent Laeeth Isharc <Laeeth.nospam nospam-laeeth.com> writes:
On Thursday, 17 September 2015 at 11:47:36 UTC, Bruno Medeiros 
wrote:
 On 17/09/2015 08:10, Joakim wrote:
 Yeah, I was a bit stunned that that is what Bruno took from 
 your post.
 I don't think anybody would question that writing a C or C++ 
 compiler in
 the '80s and '90s had value, and I'm sure you did pretty well 
 off them,
 considering you retired at 42
 (http://www.drdobbs.com/architecture-and-design/how-i-came-to-write-d/240165322).
I didn't say that Walter's previous work didn't bring *any* value to the software world. It's not like people challenged him to write efficient lolcode or brainfuck(*) compilers, or some other silly challenge, which if he did would have a been a massive waste of time - even if it was technically a very admirable feat. (*) - Yeah those languages weren't around at the time, but that's just an example. My point was that one would certainly bring *more* value to the software world, if that is the primary focus of one's career, instead of merely proving people wrong. That doesn't mean either one has to be an emotionless robot that never does something just for the sake of ego-boosting (which is really the only reward of proving people wrong - unless there are some monetary or other resources at stake). But Walter has so many stories of "I did this [massive project] to prove people wrong." which is what makes me wonder if there isn't too much focus on ego validation.
Human beings are funny creatures, and able people like to be stretched to the limit of what's possible. Having someone tell you there is no way you can do something is a hint that it's quite a difficult problem, and yet you may correctly perceive how it may be done. A highly talented person of this sort has many ways in which, in theory, they might add the most value, but many fewer viable ways, because they find it harder than most to do what they don't want to do. (And creativity comes when you are following a path that is within you.) Cattell and Eysenck wrote about this, and lately Professor Bruce Charlton at the Iqpersonalitygenius blog.

Plus, following what moves you may be a better guide than rational optimisation, given that with the latter one is often fooling oneself, since one doesn't even understand the structure of the social calculus.

I personally find Walter's attitude quite inspiring, although I am not familiar with the pre-TDPL days and not so interested at this moment. At least you can say that he recognizes that management is difficult for him and did bring Andrei on board - not an easy thing to do, as it means giving up total control.
Sep 17 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 5:37 AM, Vladimir Panteleev wrote:
 IIRC, I have had three releases affected by optimization/inlining DMD bugs (two
 of Digger and one of RABCDAsm). These do not speak well for D when end-users
ask
 me what the cause of the bug is, and I have to say "Yeah, it's a bug in the
 official D compiler".
Are they filed in bugzilla?
Aug 18 2015
parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 19:02:20 UTC, Walter Bright wrote:
 On 8/18/2015 5:37 AM, Vladimir Panteleev wrote:
 IIRC, I have had three releases affected by 
 optimization/inlining DMD bugs (two
 of Digger and one of RABCDAsm). These do not speak well for D 
 when end-users ask
 me what the cause of the bug is, and I have to say "Yeah, it's 
 a bug in the
 official D compiler".
Are they filed in bugzilla?
Yep, just search for wrong-code regressions. The specific bugs in question have been fixed, but that doesn't change the general problem.
Aug 18 2015
next sibling parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 20:24:31 UTC, Vladimir Panteleev 
wrote:
 On Tuesday, 18 August 2015 at 19:02:20 UTC, Walter Bright wrote:
 On 8/18/2015 5:37 AM, Vladimir Panteleev wrote:
 IIRC, I have had three releases affected by 
 optimization/inlining DMD bugs (two
 of Digger and one of RABCDAsm). These do not speak well for D 
 when end-users ask
 me what the cause of the bug is, and I have to say "Yeah, 
 it's a bug in the
 official D compiler".
Are they filed in bugzilla?
Yep, just search for wrong-code regressions. The specific bugs in question have been fixed, but that doesn't change the general problem.
I would like to add that fixing the regression does not make it go away. Even though it's fixed in git, and even after the fix ships with a new DMD release, there is still a D version out there that has the bug, and that will never change until the end of time. The consequence of this is that affected programs cannot be built with certain versions of DMD (e.g. RABCDAsm's build tool checks for the compiler bug and asks users to use another compiler version or disable optimizations). This affects users who get DMD by some other means than downloading it from dlang.org themselves, e.g. via their OS package repository (especially LTS OS release users). Fixing regressions is not enough. We need to try harder to prevent them from ending up in DMD releases at all.
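A build-time compiler gate like the one described above can be sketched crudely as a version check. Note this is an illustrative assumption, not RABCDAsm's actual mechanism (which, per the post, probes for the bug itself): the banner format matches what `dmd --version` prints, but the bad-version set here is hypothetical.

```python
import re

def parse_dmd_version(banner):
    # Pull (major, minor, patch) out of a `dmd --version` banner such as
    # "DMD64 D Compiler v2.066.1". Returns None if the banner is unrecognized.
    m = re.search(r"v(\d+)\.(\d+)\.(\d+)", banner)
    return tuple(int(g) for g in m.groups()) if m else None

# Hypothetical set of releases known to ship a wrong-code bug that
# affects this project when built with -O.
KNOWN_BAD = {(2, 66, 0)}

def check_compiler(banner):
    # Abort the build with an actionable message if the detected
    # compiler version is on the blacklist; otherwise return the version.
    v = parse_dmd_version(banner)
    if v in KNOWN_BAD:
        raise SystemExit("this dmd release has a known wrong-code bug; "
                         "please use another version or build without -O")
    return v
```

A runtime probe (compile a tiny program known to be miscompiled, run it, and compare against the expected output) is more robust than a version list, since it also catches distro-patched compilers.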
Aug 18 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:25 PM, Vladimir Panteleev wrote:
 I would like to add that fixing the regression does not make it go away. Even
 though it's fixed in git, and even after the fix ships with a new DMD release,
 there is still a D version out there that has the bug, and that will never
 change until the end of time.
Not necessarily. The reason we split off a new branch of dmd with each release is so that we can patch it if necessary.
 Fixing regressions is not enough. We need to try harder to prevent them from
 ending up in DMD releases at all.
I agree, but stopping development isn't much of a solution.
Aug 18 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 1:24 PM, Vladimir Panteleev wrote:
 The specific bugs in question have
 been fixed, but that doesn't change the general problem.
The reason we have regression tests is to make sure things that are fixed stay fixed. Codegen bugs have also always had the highest priority. Being paralyzed by fear of introducing new bugs is not a way forward with any project.

(Switching to ddmd, and eventually putting the back end in D, will also help with this. DMC++ is always built with any changes and tested to exactly duplicate itself, and that filters out a lot of problems. Unfortunately, DMC++ is a 32 bit program and doesn't exercise the 64 bit code gen. Again, ddmd will fix that.)
Aug 18 2015
parent "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Tuesday, 18 August 2015 at 21:31:17 UTC, Walter Bright wrote:
 On 8/18/2015 1:24 PM, Vladimir Panteleev wrote:
 The specific bugs in question have
 been fixed, but that doesn't change the general problem.
The reason we have regression tests is to make sure things that are fixed stay fixed. Codegen bugs also always had the highest priority.
It doesn't matter. Regression tests protect against the same bugs reappearing, not new bugs. I'm talking about the general pattern: optimization PR? Regression a few months later.
 Being paralyzed by fear of introducing new bugs is not a way 
 forward with any project.
When the risk outweighs the gain, what's the point of moving forward?
 (Switching to ddmd, and eventually put the back end in D, will 
 also help with this. DMC++ is always built with any changes and 
 tested to exactly duplicate itself, and that filters out a lot 
 of problems. Unfortunately, DMC++ is a 32 bit program and 
 doesn't exercise the 64 bit code gen. Again, ddmd will fix 
 that.)
I don't see how switching to D is going to magically reduce the number of regressions.
Aug 18 2015
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 18-Aug-2015 15:37, Vladimir Panteleev wrote:
 I think stability of the DMD backend is a goal of much higher value than
 the performance of the code it emits. DMD is never going to match the
 code generation quality of LLVM and GCC, which have had many, many
 man-years invested in them. Working on DMD optimizations is essentially
 duplicating this work, and IMHO I think it's not only a waste of time,
 but harmful to D because of the risk of regressions.
How about stress-testing with some simple fuzzer:

1. Generate a sequence of plausible expressions/functions.
2. Spit out results via printf.
3. Permute -O -inline and compare the outputs.

-- 
Dmitry Olshansky
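A minimal sketch of such a differential fuzzer, written here in Python for brevity (the expression generator, the generated D wrapper, and the flag combinations are illustrative assumptions, not an existing tool):

```python
import random

# Integer operators that behave identically regardless of optimization
# level, so any output mismatch points at a codegen bug.
OPS = ["+", "-", "*", "&", "|", "^"]

def gen_expr(depth, rng):
    # Step 1: recursively build a plausible parenthesized integer expression.
    if depth == 0 or rng.random() < 0.3:
        return str(rng.randint(0, 100))
    return "(%s %s %s)" % (gen_expr(depth - 1, rng),
                           rng.choice(OPS),
                           gen_expr(depth - 1, rng))

def make_program(exprs):
    # Step 2: wrap the expressions in a D main() that prints each result
    # via printf, so the outputs of different builds can be diffed directly.
    body = "\n".join('    printf("%%d\\n", %s);' % e for e in exprs)
    return "import core.stdc.stdio;\n\nvoid main()\n{\n%s\n}\n" % body

rng = random.Random(1234)
src = make_program([gen_expr(4, rng) for _ in range(5)])
# Step 3 (driver not shown): write src to fuzzcase.d, compile it once per
# flag set -- e.g. "dmd fuzzcase.d" vs. "dmd -O -inline fuzzcase.d" --
# run both binaries, and diff their stdout; any mismatch is a wrong-code bug.
```

Tools like csmith use the same compile-run-compare loop, just with a far more sophisticated generator that avoids undefined behavior in the generated programs.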
Aug 19 2015
parent reply "qznc" <qznc web.de> writes:
On Wednesday, 19 August 2015 at 11:22:09 UTC, Dmitry Olshansky 
wrote:
 On 18-Aug-2015 15:37, Vladimir Panteleev wrote:
 I think stability of the DMD backend is a goal of much higher 
 value than
 the performance of the code it emits. DMD is never going to 
 match the
 code generation quality of LLVM and GCC, which have had many, 
 many
 man-years invested in them. Working on DMD optimizations is 
 essentially
 duplicating this work, and IMHO I think it's not only a waste 
 of time,
 but harmful to D because of the risk of regressions.
How about stress-testing with some simple fuzzer: 1. Generate a sequence of plausible expressions/functions. 2. Spit out results via printf. 3. Permute -O -inline and compare the outputs.
Tools like csmith [0] are surprisingly good at finding ICEs, but useless for performance regressions. A "dsmith" would probably find lots of bugs in the dmd backend. [0] https://embed.cs.utah.edu/csmith/
Aug 19 2015
parent "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 19 August 2015 at 20:54:39 UTC, qznc wrote:
 On Wednesday, 19 August 2015 at 11:22:09 UTC, Dmitry Olshansky 
 wrote:
 On 18-Aug-2015 15:37, Vladimir Panteleev wrote:
 [...]
How about stress-testing with some simple fuzzer: 1. Generate a sequence of plausible expressions/functions. 2. Spit out results via printf. 3. Permute -O -inline and compare the outputs.
Tools like csmith [0] are surprisingly good at finding ICEs, but useless for performance regressions. A "dsmith" would probably find lots of bugs in the dmd backend. [0] https://embed.cs.utah.edu/csmith/
fwiw, llvm/clang uses their own in-library fuzzer. http://blog.llvm.org/2015/04/fuzz-all-clangs.html
Aug 19 2015
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and 
 function inlining

 2. instruction selection patterns like should one generate:

     SETC AL
     MOVZ EAX,AL

 or:
     SBB EAX
     NEG EAX

 3. data flow analysis optimizations like constant propagation, 
 dead code elimination, register allocation, loop invariants, 
 etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.

 For example, recently I improved the usage of the SETcc 
 instructions.

 https://github.com/D-Programming-Language/dmd/pull/4901
 https://github.com/D-Programming-Language/dmd/pull/4904

 A while back I improved usage of BT instructions, the way 
 switch statements were implemented, and fixed integer divide by 
 a constant with multiply by its reciprocal.
I've often looked at the assembly output of ICC. One thing that was striking to me is that, by and large, it doesn't use PUSH, POP, or SETcc. Actually, I don't remember such an instruction ever being emitted by it. And indeed, using PUSH/POP/SETcc in assembly was often slower than the alternative - which is _way_ different from the old x86, where each of these things would gain speed. Instead of PUSH/POP it would spill registers to RBP-based locations (perhaps taking advantage of the register renamer?).

---------------

That said: I entirely agree with Vladimir about the codegen risk. DMD will always be used anyway because it compiles faster.
Aug 18 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 6:01 AM, ponce wrote:
 One thing that was striking to me is that it by and large it doesn't use PUSH,
 POP, and SETcc. Actually I don't remember such an instruction being emitted by
it.

 And indeed using PUSH/POP/SETcc in assembly were often slower than the
 alternative. Which is _way_ different that the old x86 where each of these
 things would gain speed.
The 32 bit code generator does a lot of push/pop, but the 64 bit one does far less because function parameters are passed in registers most of the time.
Aug 18 2015
prev sibling next sibling parent "Ivan Kazmenko" <gassa mail.ru> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 ...
 3. data flow analysis optimizations like constant propagation, 
 dead code elimination, register allocation, loop invariants, 
 etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.
 ...
I once tried to trace the cause of a slowdown in a simple program and reduced it to https://issues.dlang.org/show_bug.cgi?id=11821; I think it falls under point 3 in your post (a redundant instruction in a simple loop). Although 1.5 years have passed, the issue still stands with 2.068.0. That's for -m32. The -m64 version of the loop does not look to me like it has a redundant instruction, but it is still longer than the output of GDC or LDC.
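A hypothetical reduction in the spirit of such reports (this is not the actual code from issue 11821, just the shape of loop where a redundant per-iteration instruction shows up):

```d
// A tight accumulation loop: the kind of code where one backend may keep
// an extra register copy or reload per iteration while others do not.
int sum(const(int)[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;
    return s;
}

void main()
{
    assert(sum([1, 2, 3, 4]) == 10);
}
```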
Aug 18 2015
prev sibling next sibling parent Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18/08/15 12:45, Walter Bright via Digitalmars-d wrote:
 Martin ran some benchmarks recently that showed that ddmd compiled with dmd was
 about 30% slower than when compiled with gdc/ldc. This seems to be fairly
typical.

 I'm interested in ways to reduce that gap.
All things considered, looking at the issues involved in the transition, and the benefits of being able to refactor in D, I can't see that it makes sense to worry about such things at this stage. Surely the easiest way to handle the speed gap, in the short term, is to make it easy to use gdc/ldc as the D compiler to build ddmd, and to do exactly that when building dmd binaries for distribution. Admittedly my answer may be Linux-centric (I guess gdc/ldc builds of ddmd are not feasible for Windows), but to be honest, I don't think a 30% slowdown is _that_ terrible a thing to have to deal with, in the short term, to manage such an important transition. Stability at this stage seems _much_ more important.
Aug 18 2015
prev sibling next sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.
retire dmd? this is ridiculous.
Aug 18 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 18 August 2015 at 21:18:34 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.
retire dmd? this is ridiculous.
To further expand upon this: if you want to make D fast, fix the interface between the compiler and the runtime (including the inability of compilers to inline simple things like allocations, which gives allocations massive overhead). Then fix the GC. Make the GC both shared- and immutable-aware; then moving the GC to a thread-local "island"-style GC would be fairly easy. D's GC is probably the slowest GC of any major language available, and the entire thing is wrapped in mutexes. D has far, far bigger performance problems than dmd's backend. Maybe you should take a look at what Go has recently done with their GC to get an idea of what D's competition has been up to. https://talks.golang.org/2015/go-gc.pdf
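A sketch of the allocation-inlining complaint (names are illustrative): every `new` lowers to an opaque call into druntime, so the compiler can neither inline it nor hoist or elide the per-iteration allocation.

```d
// Sketch of the overhead: each `new` below is an opaque cross-module call
// into druntime's GC allocator, invisible to the backend's optimizer.
class Node
{
    int value;
}

Node[] build(int n)
{
    auto nodes = new Node[](n);     // one runtime call for the array
    foreach (i; 0 .. n)
        nodes[i] = new Node;        // one opaque runtime call per iteration
    return nodes;
}

void main()
{
    auto nodes = build(3);
    assert(nodes.length == 3);
    assert(nodes[0] !is null);
}
```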
Aug 18 2015
next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Tuesday, 18 August 2015 at 21:26:43 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 21:18:34 UTC, rsw0x wrote:
 D has far, far bigger performance problems that dmd's backend.
However true that may be in general, those almost certainly aren't the reasons why ddmd benchmarks 30% slower than dmd. I would suspect that particular speed difference is heavily backend-dependent.
Aug 18 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:32 PM, Joseph Rushton Wakeling wrote:
 However true that may be in general, those almost certainly aren't the reasons
 why ddmd benchmarks 30% slower than dmd.  I would suspect that particular speed
 difference is heavily backend-dependent.
That's exactly why I started this thread. To find out why, and if there are low cost steps we can take to close that gap.
Aug 18 2015
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Tuesday, 18 August 2015 at 21:58:58 UTC, Walter Bright wrote:
 On 8/18/2015 2:32 PM, Joseph Rushton Wakeling wrote:
 However true that may be in general, those almost certainly 
 aren't the reasons
 why ddmd benchmarks 30% slower than dmd.  I would suspect that 
 particular speed
 difference is heavily backend-dependent.
That's exactly why I started this thread. To find out why, and if there are low cost steps we can take to close that gap.
Well, yes, quite. I was backing up your rationale, even if I disagree with your prioritizing these concerns at this stage of the dmd => ddmd transition.
Aug 18 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 3:07 PM, Joseph Rushton Wakeling wrote:
 I was backing up your rationale, even if I disagree with your
 prioritizing these concerns at this stage of the dmd => ddmd transition.
I want to move to ddmd right now, and I mean right now. But it's stalled, awaiting Daniel and Martin. https://github.com/D-Programming-Language/dmd/pull/4884 I thought I'd investigate back end issues while waiting, as Martin is very concerned about the ddmd compile speed.
Aug 18 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/18/2015 2:26 PM, rsw0x wrote:
 if you want to make D fast - Fix the interface between the compiler and the
 runtime(including the inability for compilers to inline simple things like
 allocations which makes allocations have massive overheads.) Then, fix the GC.
 Make the GC both shared and immutable aware, then moving the GC to a thread
 local "island"-style GC would be fairly easy.
The fundamental issue of island GCs is what to do with casting of data from one island to another.
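A minimal sketch of the cast in question (illustrative only): with per-thread islands, the allocation below would belong to the allocating thread's heap, and the qualifier cast lets the reference escape without any hook the GC could use to migrate the memory.

```d
// The operation that breaks island GCs: a plain cast moves a reference
// from an (implicitly thread-local) allocation to shared, and no runtime
// hook runs that could move the data between islands.
void main()
{
    int[] local = new int[](4);               // would live on this thread's island
    auto escaped = cast(shared(int)[]) local; // now reachable across islands
    assert(escaped.length == 4);
}
```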
 Maybe you should take a look at what Go has recently done with their GC to get
 an idea of what D's competition has been up to.
 https://talks.golang.org/2015/go-gc.pdf
"you"? There's a whole community here, we're all in this together. Pull requests are welcome.
Aug 18 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 18 August 2015 at 21:36:39 UTC, Walter Bright wrote:
 On 8/18/2015 2:26 PM, rsw0x wrote:
 if you want to make D fast - Fix the interface between the 
 compiler and the
 runtime(including the inability for compilers to inline simple 
 things like
 allocations which makes allocations have massive overheads.) 
 Then, fix the GC.
 Make the GC both shared and immutable aware, then moving the 
 GC to a thread
 local "island"-style GC would be fairly easy.
The fundamental issue of island GCs is what to do with casting of data from one island to another.
If you want D to have a GC, you have to design the language around having a GC. Right now, D could be likened to using C++ with Boehm. Something needs to be done with shared to fix this problem, but everything I could suggest would probably be deemed too big a change (e.g., making casting to/from shared undefined, and putting methods in the GC API to explicitly move memory between heaps)
 Maybe you should take a look at what Go has recently done with 
 their GC to get
 an idea of what D's competition has been up to.
 https://talks.golang.org/2015/go-gc.pdf
"you"? There's a whole community here, we're all in this together. Pull requests are welcome.
How many people here do you think know the intricacies of dmd as well as you do? At most, a handful and I'm certainly not one of those people.
Aug 18 2015
parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 18 August 2015 at 21:45:42 UTC, rsw0x wrote:
 If you want D to have a GC, you have to design the language 
 around having a GC. Right now, D could be likened to using C++ 
 with Boehm.
The irony is that most GC-related complaints are the exact opposite - that the language depends too much on the GC.
Aug 18 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 18 August 2015 at 21:53:43 UTC, Meta wrote:
 On Tuesday, 18 August 2015 at 21:45:42 UTC, rsw0x wrote:
 If you want D to have a GC, you have to design the language 
 around having a GC. Right now, D could be likened to using C++ 
 with Boehm.
The irony is that most GC-related complaints are the exact opposite - that the language depends too much on the GC.
Phobos relies on the GC, but the language itself is not designed around being GC friendly.
Aug 18 2015
parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 18 August 2015 at 22:01:16 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 21:53:43 UTC, Meta wrote:
 On Tuesday, 18 August 2015 at 21:45:42 UTC, rsw0x wrote:
 If you want D to have a GC, you have to design the language 
 around having a GC. Right now, D could be likened to using 
 C++ with Boehm.
The irony is that most GC-related complaints are the exact opposite - that the language depends too much on the GC.
Phobos relies on the GC, but the language itself is not designed around being GC friendly.
There are array literals, delegates, associative arrays, pointers, new, delete, and classes, all of which depend on the GC and are part of the language.
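Each of the features listed allocates through the GC today; a short sketch:

```d
// Every line below is core language, and every line allocates on the GC heap.
void main()
{
    int[] lit = [1, 2, 3];              // array literal
    lit ~= 4;                           // array append
    int[string] aa = ["one": 1];        // associative array
    auto obj = new Object;              // class instance via new
    int captured = 42;
    int delegate() dg = () => captured; // closure capturing a stack variable
    assert(lit.length == 4);
    assert(aa["one"] == 1);
    assert(dg() == 42);
}
```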
Aug 18 2015
parent "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 19 August 2015 at 01:52:56 UTC, Meta wrote:
 On Tuesday, 18 August 2015 at 22:01:16 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 21:53:43 UTC, Meta wrote:
 On Tuesday, 18 August 2015 at 21:45:42 UTC, rsw0x wrote:
 If you want D to have a GC, you have to design the language 
 around having a GC. Right now, D could be likened to using 
 C++ with Boehm.
The irony is that most GC-related complaints are the exact opposite - that the language depends too much on the GC.
Phobos relies on the GC, but the language itself is not designed around being GC friendly.
There are array literals, delegates, associative arrays, pointers, new, delete, and classes, all of which depend on the GC and are part of the language.
That doesn't make it GC friendly, that makes it GC reliant.
Aug 18 2015
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
Martin ran some benchmarks recently that showed that ddmd compiled
with dmd was about 30% slower than when compiled with gdc/ldc. This
seems to be fairly typical.
[...] This matches my experience of dmd vs. gdc as well. No surprise there.
I'm interested in ways to reduce that gap.
[...] Replace the backend with GDC or LLVM? :-P T -- Prosperity breeds contempt, and poverty breeds consent. -- Suck.com
Aug 18 2015
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 00:34, H. S. Teoh via Digitalmars-d wrote:
 On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd compiled
 with dmd was about 30% slower than when compiled with gdc/ldc. This
 seems to be fairly typical.
[...] This matches my experience of dmd vs. gdc as well. No surprise there.
 I'm interested in ways to reduce that gap.
[...] Replace the backend with GDC or LLVM? :-P
Oh come on - LLVM was an inferior backend for some time. So what? Let us not work on it 'cause GCC is faster? On the contrary, it turns out C++ plus a better intermediate-representation foundation is a big win that allows closing the gap. Also, DMD's backend strives to stay fast _and_ generate fine machine code. Getting within 10% of GCC/LLVM while staying fast is IMHO both possible and worth doing. Lastly, a backend written in D may take advantage of D's features to do in 5x fewer LOCs what others do in C. And there are plenty of research papers on optimization floating around, already implemented in GCC/LLVM/MSVC, so most of the R&D cost is paid by other backends/researchers. -- Dmitry Olshansky
Aug 19 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 08:22:58 UTC, Dmitry Olshansky 
wrote:
 Also DMD's backend strives to stay fast _and_ generate fine 
 machine code. Getting within 10% of GCC/LLVM and being fast is 
 IMHO both possible and should be done.
But if iOS/OS X and others are essentially requiring an LLVM-like IR as the object code format, then it makes the most sense to have LLVM as the default backend. If WebAsm et al. focus on mimicking LLVM, then D's backend has to do the same. And that is not unlikely, given that PNaCl is LLVM-based. Intel is also supportive of LLVM… Replicating a scalar SSA like LLVM's does not make a lot of sense. What would make a lot of sense would be to start work on an experimental SIMD SSA implemented in D that could leverage the benefits of next-gen x86 SIMD, and make Phobos target it. That could attract new people to D and make D beat LLVM. You could even combine LLVM and your own SIMD backend (run both, then profile and pick the best code in production on a function-by-function basis). Or a high-level compile-time-oriented IR for D that can boost template semantics and compilation speed.
 And there is plenty of research papers on optimization floating 
 around and implemented in GCC/LLVM/MSVC so most of R&D cost is 
 payed by other backends/researchers.
I think you underestimate the amount of experimental work that has gone into those backends, work that ends up being trashed. It's not like you have to implement what LLVM has now. You have to implement what LLVM has and a lot of the stuff they have thrown out.
Aug 19 2015
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 12:26, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 And there is plenty of research papers on optimization floating around
 and implemented in GCC/LLVM/MSVC so most of R&D cost is payed by other
 backends/researchers.
I think you underestimate the amount of experimental work that has gone into those backends, work that ends up being trashed. It's not like you have to implement what LLVM has now. You have to implement what LLVM has and a lot of the stuff they have thrown out.
I do not. What I discount is the benefit of the tons of subtle passes that each buy 0.1-0.2% in some cases. There are lots and lots of these in GCC/LLVM. If generating the very best code out there is not the goal, we can safely omit most of them, focusing on the most critical bits. -- Dmitry Olshansky
Aug 19 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 09:29:31 UTC, Dmitry Olshansky 
wrote:
 I do not. I underestime the benefits of tons of subtle passes 
 that play into 0.1-0.2% in some cases. There are lots and lots 
 of this in GCC/LLVM. If having the best code generated out 
 there is not the goal we can safely omit most of these focusing 
 on the most critical bits.
Well, you can start on this now, but by the time it is ready and hardened, LLVM might have received improved AVX2 and AVX-512 code gen from Intel. Which basically will leave DMD in the dust.
Aug 19 2015
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 12:46, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 19 August 2015 at 09:29:31 UTC, Dmitry Olshansky wrote:
 I do not. I underestime the benefits of tons of subtle passes that
 play into 0.1-0.2% in some cases. There are lots and lots of this in
 GCC/LLVM. If having the best code generated out there is not the goal
 we can safely omit most of these focusing on the most critical bits.
Well, you can start on this now, but by the time it is ready and hardened, LLVM might have received improved AVX2 and AVX-512 code gen from Intel. Which basically will leave DMD in the dust.
On numerics, video codecs and the like. It's not as if compilers solely depend on AVX. -- Dmitry Olshansky
Aug 19 2015
next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 19 August 2015 at 09:55:19 UTC, Dmitry Olshansky 
wrote:
 On 19-Aug-2015 12:46, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 19 August 2015 at 09:29:31 UTC, Dmitry Olshansky 
 wrote:
 I do not. I underestime the benefits of tons of subtle passes 
 that
 play into 0.1-0.2% in some cases. There are lots and lots of 
 this in
 GCC/LLVM. If having the best code generated out there is not 
 the goal
 we can safely omit most of these focusing on the most 
 critical bits.
Well, you can start on this now, but by the time it is ready and hardened, LLVM might have received improved AVX2 and AVX-512 code gen from Intel. Which basically will leave DMD in the dust.
On numerics, video-codecs and the like. Not like compilers solely depend on AVX.
Even in video codecs, AVX2 is not that useful and barely brings a 10% improvement over SSE, while requiring extra care with the SSE-AVX transition penalty. And to reap this benefit you would have to write intrinsics/assembly. For AVX-512 I can't even imagine what to use such large registers for. Larger registers => more spilling because of calling conventions, and more fiddling around with complicated shuffle instructions. There are steeply diminishing returns with increasing register size.
Aug 19 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 10:08:48 UTC, ponce wrote:
 Even in video codec, AVX2 is not that useful and barely brings 
 a 10% improvements over SSE, while being extra careful with 
 SSE-AVX transition penalty. And to reap this benefit you would 
 have to write in intrinsics/assembly.
Masked AVX instructions turn masked-off lanes into no-ops, so you can remove conditionals from inner loops. The performance of new instructions tends to improve generation by generation.
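The kind of inner-loop conditional this refers to can be sketched in D (illustrative example): a per-element select that a vectorizer can lower to a compare plus a masked or blended move instead of a branch.

```d
// A branchy-looking clamp that is trivially expressible as a
// compare + masked select, removing the conditional from the loop body.
void clampNonNegative(float[] a)
{
    foreach (ref x; a)
        x = x > 0 ? x : 0;
}

void main()
{
    float[] v = [1.5f, -2.0f, 3.0f, -0.5f];
    clampNonNegative(v);
    assert(v == [1.5f, 0.0f, 3.0f, 0.0f]);
}
```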
 For AVX-512 I can't even imagine what to use such large 
 register for. Larger registers => more spilling because of 
 calling conventions, and more fiddling around with complicated 
 shuffle instructions. There is a steep diminishing returns with 
 increasing registers size.
You have to plan your data layout. Which is why libraries should target it, so end users don't have to think too much about it. If your computations are trivial, then you are essentially memory I/O limited. SOA processing isn't really limited by shuffling. Stuff like mapping a pure function over a collection of arrays.
Aug 19 2015
parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 19 August 2015 at 10:16:18 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 19 August 2015 at 10:08:48 UTC, ponce wrote:
 Even in video codec, AVX2 is not that useful and barely brings 
 a 10% improvements over SSE, while being extra careful with 
 SSE-AVX transition penalty. And to reap this benefit you would 
 have to write in intrinsics/assembly.
Masked AVX instructions are turned into NOPs. So you can remove conditionals from inner loops. Performance of new instructions tend to improve generation by generation.
Loops in video coding already have no conditionals. And for the ones that do, the conditionals were already removable with existing instructions.
 For AVX-512 I can't even imagine what to use such large 
 register for. Larger registers => more spilling because of 
 calling conventions, and more fiddling around with complicated 
 shuffle instructions. There is a steep diminishing returns 
 with increasing registers size.
You have to plan your data layout. Which is why libraries should target it, so end users don't have to think too much about it. If your computations are trivial, then you are essentially memory I/O limited. SOA processing isn't really limited by shuffling. Stuff like mapping a pure function over a collection of arrays.
I stand by what I know and have measured: precious few things are sped up by AVX-xxx. It is almost always better to invest that time optimizing somewhere else.
Aug 19 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 10:25:14 UTC, ponce wrote:
 Loops in video coding already have no conditional. And for the 
 one who have, conditionals were already removeable with 
 existing instructions.
I think you are side-stepping the issue. Most people don't write video codecs. Most people also don't want to hand-optimize their inner loops. The typical and most likely scenario is running some easy-to-read-but-suboptimal function over a dataset. You need both library and compiler support for that to work out. But even then: a 10% difference in CPU benchmarks is a disaster.
 I stand by what I know and measured: previously few things are 
 speed up by AVX-xxx. It almost always better investing this 
 time to optimize somewhere else.
AVX-512 is too far in the future, but if you are going to write a backend you have to think about increasing register sizes. A register-size increase does not mean that throughput increases in the generation where it was introduced (a wide instruction could translate into several micro-ops). But if you start redesigning your back end now, then maybe you have something good in 5 years, so you need to plan ahead: not thinking about the current generation, but 1-3 generations ahead. Keep in mind that clock speeds are unlikely to increase, but stacking memory on top of the CPU and improved memory bus speeds are quite likely. A good use for the DMD backend would be to improve and redesign it for compile-time evaluation, then use LLVM for codegen.
Aug 19 2015
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 09:55:19 UTC, Dmitry Olshansky 
wrote:
 On 19-Aug-2015 12:46, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Well, you can start on this now, but by the time it is ready 
 and
 hardened, LLVM might have received improved AVX2 and AVX-512 
 code gen
 from Intel. Which basically will leave DMD in the dust.
On numerics, video-codecs and the like. Not like compilers solely depend on AVX.
Compilers are mostly scalar code, but they are also just one benchmark that compilers are evaluated on. DMD could use multiple backends, use its own performance estimator (run on the generated code), and pick the best output from each backend. D could leverage increased register sizes for parameter passing between non-C-callable functions. Just that alone could be beneficial. Clearly, having 256/512-bit-wide registers matters. And you need to coordinate how the packing is done so you don't have to shuffle. There are lots of options in there, but you need to be different from LLVM. You can't just take an old SSA and improve on it. Another option is to take the C++-to-D converter used for building DDMD and see if it can be extended to work on LLVM.
Aug 19 2015
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19-Aug-2015 13:09, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 19 August 2015 at 09:55:19 UTC, Dmitry Olshansky wrote:
 On 19-Aug-2015 12:46, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Well, you can start on this now, but by the time it is ready and
 hardened, LLVM might have received improved AVX2 and AVX-512 code gen
 from Intel. Which basically will leave DMD in the dust.
On numerics, video-codecs and the like. Not like compilers solely depend on AVX.
Compilers are often written for scalars, but they are also just one benchmark that compilers are evaluated by. DMD could use multiple backends, use it's own performance-estimator (ran on generated code) and pick the best output from each backend.
This meets what goal? As I said, it's apparent that folks like DMD for its fast compile times, not for inhumanly good codegen.
 D could leverage increased register sizes for parameter transfer between
 non-C callable functions. Just that alone could be beneficial. Clearly
 having 256/512 bit wide registers matters.
Loading/unloading via shuffling, or round-tripping through the stack, is going to murder that, though.
 And you need to coordinate
 how the packing is done so you don't have to shuffle.
Given how flexible the current data types are, I can hardly see it implemented in a sane way, not to mention the benefits could be rather slim. Lastly - why haven't the "omnipotent" (by this thread's account) LLVM/GCC folks implemented it yet?
 Lots of options in there, but you need to be different from LLVM. You
 can't just take an old SSA and improve on it.
For a slight gain? Again, the goal of maximizing the gains of vector ops is hardly interesting IMO. -- Dmitry Olshansky
Aug 19 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 10:33:40 UTC, Dmitry Olshansky 
wrote:
 Given how flexible the current data types are, I can hardly see it 
 implemented in a sane way, not to mention the benefits could 
 be rather slim. Lastly - why haven't the "omnipotent" (by this 
 thread's account) LLVM/GCC folks implemented it yet?
They are stuck on C semantics, and so are their optimizers. But LLVM has other calling conventions for Haskell and other languages. I believe Pony is going to use register passing internally and the C ABI externally, using LLVM.
 To slightly gain? Again the goal of maximizing the gains of 
 vectors ops is hardly interesting IMO.
Well… I can't argue with what you find interesting. Memory throughput and pipeline bubbles are the key bottlenecks these days. But I agree that the key point should be compilation speed / debugging. In terms of PR, it would be better to say that DMD makes debug builds than to say it has a subpar optimizer.
Aug 19 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 10:50:24 UTC, Ola Fosheim Grøstad 
wrote:
 Well… I can't argue with what you find interesting. Memory 
 throughput and pipeline bubbles are the key bottlenecks these 
 days.
And just to stress this point: if your code spends 50% of its time waiting for memory and is 25% slower than the competitor overall, then its compute portion might actually be 50% slower than the competitor's, and that is what counts for code that is memory-optimal. So it's not like you just have to make your code a little bit faster; you have to make it twice as fast. The only way to get past that is a very intelligent optimizer that can remove memory bottlenecks, and then you need the much more advanced cache/SIMD-oriented optimizer, and probably also changes to the language semantics so that memory layout can be reordered.
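The arithmetic behind that claim can be made explicit (assumed numbers: identical memory behaviour, with the competitor splitting its time evenly between memory waits and compute):

```d
// If the competitor spends 0.5 waiting on memory and 0.5 computing, and
// your code has the same memory behaviour but is 25% slower overall,
// your compute half costs 0.75, i.e. it is 50% slower. Once memory waits
// are optimized away, that 50% gap is all that remains.
void main()
{
    double mem = 0.5, compute = 0.5;       // competitor's time split
    double yourTotal = 1.25;               // 25% slower overall
    double yourCompute = yourTotal - mem;  // 0.75: memory time is identical
    assert(yourCompute / compute == 1.5);  // 50% slower on the compute part
}
```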
Aug 19 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 19 August 2015 at 09:26:43 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 19 August 2015 at 08:22:58 UTC, Dmitry Olshansky 
 wrote:
 Also DMD's backend strives to stay fast _and_ generate fine 
 machine code. Getting within 10% of GCC/LLVM and being fast is 
 IMHO both possible and should be done.
But if iOS/OS-X and others are essentially requiring an LLVM-like IR as the object code format then it makes most sense to have LLVM as the default backend. If WebAsm et al is focusing on mimicing LLVM, then D's backend have to do the same. And that is not unlikely giving PNACL being LLVM based. Intel is also supportive of LLVM…
Apple is invested in LLVM. As for the other things you mention: WebAssembly is an AST representation, which is both dumb and does not look anything like LLVM IR.
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would make a lot of sense would be to start work on 
 an experimental SIMD SSA implemented in D that could leverage 
 benefits for next gen x86 SIMD and make Phobos target it. That 
 could attract new people to D and make D beat LLVM. You could 
 even combine LLVM and your own SIMD backend (run both, then 
 profile and pick the best code in production on a 
 function-by-function base)
WAT ?
 Or a high level compile-time oriented IR for D that can boost 
 templates semantics and compilation speed.
That's impossible in the state of template right now (I know I've been there and dropped it as the return on investement was too low).
Aug 19 2015
next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 19 August 2015 at 17:25:13 UTC, deadalnix wrote:
 Apple is invested in LLVM. For other thing you mention, 
 WebAssembly is an AST representation, which is both dumb and do 
 not look like anything like LLVM IR.
I saw more similarity between wasm and SPIR-V than LLVM, but it definitely seems to have some differences. I'm not sure what you mean when you say that using the AST representation is dumb. It probably wouldn't be what you would design initially, but I think part of the motivation of the design was to work within the context of the web's infrastructure.
Aug 19 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 19 August 2015 at 18:26:45 UTC, jmh530 wrote:
 On Wednesday, 19 August 2015 at 17:25:13 UTC, deadalnix wrote:
 Apple is invested in LLVM. For other thing you mention, 
 WebAssembly is an AST representation, which is both dumb and 
 do not look like anything like LLVM IR.
I saw more similarity between wasm and SPIR-V than LLVM, but it definitely seems to have some differences. I'm not sure what you mean when you say that using the AST representation is dumb. It probably wouldn't be what you would design initially, but I think part of the motivation of the design was to work within the context of the web's infrastructure.
An AST is a useful representation for extracting something usable from source code and performing semantic analysis. It is NOT a good representation for optimization and codegen. For those, SSA and/or stack machines plus a CFG are much more practical. Having wasm as an AST forces the process to go roughly as follows: source code -> AST -> SSA-CFG -> optimized SSA-CFG -> AST -> wasm -> AST -> SSA-CFG -> optimized SSA-CFG -> machine code. It adds many steps to the process for no good reason. Well, in fact there is a good reason. PNaCl is SSA-CFG, but Mozilla spent a fair amount of time explaining to us how bad and evil it is compared to the glorious asm.js they proposed. Going back to SSA would be an admission of defeat, which nobody likes to do, and wasm wants Mozilla on board, so sidestepping the whole issue by going with an AST makes sense politically. It has no engineering merit.
Aug 19 2015
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 19 August 2015 at 17:25:13 UTC, deadalnix wrote:
 Apple is invested in LLVM. For other thing you mention, 
 WebAssembly is an AST representation, which is both dumb and do 
 not look like anything like LLVM IR.
For the time being asm.js is a starting point. Nobody knows what WebAssembly will look like, but if Emscripten is anything to go by, it will most likely pay off to be in the LLVM ecosystem.
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would make a lot of sense would be to start work
 WAT ?
When simplifying over scalars you make a trade off. By having a simplifier that is optimized for keeping everything in vector units you can get better results for some code sections.
 Or a high level compile-time oriented IR for D that can boost 
 templates semantics and compilation speed.
That's impossible given the state of templates right now (I know, I've been there and dropped it as the return on investment was too low).
What is it about D template mechanics that makes JITing difficult?
Aug 21 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 21 August 2015 at 10:11:52 UTC, Ola Fosheim Grøstad 
wrote:
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would make a lot of sense would be to start work
 WAT ?
When simplifying over scalars you make a trade off. By having a simplifier that is optimized for keeping everything in vector units you can get better results for some code sections.
That still does not make any sense.
 Or a high level compile-time oriented IR for D that can boost 
 templates semantics and compilation speed.
That's impossible in the state of template right now (I know I've been there and dropped it as the return on investement was too low).
What is it about D template mechanics that make JITing difficult?
"Or a high level compile-time oriented IR for D that can boost templates semantics and compilation speed." "What is it about D template mechanics that make JITing difficult?" You are not even trying to make any sense, are you?
Aug 21 2015
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 21 August 2015 at 20:41:58 UTC, deadalnix wrote:
 On Friday, 21 August 2015 at 10:11:52 UTC, Ola Fosheim Grøstad 
 wrote:
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would make a lot of sense would be to start work
 WAT ?
When simplifying over scalars you make a trade off. By having a simplifier that is optimized for keeping everything in vector units you can get better results for some code sections.
That still do not make any sense.
 Or a high level compile-time oriented IR for D that can 
 boost templates semantics and compilation speed.
That's impossible in the state of template right now (I know I've been there and dropped it as the return on investement was too low).
What is it about D template mechanics that make JITing difficult?
"Or a high level compile-time oriented IR for D that can boost templates semantics and compilation speed." "What is it about D template mechanics that make JITing difficult?" You are not even trying to make any sense, do you ?
What is the source of your reading comprehension problems? You are deliberately trolling. I know.
Aug 21 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 21 August 2015 at 20:47:36 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 21 August 2015 at 20:41:58 UTC, deadalnix wrote:
 On Friday, 21 August 2015 at 10:11:52 UTC, Ola Fosheim Grøstad 
 wrote:
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would make a lot of sense would be to start work
 WAT ?
When simplifying over scalars you make a trade off. By having a simplifier that is optimized for keeping everything in vector units you can get better results for some code sections.
That still do not make any sense.
 Or a high level compile-time oriented IR for D that can 
 boost templates semantics and compilation speed.
That's impossible in the state of template right now (I know I've been there and dropped it as the return on investement was too low).
What is it about D template mechanics that make JITing difficult?
"Or a high level compile-time oriented IR for D that can boost templates semantics and compilation speed." "What is it about D template mechanics that make JITing difficult?" You are not even trying to make any sense, do you ?
What is the source of your reading comprehension problems? You are deliberately trolling. I know.
The answer is blue. KAMOULOX !
Aug 21 2015
parent "Ola Fosheim Grøstad" writes:
On Friday, 21 August 2015 at 21:04:08 UTC, deadalnix wrote:
 The answer is blue. KAMOULOX !
You need to change your attitude.
Aug 21 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 10:25 AM, deadalnix wrote:
 Replicating a scalar SSA like LLVM does not make a lot of sense. What would
 make a lot of sense would be to start work on an experimental SIMD SSA
 implemented in D that could leverage benefits for next gen x86 SIMD and make
 Phobos target it. That could attract new people to D and make D beat LLVM. You
 could even combine LLVM and your own SIMD backend (run both, then profile and
 pick the best code in production on a function-by-function base)
WAT ?
By synergistic disambiguation of mission critical components, optimizers can be tasked with blue sky leverage of core competencies to generate rock star empowerment to maximally utilize resources. FTFY Sorry, Ola :-)
Aug 21 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 21, 2015 at 07:17:03PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/19/2015 10:25 AM, deadalnix wrote:
Replicating a scalar SSA like LLVM does not make a lot of sense.
What would make a lot of sense would be to start work on an
experimental SIMD SSA implemented in D that could leverage benefits
for next gen x86 SIMD and make Phobos target it. That could attract
new people to D and make D beat LLVM. You could even combine LLVM
and your own SIMD backend (run both, then profile and pick the best
code in production on a function-by-function base)
WAT ?
By synergistic disambiguation of mission critical components, optimizers can be tasked with blue sky leverage of core competencies to generate rock star empowerment to maximally utilize resources.
[...] Ah, yes...: http://emptybottle.org/bullshit/index.php T -- Not all rumours are as misleading as this one.
Aug 21 2015
parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote:
 Ah, yes...: http://emptybottle.org/bullshit/index.php
It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence.
Aug 22 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote:
 Ah, yes...: http://emptybottle.org/bullshit/index.php
It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence.
http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go.
Aug 22 2015
next sibling parent "Ola Fosheim Grøstad" writes:
On Saturday, 22 August 2015 at 07:31:45 UTC, deadalnix wrote:
 On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote:
 Ah, yes...: http://emptybottle.org/bullshit/index.php
It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence.
http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go.
Not relevant.
Aug 22 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 August 2015 at 09:31, deadalnix via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim Grøstad wrote:
 On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote:

 Ah, yes...: http://emptybottle.org/bullshit/index.php
It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence.
http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go.
Also relevant: https://www.jstage.jst.go.jp/article/trol/7/3/7_147/_pdf ;-)
Aug 22 2015
prev sibling parent "Ola Fosheim Grøstad" writes:
On Saturday, 22 August 2015 at 02:17:04 UTC, Walter Bright wrote:
 On 8/19/2015 10:25 AM, deadalnix wrote:
 Replicating a scalar SSA like LLVM does not make a lot of 
 sense. What would
 make a lot of sense would be to start work on an experimental 
 SIMD SSA
 implemented in D that could leverage benefits for next gen 
 x86 SIMD and make
 Phobos target it. That could attract new people to D and make 
 D beat LLVM. You
 could even combine LLVM and your own SIMD backend (run both, 
 then profile and
 pick the best code in production on a function-by-function 
 base)
WAT ?
By synergistic disambiguation of mission critical components, optimizers can be tasked with blue sky leverage of core competencies to generate rock star empowerment to maximally utilize resources. FTFY Sorry, Ola :-)
There was nothing unclear in what I wrote. Sorry.
Aug 21 2015
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 19/08/2015 09:22, Dmitry Olshansky wrote:
 I'm interested in ways to reduce that gap.
[...] Replace the backend with GDC or LLVM? :-P
Oh come on - LLVM was an inferior backend for some time. So what? Let us not work on it 'cause GCC is faster?
I can't figure out what you meant: LLVM has an inferior backend compared to what? GCC or DMD? -- Bruno Medeiros https://twitter.com/brunodomedeiros
Aug 27 2015
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 27 August 2015 at 13:49:18 UTC, Bruno Medeiros wrote:
 On 19/08/2015 09:22, Dmitry Olshansky wrote:
 I'm interested in ways to reduce that gap.
[...] Replace the backend with GDC or LLVM? :-P
Oh come on - LLVM was an inferiour backend for some time. So what? Let us no work on it 'cause GCC is faster?
I can't figure out what you meant: LLVM has an inferiour backend to what? GCC or DMD?
I think he's saying that the argument: "Don't work on DMD because it's already far behind" could have been applied to working on LLVM when it was far behind GCC. I don't agree, but I think that's what he means.
Aug 27 2015
next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"John Colvin"  wrote in message news:qlbnpjnizwpslrdpktsw forum.dlang.org...

 I think he's saying that the argument: "Don't work on DMD because it's 
 already far behind" could have been applied to working on LLVM when it was 
 far behind GCC.  I don't agree, but I think that's what he means.
It helps that LLVM has a superior license.
Aug 27 2015
parent "Ola Fosheim Grøstad" writes:
On Friday, 28 August 2015 at 01:42:22 UTC, Daniel Murphy wrote:
 "John Colvin"  wrote in message 
 news:qlbnpjnizwpslrdpktsw forum.dlang.org...

 I think he's saying that the argument: "Don't work on DMD 
 because it's already far behind" could have been applied to 
 working on LLVM when it was far behind GCC.  I don't agree, 
 but I think that's what he means.
It helps that LLVM has a superior license.
+ LLVM started as an academic research project in compiler design, so it was never far behind conceptually...
Aug 28 2015
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 27/08/2015 17:14, John Colvin wrote:
 On Thursday, 27 August 2015 at 13:49:18 UTC, Bruno Medeiros wrote:
 On 19/08/2015 09:22, Dmitry Olshansky wrote:
 I'm interested in ways to reduce that gap.
[...] Replace the backend with GDC or LLVM? :-P
Oh come on - LLVM was an inferiour backend for some time. So what? Let us no work on it 'cause GCC is faster?
I can't figure out what you meant: LLVM has an inferiour backend to what? GCC or DMD?
I think he's saying that the argument: "Don't work on DMD because it's already far behind" could have been applied to working on LLVM when it was far behind GCC. I don't agree, but I think that's what he means.
Duh, I somehow misread it as "LLVM has an inferior backend for some time now", which is why I got confused. Yeah, it's clear he was comparing with GCC. I don't agree with his point either. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Aug 28 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 August 2015 at 21:26:43 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 21:18:34 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright 
 wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled 
 with gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.
retire dmd? this is ridiculous.
To further expand upon this, if you want to make D fast - Fix the interface between the compiler and the runtime(including the inability for compilers to inline simple things like allocations which makes allocations have massive overheads.) Then, fix the GC. Make the GC both shared and immutable aware, then moving the GC to a thread local "island"-style GC would be fairly easy. D's GC is probably the slowest GC of any major language available, and the entire thing is wrapped in mutexes.
I've been working on that for a while. It is definitely the right direction for D IMO, but it is far from being "fairly easy".
 D has far, far bigger performance problems that dmd's backend.

 Maybe you should take a look at what Go has recently done with 
 their GC to get an idea of what D's competition has been up to. 
 https://talks.golang.org/2015/go-gc.pdf
Aug 18 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 18 August 2015 at 21:41:26 UTC, deadalnix wrote:
 On Tuesday, 18 August 2015 at 21:26:43 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 21:18:34 UTC, rsw0x wrote:
 On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright 
 wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled 
 with gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.
retire dmd? this is ridiculous.
To further expand upon this, if you want to make D fast - Fix the interface between the compiler and the runtime(including the inability for compilers to inline simple things like allocations which makes allocations have massive overheads.) Then, fix the GC. Make the GC both shared and immutable aware, then moving the GC to a thread local "island"-style GC would be fairly easy. D's GC is probably the slowest GC of any major language available, and the entire thing is wrapped in mutexes.
I've been working on that for a while. It is definitively the right direction fro D IMO, but that is far from being "fairly easy".
 D has far, far bigger performance problems that dmd's backend.

 Maybe you should take a look at what Go has recently done with 
 their GC to get an idea of what D's competition has been up 
 to. https://talks.golang.org/2015/go-gc.pdf
I used 'fairly easy' in the 'the implementation is left to the reader' sort of way ;) But yes, we discussed this on Twitter and Walter confirmed what I thought would be a huge issue with this. Shared needs to be changed for a GC like that to be implemented. D's current GC could see improvements, but it will never ever catch up to the GC of any other major language without changes to the language itself.
Aug 18 2015
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 18 August 2015 at 21:48:13 UTC, rsw0x wrote:
 D's current GC could see improvements, but it will never ever 
 catch up to the GC of any other major language without changes 
 to the language itself.
There's been lots and lots and lots of forum discussions in the past few years of how the language can be changed to support better GC latency, but… Walter does not want to carry ownership information in pointer types. Which I don't think is all that consistent given const and shared, but there you go. So the most likely path to success is to minimize the role of the GC and leave it as a "prototyping tool".
Aug 18 2015
prev sibling next sibling parent reply "anonymous" <a b.cd> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.
I have about 30 lines of numerical code (using real) where the gap is about 200%-300% between ldc/gdc and dmd (linux x86_64). In fact dmd -O etc. is at the level of ldc/gdc without any optimizations, and dmd without -O is even slower. With double instead of real the gap is about 30%. dmd is unable to inline 3 function calls (pragma(inline, true) => compiler error), but for ldc disabling inlining does not really hurt performance. My knowledge of asm and compiler optimizations is quite limited and I can't figure out what dmd could do better. If someone is interested in investigating this, I can put the source + input file for the benchmark on github. Just ping me :) Notice: I use ldc/gdc anyway for such stuff, and imo the performance of dmd is not the most important issue with D - compared to, e.g., the C++ interface (mainly std::vector).
Aug 19 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 7:34 AM, anonymous wrote:
 I have about 30 lines of numerical code (using real) where the gap is about
 200%-300% between ldc/gdc and dmd (linux x86_64). In fact dmd -O etc. is at the
 level of ldc/gdc without any optimizations and dmd without -O is even slower.
 With double instead of real the gap is about 30%.
If it's just 30 lines of code, you can put it on bugzilla.
Aug 19 2015
parent reply "anonymous" <a b.cd> writes:
On Wednesday, 19 August 2015 at 17:30:13 UTC, Walter Bright wrote:
 On 8/19/2015 7:34 AM, anonymous wrote:
 I have a about 30 lines of numerical code (using real) where 
 the gap is about
 200%-300% between ldc/gdc and dmd (linux x86_64). In fact dmd 
 -O etc is at the
 level of ldc/gdc without any optimizations and dmd without -0 
 is even slower.
 With double instead of real the gap is about 30%.
If it's just 30 lines of code, you can put it on bugzilla.
The problem is not the 30 lines + white space but the input file used in my benchmark. The whole benchmark program has 115 lines including empty lines and braces. The input file is 4.8 MB large. Anyway, the raw asm generated by the different compilers may be helpful to the experts :) https://issues.dlang.org/show_bug.cgi?id=14937
Aug 19 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 12:39 PM, anonymous wrote:
 The problem are not the 30 lines + white space but the input file used in my
 benchmark. The whole benchmark programm has 115 lines including empty lines and
 braces. The input file is 4.8 MB large.

 Anyway the raw asm generated by the different compiler may be helpful to the
 expert:)

 https://issues.dlang.org/show_bug.cgi?id=14937
Thanks!
Aug 19 2015
prev sibling next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 August 2015 at 12:45, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 Martin ran some benchmarks recently that showed that ddmd compiled with
 dmd was about 30% slower than when compiled with gdc/ldc. This seems to be
 fairly typical.

 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and function inlining

 2. instruction selection patterns like should one generate:

     SETC AL
     MOVZ EAX,AL

 or:
     SBB EAX
     NEG EAX

 3. data flow analysis optimizations like constant propagation, dead code
 elimination, register allocation, loop invariants, etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and notice something
 that dmd could do better at (1, 2 or 3), please let me know. Often this
 sort of thing is low hanging fruit that is fairly easily inserted into the
 back end.

 For example, recently I improved the usage of the SETcc instructions.

 https://github.com/D-Programming-Language/dmd/pull/4901
 https://github.com/D-Programming-Language/dmd/pull/4904

 A while back I improved usage of BT instructions, the way switch
 statements were implemented, and fixed integer divide by a constant with
 multiply by its reciprocal.
You didn't fix integer divide on all targets? https://issues.dlang.org/show_bug.cgi?id=14936 (Consider this my contribution to your low hanging fruit)
Aug 19 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 9:53 AM, Iain Buclaw via Digitalmars-d wrote:
 https://issues.dlang.org/show_bug.cgi?id=14936

 (Consider this my contribution to your low hanging fruit)
Thanks!
Aug 19 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-08-18 12:45, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd compiled with
 dmd was about 30% slower than when compiled with gdc/ldc. This seems to
 be fairly typical.
Not sure how the compilers behave in this case but what about devirtualization? Since I think most developers compile their D programs with all files at once there should be pretty good opportunities to do devirtualization. -- /Jacob Carlborg
Aug 19 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2015 11:03 AM, Jacob Carlborg wrote:
 Not sure how the compilers behave in this case but what about devirtualization?
 Since I think most developers compile their D programs with all files at once
 there should be pretty good opportunities to do devirtualization.
It's true that if generating an exe, the compiler can mark leaf classes as final and get devirtualization. (Of course, you can manually add 'final' to classes.) It's one way D can generate faster code than C++.
Aug 19 2015
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 19 August 2015 at 18:41:07 UTC, Walter Bright wrote:
 On 8/19/2015 11:03 AM, Jacob Carlborg wrote:
 Not sure how the compilers behave in this case but what about 
 devirtualization?
 Since I think most developers compile their D programs with 
 all files at once
 there should be pretty good opportunities to do 
 devirtualization.
It's true that if generating an exe, the compiler can mark leaf classes as final and get devirtualization. (Of course, you can manually add 'final' to classes.) It's one way D can generate faster code than C++.
C++ also has final, and if I am not mistaken both LLVM and Visual C++ do devirtualization; not sure about other compilers.
Aug 19 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 19 August 2015 at 18:47:21 UTC, Paulo Pinto wrote:
 On Wednesday, 19 August 2015 at 18:41:07 UTC, Walter Bright 
 wrote:
 On 8/19/2015 11:03 AM, Jacob Carlborg wrote:
 Not sure how the compilers behave in this case but what about 
 devirtualization?
 Since I think most developers compile their D programs with 
 all files at once
 there should be pretty good opportunities to do 
 devirtualization.
It's true that if generating an exe, the compiler can mark leaf classes as final and get devirtualization. (Of course, you can manually add 'final' to classes.) It's one way D can generate faster code than C++.
C++ also has final and if I am not mistaken both LLVM and Visual C++ do devirtualization, not sure about other compilers.
GCC is much better than LLVM at this. This is an active area of work in both compilers right now. Note that combined with PGO, you can do some very nice speculative devirtualization. D has a problem here: templates are duck typed. That means the compiler can't know what possible instantiations may be done, especially when shared objects are involved. This is one area where stronger typing for metaprogramming is a win.
Aug 19 2015
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 19 August 2015 at 21:00, deadalnix via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Wednesday, 19 August 2015 at 18:47:21 UTC, Paulo Pinto wrote:

 On Wednesday, 19 August 2015 at 18:41:07 UTC, Walter Bright wrote:

 On 8/19/2015 11:03 AM, Jacob Carlborg wrote:

 Not sure how the compilers behave in this case but what about
 devirtualization?
 Since I think most developers compile their D programs with all files
 at once
 there should be pretty good opportunities to do devirtualization.
It's true that if generating an exe, the compiler can mark leaf classes as final and get devirtualization. (Of course, you can manually add 'final' to classes.) It's one way D can generate faster code than C++.
C++ also has final and if I am not mistaken both LLVM and Visual C++ do devirtualization, not sure about other compilers.
GCC is much better than LLVM at this. This is an active area of work in both compiler right now.
Can't speak for LLVM, but scope classes in GDC are *always* devirtualized because the compiler knows the vtable layout and uses constant propagation to find the direct call. You *could* do this with all classes in general, but this is a missed opportunity because the vtable is initialized in the library using memcpy, rather than by the compiler using a direct copy assignment. https://issues.dlang.org/show_bug.cgi?id=14912 I'm not sure even LTO/PGO could see through the memcpy to devirtualize even the most basic calls.
Aug 19 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-08-19 20:41, Walter Bright wrote:

 It's true that if generating an exe, the compiler can mark leaf classes
 as final and get devirtualization.
Since D has the "override" keyword it should be possible to devirtualize a call on a non-leaf class as well, as long as the method is not overridden.
 (Of course, you can manually add 'final' to classes.)
The same way as you can manually do many of the other optimizations as well. One case where "final" does not help is if there's a piece of code that is shared between two projects. In one project there is a subclass with a method overridden but in the other project there's not.

// shared code
module base;

class Base
{
    void foo () {}
}

// project A
import base;

class A : Base { }

// project B
import base;

class B : Base
{
    override void foo () {}
}

-- /Jacob Carlborg
Aug 19 2015
prev sibling next sibling parent reply "tsbockman" <thomas.bockman gmail.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and 
 function inlining

 2. instruction selection patterns like should one generate:

     SETC AL
     MOVZ EAX,AL

 or:
     SBB EAX
     NEG EAX

 3. data flow analysis optimizations like constant propagation, 
 dead code elimination, register allocation, loop invariants, 
 etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.

 For example, recently I improved the usage of the SETcc 
 instructions.

 https://github.com/D-Programming-Language/dmd/pull/4901
 https://github.com/D-Programming-Language/dmd/pull/4904

 A while back I improved usage of BT instructions, the way 
 switch statements were implemented, and fixed integer divide by 
 a constant with multiply by its reciprocal.
I lack the assembly language skills to determine the cause(s) myself, but my [CheckedInt](https://github.com/tsbockman/CheckedInt) benchmark runs about 10x slower when compiled with DMD rather than GDC. I'm sure there's some low-hanging fruit in there somewhere... Note that while it's far from being a minimal test case, the runtime code is nowhere near as complicated as it might appear at first - the vast majority of the complexity in the code is compile-time logic. I could produce a similar example without most of the compile-time obfuscation, if requested. Also note that the speed difference has nothing to do with the use of core.checkedint intrinsics, as it was there before those were implemented in GDC.
Aug 19 2015
parent reply "tsbockman" <thomas.bockman gmail.com> writes:
On Wednesday, 19 August 2015 at 22:15:46 UTC, tsbockman wrote:
 I lack the assembly language skills to determine the cause(s) 
 myself, but my 
 [CheckedInt](https://github.com/tsbockman/CheckedInt) benchmark 
 runs about 10x slower when compiled with DMD rather than GDC. 
 I'm sure there's some low-hanging fruit in there somewhere...
While doing some refactoring and updating CheckedInt for DMD 2.068, I have discovered one major source of slowness: DMD cannot inline even trivial struct constructors:

// Error: constructor foo.this cannot inline function
struct foo
{
    int bar;

    pragma(inline, true)
    this(int bar)
    {
        this.bar = bar;
    }
}

Refactoring my code to reduce the use of struct constructors yielded a 2x speed boost. The workaround is stupidly simple, though ugly:

struct foo
{
    int bar;

    pragma(inline, true)
    static auto inline_cons(int bar)
    {
        foo ret = void;
        ret.bar = bar;
        return ret;
    }
}
Aug 28 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 28, 2015 at 11:57:06PM +0000, tsbockman via Digitalmars-d wrote:
 On Wednesday, 19 August 2015 at 22:15:46 UTC, tsbockman wrote:
I lack the assembly language skills to determine the cause(s) myself,
but my [CheckedInt](https://github.com/tsbockman/CheckedInt)
benchmark runs about 10x slower when compiled with DMD rather than
GDC. I'm sure there's some low-hanging fruit in there somewhere...
While doing some refactoring and updating CheckedInt for DMD 2.068, I have discovered one major source of slowness: DMD cannot inline even trivial struct constructors:
[...] Sounds like very low-hanging fruit. Is there a bug reported for this yet? T -- Which is worse: ignorance or apathy? Who knows? Who cares? -- Erich Schubert
Aug 28 2015
parent "tsbockman" <thomas.bockman gmail.com> writes:
On Saturday, 29 August 2015 at 00:06:39 UTC, H. S. Teoh wrote:
 On Fri, Aug 28, 2015 at 11:57:06PM +0000, tsbockman via 
 Digitalmars-d wrote:
 On Wednesday, 19 August 2015 at 22:15:46 UTC, tsbockman wrote:
I lack the assembly language skills to determine the cause(s) 
myself, but my 
[CheckedInt](https://github.com/tsbockman/CheckedInt) 
benchmark runs about 10x slower when compiled with DMD rather 
than GDC. I'm sure there's some low-hanging fruit in there 
somewhere...
While doing some refactoring and updating CheckedInt for DMD 2.068, I have discovered one major source of slowness: DMD cannot inline even trivial struct constructors:
[...] Sounds like very low-hanging fruit. Is there a bug reported for this yet? T
I couldn't find one, so I made a new one just now: [Issue 14975](https://issues.dlang.org/show_bug.cgi?id=14975)
Aug 28 2015
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/18/2015 12:45 PM, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd compiled with
 dmd was about 30% slower than when compiled with gdc/ldc. This seems to
 be fairly typical.

 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and function inlining

 2. instruction selection patterns like should one generate:

      SETC AL
      MOVZ EAX,AL

 or:
      SBB EAX
      NEG EAX

 3. data flow analysis optimizations like constant propagation, dead code
 elimination, register allocation, loop invariants, etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and notice
 something that dmd could do better at (1, 2 or 3), please let me know.
 Often this sort of thing is low hanging fruit that is fairly easily
 inserted into the back end.

 For example, recently I improved the usage of the SETcc instructions.

 https://github.com/D-Programming-Language/dmd/pull/4901
 https://github.com/D-Programming-Language/dmd/pull/4904

 A while back I improved usage of BT instructions, the way switch
 statements were implemented, and fixed integer divide by a constant with
 multiply by its reciprocal.
Maybe relevant: There's some work on automatically discovering peephole optimizations that a compiler misses, e.g. http://blog.regehr.org/archives/1109
Aug 19 2015
parent "welkam" <wwwelkam gmail.com> writes:
On Wednesday, 19 August 2015 at 22:58:35 UTC, Timon Gehr wrote:
 Maybe relevant: There's some work on automatically discovering 
 peephole optimizations that a compiler misses, e.g. 
 http://blog.regehr.org/archives/1109
There is a talk of this https://www.youtube.com/watch?v=Ux0YnVEaI6A
Aug 19 2015
prev sibling next sibling parent reply "ixid" <adamsibson hotmail.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.
One of D's potential massive wins is speed, and I think that has the most easily conveyed impact on the audience. If we had the best benchmark site for a very large range of languages, it would not only draw people here but drive the improvement of D on all compilers (and perhaps allow us to make LDC/GDC for run speed, DMD for compile speed more explicit), as every time there is a benchmark contest we seem to find some small thing that needs a fix, and then D blows others away.

This would also convey more idiomatic D for performance, as D seems to suffer from people writing it as whatever language they come from. People love competitions, and the current benchmark site, which seems to weirdly dislike D, is one of people's go-to references. I do not have the ability to do this, but it would seem like an excellent project for someone outside the major development group, a Summer of Code-esque thing.
Aug 24 2015
next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Monday, 24 August 2015 at 08:59:57 UTC, ixid wrote:
 One of D's potential massive wins is speed and I think that has 
 the most easily conveyed impacted on the audience. If we had 
 the best benchmark site for a very large range of languages it 
 would not only draw people here but drive the improvement of D 
 on all compilers (and perhaps allow us to make LDC/GCD for run 
 speed, DMD for compile speed more explicit) as every time there 
 is a benchmark contest we seem to find some small thing that 
 needs a fix and then D blows others away. This would also 
 convey more idiomatic D for performance as D seems to suffer 
 from people writing it as whatever language they come from. 
 People love competitions, the current benchmark site that seems 
 to weirdly dislike D is one of people's go to references. I do 
 not have the ability to do this but it would seem like an 
 excellent project for someone outside the major development 
 group, a Summer of Code-esque thing.
You mean this site? http://benchmarksgame.alioth.debian.org/
Aug 24 2015
parent "ixid" <adamsibson hotmail.com> writes:
On Monday, 24 August 2015 at 13:45:58 UTC, jmh530 wrote:
 You mean this site?
 http://benchmarksgame.alioth.debian.org/
Yes, precisely that, but try to one-up it with more challenges.
Aug 24 2015
prev sibling parent reply "Isaac Gouy" <igouy2 yahoo.com> writes:
On Monday, 24 August 2015 at 08:59:57 UTC, ixid wrote:
-snip-
 People love competitions, the current benchmark site that seems 
 to weirdly dislike D is one of people's go to references. I do 
 not have the ability to do this but it would seem like an 
 excellent project for someone outside the major development 
 group, a Summer of Code-esque thing.
Lest we forget, this time last year -- http://forum.dlang.org/post/lv9s7n$1trl$1 digitalmars.com
Aug 24 2015
parent reply "ixid" <adamsibson hotmail.com> writes:
On Monday, 24 August 2015 at 15:03:48 UTC, Isaac Gouy wrote:
 On Monday, 24 August 2015 at 08:59:57 UTC, ixid wrote:
 -snip-
 People love competitions, the current benchmark site that 
 seems to weirdly dislike D is one of people's go to 
 references. I do not have the ability to do this but it would 
 seem like an excellent project for someone outside the major 
 development group, a Summer of Code-esque thing.
Lest we forget, this time last year -- http://forum.dlang.org/post/lv9s7n$1trl$1 digitalmars.com
Yes, it requires someone to pick up the baton for what is clearly a very significant task. Your site is excellent and it's very unfortunate that D is absent.
Aug 24 2015
parent "Isaac Gouy" <igouy2 yahoo.com> writes:
On Monday, 24 August 2015 at 15:36:42 UTC, ixid wrote:
-snip-
 Yes, it requires someone to pick up the baton for what is 
 clearly a very significant task. Your site is excellent and 
 it's very unfortunate that D is absent.
iirc I asked Peter Alexander about progress last December and he had successfully used the provided scripts without any difficulty. Someone has published a Python comparison website (even re-using the PHP scripts as-is!) without needing to ask me any questions at all -- http://pybenchmarks.org/ It just needs "someone to pick up the baton" and do it, instead of talking about doing it.
Aug 24 2015
prev sibling next sibling parent reply "Daniel N" <ufo orbiting.us> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and 
 function inlining

 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know.
One low-hanging fruit coming right up: https://issues.dlang.org/show_bug.cgi?id=14840
Aug 29 2015
parent reply "Temtaime" <temtaime gmail.com> writes:
You misunderstood me.
It can definitely be done. The question is about rationality and 
quality.
Okay, you've done a C++ compiler. Nobody uses that compiler in 
real projects. It cannot even compile curl or another large 
project.

You've done x86 and x64 backends and plan to make an ARM one. Okay, 
but in benchmarks it will always be behind gcc/llvm; get over it.

That's a reality.
Aug 29 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 29 August 2015 at 12:38:45 UTC, Temtaime wrote:
 Okay, you've done a C++ compiler. Nobody uses that compiler in 
 real projects.
For about a decade, dmc was outcompeting the efforts of big companies in features, compile speed, code optimization, AND stability. Sure, it has fallen behind now, but only because Walter sat down for 15 years so they could catch up.... (time he used to get streets ahead by creating this thing called 'Mars', which again the big guys are trying to catch up to). I'm happy with the codegen the way it is, it is good enough for me, but let's not make mountains out of hills.
Aug 29 2015
next sibling parent "Temtaime" <temtaime gmail.com> writes:
On Saturday, 29 August 2015 at 12:59:59 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 August 2015 at 12:38:45 UTC, Temtaime wrote:
 Okay, you've done a C++ compiler. Nobody uses that compiler in 
 real projects.
For about a decade, dmc was outcompeting the efforts of big companies in features, compile speed, code optimization, AND stability. Sure, it has fallen behind now, but only because Walter sat down for 15 years so they could catch up.... (time he used to get streets ahead by creating this thing called 'Mars', which again the big guys are trying to catch up to). I'm happy with the codegen the way it is, it is good enough for me, but let's not make mountains out of hills.
I'm writing a 3D engine in D. There's a lot of math. In my benchmarks, when it's compiled with DMD there's ~100 fps, and with LDC the fps rises to ~500. LDC vectorizes all operations with matrices, inlines, etc. If it's good enough for you, it's not so for others. Quality of codegen is not only performance, but also how much battery apps drain on portables.
Aug 29 2015
prev sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Saturday, 29 August 2015 at 12:59:59 UTC, Adam D. Ruppe wrote:
 I'm happy with the codegen the way it is, it is good enough for 
 me, but let's not make mountains out of hills.
But the fact is that many people are not. Even the core language team, who doesn't want their compiler to get 30% slower on the next release. — David
Aug 29 2015
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Aug 29, 2015 at 01:06:56PM +0000, David Nadlinger via Digitalmars-d
wrote:
 On Saturday, 29 August 2015 at 12:59:59 UTC, Adam D. Ruppe wrote:
I'm happy with the codegen the way it is, it is good enough for me,
but let's not make mountains out of hills.
But the fact is that many people are not. Even the core language team, who doesn't want their compiler to get 30% slower on the next release.
[...] The good thing about switching to DDMD for the next release is that (finally) we are forced to address codegen issues. I hope this will finally get Walter going on codegen improvements, which I'm quite sure he's well capable of but just hasn't gotten around to until now. Here's to hoping https://issues.dlang.org/show_bug.cgi?id=14943 will be fixed by next release... T -- "No, John. I want formats that are actually useful, rather than over-featured megaliths that address all questions by piling on ridiculous internal links in forms which are hideously over-complex." -- Simon St. Laurent on xml-dev
Aug 29 2015
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 29 August 2015 at 13:06:58 UTC, David Nadlinger 
wrote:
 But the fact is that many people are not. Even the core 
 language team, who doesn't want their compiler to get 30% 
 slower on the next release.
That's why your work on ldc and Iain's on gdc matters! I don't object to work on dmd's optimizations, but I'm ok with them staying the way they are too since ldc and gdc are fairly easy to use now.
Aug 29 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 29 August 2015 at 13:06:58 UTC, David Nadlinger 
wrote:
 On Saturday, 29 August 2015 at 12:59:59 UTC, Adam D. Ruppe 
 wrote:
 I'm happy with the codegen the way it is, it is good enough 
 for me, but let's not make mountains out of hills.
But the fact is that many people are not. Even the core language team, who doesn't want their compiler to get 30% slower on the next release.
LOL. I've actually run into a fun problem with the program that I use to update dmd on my box. It needs root permissions to install dmd, so it does sudo up front to get the password and then reruns it before every command to reset the sudo timer. Before dmd switched to the D front-end, that worked, and I didn't have to type in my password again. So, I could just kick off the update program and leave. However, after the switch to D, the Phobos build and tests take longer than 5 minutes (or whatever the exact sudo timeout is), and I keep having to rerun my program to update dmd, because I run it and forget about it, and sudo eventually times out waiting for me to type in the password and terminates the update program. If my computer were faster, this wouldn't be a problem, but it worked prior to the move to D, and it doesn't now. So, from that standpoint, the 30% loss of speed in dmd is already costing me.

However, the biggest problem with dmd's slow codegen is probably ultimately PR. It's the reference compiler, so it's what folks are going to grab first and what folks are most likely to compare with their C++ code. That's comparing apples to oranges, but they'll do it. And a slow dmd will cost us on some level in that regard. The folks who know what they're doing and care about performance enough will use ldc or gdc, but it's not what the newcomers are likely to grab.

- Jonathan M Davis
Aug 29 2015
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/30/2015 01:10 AM, Jonathan M Davis wrote:
 On Saturday, 29 August 2015 at 13:06:58 UTC, David Nadlinger wrote:
 On Saturday, 29 August 2015 at 12:59:59 UTC, Adam D. Ruppe wrote:
 I'm happy with the codegen the way it is, it is good enough for me,
 but let's not make mountains out of hills.
But the fact is that many people are not. Even the core language team, who doesn't want their compiler to get 30% slower on the next release.
LOL. I've actually run into a fun problem with the program that I use to update dmd on my box. It needs root permissions to install dmd, so it does sudo up front to get the password and then reruns it before every command to reset the sudo timer. Before dmd switched to the D front-end, that worked, and I didn't have to type in my password again. So, I could just kick off the update program and leave. However, after the switch to D, the Phobos build and tests take longer than 5 minutes (or whatever the exact sudo timeout is), and I keep having to rerun my program to update dmd, because I run it and forget about it, and sudo eventually times out waiting for me to type in the password and terminates the update program. If my computer were faster, this wouldn't be a problem, but it worked prior to the move to D, and it doesn't now. So, from that standpoint, the 30% loss of speed in dmd is already costing me.
I think there's a way to increase the default timeout. :-)
Aug 29 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 29 August 2015 at 23:16:19 UTC, Timon Gehr wrote:
 On 08/30/2015 01:10 AM, Jonathan M Davis wrote:
 LOL. I've actually run into a fun problem with the program 
 that I use to
 update dmd on my box. It needs root permissions to install 
 dmd, so it
 does sudo up front to get the password and then reruns it 
 before every
 command to reset the sudo timer. Before dmd switched to the D 
 front-end,
 that worked, and I didn't have to type in my password again. 
 So, I could
 just kick off the update program and leave. However, after the 
 switch to
 D, the Phobos build and tests take longer than 5 minutes (or 
 whatever
 the exact sudo timeout is), and I keep having to rerun my 
 program to
 update dmd, because I run it and forget about it, and sudo 
 eventually
 times out waiting for me to type in the password and 
 terminates the
 update program. If my computer were faster, this wouldn't be a 
 problem,
 but it worked prior to the move to D, and it doesn't now. So, 
 from that
 standpoint, the 30% loss of speed in dmd is already costing me.
I think there's a way to increase the default timeout. :-)
LOL. Likely so, but I'd prefer not to have to muck with my system settings just to be able to deal with the dmd build. What would probably be better would be to fix it so that the update program can continue from where it was left off if it needs to... Regardless, this situation caught me by surprise, and it was a result of dmd's loss in performance with the switch to D. - Jonathan M Davis
Aug 29 2015
prev sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 29 August 2015 at 23:10:29 UTC, Jonathan M Davis 
wrote:
 It's the reference compiler, so it's what folks are going to 
 grab first and what folks are most likely to compare with their 
 C++ code.
Maybe we should rebrand it to be the "development preview compiler" and make gdc the "stable production compiler". or something.
Aug 29 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Sunday, 30 August 2015 at 02:13:59 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 August 2015 at 23:10:29 UTC, Jonathan M Davis 
 wrote:
 It's the reference compiler, so it's what folks are going to 
 grab first and what folks are most likely to compare with 
 their C++ code.
Maybe we should rebrand it to be the "development preview compiler" and make gdc the "stable production compiler". or something.
Then people write code that works on dmd, just to realize GDC is 
2-3 versions behind and doesn't compile their code (no offense 
meant to the GDC team).
Aug 29 2015
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 30 August 2015 at 02:44:51 UTC, rsw0x wrote:
 Maybe we should rebrand it to be the "development preview 
 compiler" and make gdc the "stable production compiler". or 
 something.
Then people write code that works on dmd just to realize GDC is 2-3 versions behind and doesn't compile their code(no offense meant to the GDC team)
That would be expected if dmd is the preview release and gdc is the stable release. It'd be like using 2.1-alpha and expecting all the new stuff to still work when you switch to 2.0.
Aug 29 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 Aug 2015 4:45 am, "rsw0x via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Sunday, 30 August 2015 at 02:13:59 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 August 2015 at 23:10:29 UTC, Jonathan M Davis wrote:
 It's the reference compiler, so it's what folks are going to grab first
and what folks are most likely to compare with their C++ code.
 Maybe we should rebrand it to be the "development preview compiler" and
make gdc the "stable production compiler". or something.
 Then people write code that works on dmd just to realize GDC is 2-3
versions behind and doesn't compile their code (no offense meant to the GDC team)

I made a chart around 6 months back that tracks the timeline of gdc vs. dmd versions of D2, and in it you can see a clear change of pattern from when it was just Walter maintaining and releasing to when dmd switched over to GitHub and the core language team was founded. But someone else or I should make a few more charts, such as one that tracks code changes between releases, to get a wider picture of why other compiler vendors appear to fall behind. Iain.
Aug 30 2015
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 18-Aug-2015 13:45, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd compiled with
 dmd was about 30% slower than when compiled with gdc/ldc. This seems to
 be fairly typical.

 I'm interested in ways to reduce that gap.
..
 2. instruction selection patterns like should one generate:

      SETC AL
      MOVZ EAX,AL

 or:
      SBB EAX
      NEG EAX
See section "Problematic instructions" here: http://www.agner.org/optimize/optimizing_assembly.pdf And some invaluable material on each CPU specifics for all x86 from Pentium to Haswell and AMD from K6 toBuldozer: http://www.agner.org/optimize/microarchitecture.pdf Hope this helps. -- Dmitry Olshansky
Sep 05 2015
prev sibling parent reply BBasile <bb.temp gmx.com> writes:
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote:
 Martin ran some benchmarks recently that showed that ddmd 
 compiled with dmd was about 30% slower than when compiled with 
 gdc/ldc. This seems to be fairly typical.

 I'm interested in ways to reduce that gap.

 There are 3 broad kinds of optimizations that compilers do:

 1. source translations like rewriting x*2 into x<<1, and 
 function inlining

 2. instruction selection patterns like should one generate:

     SETC AL
     MOVZ EAX,AL

 or:
     SBB EAX
     NEG EAX

 3. data flow analysis optimizations like constant propagation, 
 dead code elimination, register allocation, loop invariants, 
 etc.

 Modern compilers (including dmd) do all three.

 So if you're comparing code generated by dmd/gdc/ldc, and 
 notice something that dmd could do better at (1, 2 or 3), 
 please let me know. Often this sort of thing is low hanging 
 fruit that is fairly easily inserted into the back end.

 For example, recently I improved the usage of the SETcc 
 instructions.

 https://github.com/D-Programming-Language/dmd/pull/4901
 https://github.com/D-Programming-Language/dmd/pull/4904

 A while back I improved usage of BT instructions, the way 
 switch statements were implemented, and fixed integer divide by 
 a constant with multiply by its reciprocal.
Maybe the ENTER instruction should be replaced by a full prologue:

- https://github.com/D-Programming-Language/dmd/blob/ef24f9acd99aa52ed28e7221cb0997099ab85f4a/src/backend/cod3.c#L2939
- http://stackoverflow.com/questions/5959890/enter-vs-push-ebp-mov-ebp-esp-sub-esp-imm-and-leave-vs-mov-esp-ebp

It seems that since the Pentium I, ENTER has always been slower, but I don't know whether it's used as an optimization for binary size. Actually, before using DMD I had **never** seen an ENTER.
Sep 13 2015
next sibling parent ponce <contact gam3sfrommars.fr> writes:
On Sunday, 13 September 2015 at 17:30:12 UTC, BBasile wrote:
 It seems that since the Pentium I, ENTER is always slower. But 
 i don't know if it's used as a kind of optimization for the 
 binary size. Actually before using DMD I had **never** seen an 
 ENTER.
Same here, I thought nobody used this one instruction.
Sep 13 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 09/13/2015 07:30 PM, BBasile wrote:
 It seems that since the Pentium I, ENTER is always slower. But i don't
 know if it's used as a kind of optimization for the binary size.
 Actually before using DMD I had **never** seen an ENTER.
https://github.com/D-Programming-Language/dmd/pull/5073
Sep 13 2015
parent reply BBasile <bb.temp gmx.com> writes:
On Sunday, 13 September 2015 at 18:33:52 UTC, Martin Nowak wrote:
 On 09/13/2015 07:30 PM, BBasile wrote:
 It seems that since the Pentium I, ENTER is always slower. But 
 i don't know if it's used as a kind of optimization for the 
 binary size. Actually before using DMD I had **never** seen an 
 ENTER.
https://github.com/D-Programming-Language/dmd/pull/5073
Yeah, that was fast. Here's hoping it'll be approved.
Sep 13 2015
parent Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 09/13/2015 08:45 PM, BBasile wrote:
 Yeah, that was fast. With the hope it'll be approved.
If only it wasn't for me to do this...
Sep 13 2015