
digitalmars.D - Why C++ compiles slowly

Walter Bright <newshound2 digitalmars.com> writes:
http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

I'll be doing a followup on why D compiles fast.
Aug 18 2010
Marianne Gagnon <auria.mg gmail.com> writes:
Very right, and I might add one more thing: the STL itself is just HUGE;
and unless you live in a shell, you're going to use some library; that
library will in all likelihood include the STL directly or indirectly; and
every one of your files ends up rebuilding the entire STL every time it's
built.

 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
 
 I'll be doing a followup on why D compiles fast.
Aug 18 2010
Justin Johansson <no spam.com> writes:
On 19/08/10 10:35, Walter Bright wrote:
 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
While I am not a compiler writer, I do have a fairly good understanding of compiler mechanics. I think the length and depth of your article is just about right; accordingly, I found it sufficiently concise in explaining the issues with C++ compilation speed that one does not need much further explanation. May I join others in looking forward to the part 2 follow-up on why D compiles fast.

Cheers,
Justin Johansson
Aug 18 2010
bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
 I'll be doing a followup on why D compiles fast.
Thank you, the article is nice and I didn't know most of the things it contains.
the compiler is doomed to uselessly reprocess them when one file is #include'd
multiple times, even if it is protected by #ifndef pairs. (Kenneth Boyd tells
me that upon careful reading the Standard may allow a compiler to skip
reprocessing #include's protected by #ifndef pairs. I don't know which
compilers, if any, take advantage of this.)<
Probably the latest GCC versions are able to do that. And then there is #pragma once too: http://en.wikipedia.org/wiki/Pragma_once
Just #include'ing the Standard results, on Ubuntu, in 74 files being read of
37,687 lines (not including any lines from multiple #include's of the same
file).<
As a benchmark for this, Clang (the C/C++ compiler based on LLVM) uses a small program (~7,500 lines of Objective-C) that includes Cocoa/Cocoa.h, which is quite large: http://clang.llvm.org/performance.html

Bye,
bearophile
Aug 18 2010
Walter Bright <newshound2 digitalmars.com> writes:
Walter Bright wrote:
 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
 
 I'll be doing a followup on why D compiles fast.
On reddit: http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/
Aug 19 2010
Walter Bright <newshound2 digitalmars.com> writes:
Walter Bright wrote:
 Walter Bright wrote:
 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
On reddit: http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/
Hacker News: http://news.ycombinator.com/item?id=1617133
Aug 19 2010
Seth Hoenig <seth.a.hoenig gmail.com> writes:
On Thu, Aug 19, 2010 at 4:45 AM, Walter Bright
<newshound2 digitalmars.com>wrote:

 Walter Bright wrote:

 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
On reddit: http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/
Thanks for the free Karma, btw :P
Aug 19 2010
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 08/19/2010 09:53 PM, Seth Hoenig wrote:
 On Thu, Aug 19, 2010 at 4:45 AM, Walter Bright
 <newshound2 digitalmars.com <mailto:newshound2 digitalmars.com>> wrote:

     Walter Bright wrote:

         http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

         I'll be doing a followup on why D compiles fast.


     On reddit:

     http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/



 Thanks for the free Karma, btw     :P
At over 200 points, that was a home run. I think it would be really classy if Walter did /not/ write "Why D compiles quickly" for his next installment.

Andrei
Aug 19 2010
BCS <none anon.com> writes:
Hello Andrei,

 I think it would be really classy if Walter did /not/ write "Why D
 compiles quickly" for his next installment.
 
Maybe hold off till the one after that? If he doesn't do it sometime, I'll be bummed.

-- ... <IXOYE><
Aug 19 2010
Eldar Insafutdinov <e.insafutdinov gmail.com> writes:
 I'll be doing a followup on why D compiles fast.
I will say the contrary. Compilation time for medium-size projects doesn't matter in either language. But when the size of your project starts getting very big, you will have trouble in D because there is no incremental compilation. You will end up recompiling the whole thing each time, which will take longer than just recompiling a single file in C++. Please be sure to mention this in your next article, otherwise it is false advertising. Of course it is not a language issue, but an issue of its only implementation.

P.S. This problem was raised many times here by Tomasz Stachowiak.
Aug 19 2010
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 08/19/2010 07:48 AM, Eldar Insafutdinov wrote:
 I'll be doing a followup on why D compiles fast.
I will say the contrary. Compiling medium size projects doesn't matter in either language. But when the size of your project starts getting very big you will have troubles in D because there is no incremental compilation.
I'm a bit confused - how do you define incremental compilation? The build system can be easily set up to compile individual D files to object files, and then use the linker in a traditional manner.
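(For illustration, a minimal sketch of such a setup, using two hypothetical modules a.d and b.d and ordinary dmd flags:)

  dmd -c a.d          # compile a.d by itself, producing a.o
  dmd -c b.d          # compile b.d by itself, producing b.o
  dmd a.o b.o -ofapp  # traditional link step, nothing recompiled

  # after editing only b.d, rebuild just that object and relink:
  dmd -c b.d
  dmd a.o b.o -ofapp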
 You will end up recompiling the whole thing each time, which will take
 longer than just recompiling a single file in C++. Please be sure to
 mention this in your next article, otherwise it is false advertising. Of
 course it is not a language issue, but an issue of its only
 implementation.

 P.S. This problem was raised many times here by Tomasz Stachowiak.
I'm not sure about that. On the large C++ systems I work on, compilation is absolute agony. I don't think that sets the bar too high.

Andrei
Aug 19 2010
Eldar Insafutdinov <e.insafutdinov gmail.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article

 I'm a bit confused - how do you define incremental compilation? The
 build system can be easily set up to compile individual D files to
 object files, and then use the linker in a traditional manner.
I am not sure here; you'd better check Tomasz Stachowiak's posts on that. There was something wrong with how dmd emits template instantiations. He had to create a custom build tool that does some hackery. From my experience with D you just can't do that: I get weird errors and I end up rebuilding the whole thing.
 I'm not sure about that. On the large C++ systems I work on, compilation
 is absolute agony. I don't think that that sets the bar too high.
 Andrei
Can you please elaborate on that? From my experience and understanding, if you modify one .cpp file, for instance, only that file will be recompiled; then the project is linked and ready to run. If you modify a header (which happens less often), the build system recompiles only the files that include it. And I use make -j of course, which makes things even easier.
Aug 19 2010
Sean Kelly <sean invisibleduck.org> writes:
Eldar Insafutdinov Wrote:

 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 
 I'm a bit confused - how do you define incremental compilation? The
 build system can be easily set up to compile individual D files to
 object files, and then use the linker in a traditional manner.
I am not sure here, you'd better check that in posts of Tomasz Stachowiak. There was something wrong with how dmd emits template instantiations. He had to create a custom build tool that does some hackery. From my experience with D you just can't do that. I get weird errors and I end up rebuilding the whole thing.
There used to be a number of issues with where TypeInfo was generated, references to in/out contracts and other auto-generated functions, etc, but I think they've all been addressed.
 I'm not sure about that. On the large C++ systems I work on, compilation
 is absolute agony. I don't think that that sets the bar too high.
 Andrei
Can you please elaborate on that? From my experience and understanding if you modify one cpp file for instance, only this file will be recompiled, then the project is linked and ready to be run. If you modify a header(which happens less often) the build system quite fairly recompiles files that include it. And I use make -j of course, which makes things even easier.
It's always possible to use headers in D as well, though I think the tipping point is far different from where it is in C++.
Aug 19 2010
Leandro Lucarella <luca llucax.com.ar> writes:
Andrei Alexandrescu, on August 19 at 08:50, you wrote to me:
 On 08/19/2010 07:48 AM, Eldar Insafutdinov wrote:
I'll be doing a followup on why D compiles fast.
I will say the contrary. Compiling medium size projects doesn't matter in either language. But when the size of your project starts getting very big you will have troubles in D because there is no incremental compilation.
I'm a bit confused - how do you define incremental compilation? The build system can be easily set up to compile individual D files to object files, and then use the linker in a traditional manner.
I think in D you can do the same level of incremental compilation as in C/C++, but it is not as natural. For one, in D it is not natural to separate declarations from definitions, so a file in D tends to depend on *many* *many* other files because of excessive imports. So even though you can do separate compilation, unless you are *extremely* careful (much more than in C/C++, I think) you'll end up having to recompile the whole project even when you change just one file, because of the dependency madness.

I know you can do separate compilation as in C/C++ by writing the declarations in a different file, or by generating/using .di files, but you'll probably also end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in dependency madness anyway. It's just not natural to do so in D; the language even encourages not doing it, since one of its main advertised features is that you don't have to separate declarations from definitions.

And I'm not saying that this is an easy problem to solve, I'm just saying that I agree D doesn't scale well in terms of incremental compilation for big projects, unless you go against D's natural way of doing things.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
There are hands capable of making tools with which machines are made to
make computers that in turn design machines that make tools for the hand
to use
Aug 19 2010
dsimcha <dsimcha yahoo.com> writes:
== Quote from Leandro Lucarella (luca llucax.com.ar)'s article
 I know you can do separate compilation as in C/C++ writing the
 declarations in a different file, or generating/using .di files, but
 also you'll probably end up using libraries that don't do that (as
 somebody mentioned for C++ + STL) and end up in a dependency madness
 anyway. It's just not natural to do so in D, it even encourages not
 doing it as one of the main advertised features is you don't have to
 separate declarations from definitions.
 And I'm not saying that is an easy to solve problem, I'm just saying
 that I agree D doesn't scale well in terms of incremental compilations
 for big projects, unless you go against D natural way on doing things.
I think this is a perfectly reasonable design principle. Sometimes you have to resort to things that are ugly, unsafe, a PITA, etc. to deal with some practical reality. What D gets right is that you shouldn't have to be burdened with it when you don't need it: the simple, clean, safe way that works most of the time should be the idiomatic way, but the ugly/unsafe/inconvenient way that works in the corner cases should be available, even if no serious effort is put into making it not ugly/unsafe/inconvenient.

Languages like C++ and Java tend to ignore the simple, common case and force you to do things the hard way all the time, even when you don't need the benefits of doing things the hard way. Thus, these languages are utterly useless for anything but huge, enterprisey projects.
Aug 19 2010
bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 What D gets right is that you shouldn't have to be burdened with it when
 you don't need it, and the simple, clean, safe way that works most of the
 time should be the idiomatic way, but the ugly/unsafe/inconvenient way
 that works in the corner cases should be available, even if no serious
 effort is put into making it not ugly/unsafe/inconvenient.

 Languages like C++ and Java tend to ignore the simple, common case and
 force you to do things the hard way all the time, even when you don't
 need the benefits of doing things the hard way.  Thus, these languages
 are utterly useless for anything but huge, enterprisey projects.
When you compile a Java program, the compiler is able to find and fetch the files it needs. DMD isn't able to. So Java is handier for small projects composed of something like 10-20 files, and I don't agree with you. (It's a feature I asked for in my second message on the D newsgroups.)

Bye,
bearophile
Aug 19 2010
retard <re tard.com.invalid> writes:
Thu, 19 Aug 2010 15:52:25 -0400, bearophile wrote:

 dsimcha:
 What D gets right is that you shouldn't have to be burdened with it
 when you don't need it, and the simple, clean, safe way that works most
 of the time should be the idiomatic way, but the
 ugly/unsafe/inconvenient way that works in the corner cases should be
 available, even if no serious effort is put into making it not
 ugly/unsafe/inconvenient.
 
 Languages like C++ and Java tend to ignore the simple, common case and
 force you to do things the hard way all the time, even when you don't
 need the benefits of doing things the hard way.  Thus, these languages
 are utterly useless for anything but huge, enterprisey projects.
When you compile a Java program the compiler is able to find and fetch the files it needs. DMD isn't able to. So Java is more handy for small projects composed of something like 10-20 files. So I don't agree with you. (It's a feature I've asked for in my second message on the D newsgroups.)
Having written several university assignments in Java, small (< 500 LOC) to medium size (50000 LOC), I haven't encountered a single compilation-related problem. One exception to this is some bindings to native code libraries -- you need to be careful with URLs when packaging external libraries inside a JAR. The class-centric programming paradigm often gets in your way when programming in the small, but it's quite acceptable at large scale IMO. How is Java so utterly useless and D much better? Any use cases?
Aug 19 2010
dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Thu, 19 Aug 2010 15:52:25 -0400, bearophile wrote:
 dsimcha:
 What D gets right is that you shouldn't have to be burdened with it
 when you don't need it, and the simple, clean, safe way that works most
 of the time should be the idiomatic way, but the
 ugly/unsafe/inconvenient way that works in the corner cases should be
 available, even if no serious effort is put into making it not
 ugly/unsafe/inconvenient.

 Languages like C++ and Java tend to ignore the simple, common case and
 force you to do things the hard way all the time, even when you don't
 need the benefits of doing things the hard way.  Thus, these languages
 are utterly useless for anything but huge, enterprisey projects.
When you compile a Java program the compiler is able to find and fetch the files it needs. DMD isn't able to. So Java is more handy for small projects composed of something like 10-20 files. So I don't agree with you. (It's a feature I've asked for in my second message on the D newsgroups.)
Having written several university assignments in Java, small (< 500 LOC) to medium size (50000 LOC), I haven't encountered a single compilation related problem. One exception to this are some bindings to native code libraries -- you need to be careful with URLs when packaging external libraries inside a JAR. The class centric programming paradigm often gets in your way when programming in the small, but it's quite acceptable on large scale IMO. How is Java so utterly useless and D much better? Any use cases?
I didn't mean my comment in terms of the compilation system. I meant it as a more general statement of how these languages eschew convenience features. Examples:

The class-centric paradigm is one example.

The ridiculously fine-grained standard library import system. If you really want to make your imports this fine-grained, you should use selective imports.

Strictly explicit, nominative typing.

Lack of higher-order functions and closures, just because you **can** simulate these with classes, even though this is horribly verbose.

No RAII, scope statements, or anything similar, just because you **can** get by with finally statements, even though this is again horribly verbose, error-prone and unreadable.

The requirement that you only have one top-level, public class per file.

Lack of default function arguments, just because these **can** be simulated with overloading, even though this is ridiculously verbose.

Lack of operator overloading, just because you **can** use regular method calls, even though properly used operator overloading makes code much more succinct and readable.
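(For illustration, a minimal D sketch, with made-up names, of a few of the conveniences above -- a selective import, operator overloading, a default argument, and a scope statement:)

  import std.stdio : writeln;             // selective import

  struct Vec
  {
      double x, y;
      // operator overloading instead of a named add() method
      Vec opBinary(string op : "+")(Vec rhs)
      {
          return Vec(x + rhs.x, y + rhs.y);
      }
  }

  // a default argument instead of a second overload
  void greet(string name, string greeting = "hello")
  {
      writeln(greeting, ", ", name);
  }

  void main()
  {
      scope(exit) writeln("done");        // replaces a try/finally block
      greet("world");                     // prints "hello, world"
      writeln(Vec(1, 2) + Vec(3, 4));     // componentwise sum
  }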
Aug 19 2010
"Nick Sabalausky" <a a.a> writes:
"dsimcha" <dsimcha yahoo.com> wrote in message 
news:i4k4b4$jsj$1 digitalmars.com...
 I didn't mean my comment in terms of the compilation system.  I meant it 
 as a more
 general statement of how these languages eschew convenience features. 
 Examples:

 The class centric paradigm is one example.

 The ridiculously fine grained standard library import system.  If you 
 really want
 to make your imports this fine-grained, you should use selective imports.

 Strictly explicit, nominative typing.

 Lack of higher order functions and closure just because you **can** 
 simulate these
 with classes, even though this is horribly verbose.

 No RAII, scope statements, or anything similar just because you **can** 
 get by
 with finally statements, even though this is again horribly verbose, 
 error-prone
 and unreadable.

 The requirement that you only have one top-level, public class per file.

 Lack of default function arguments just because these **can** be simulated 
 with
 overloading, even though this is ridiculously verbose.

 Lack of operator overloading just because you **can** use regular method 
 calls,
 even though properly used operator overloading makes code much more 
 succinct and
 readable.
Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (ie, entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).
Aug 19 2010
Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Yea. If Java's design philosophy were a valid one, there would never have 
 been any reason to move beyond Altair-style programming (ie, entering 
 machine code (not asm) in binary, one byte at a time, via physical toggle 
 switches). You *can* do anything you need like that (It's Turing-complete!).
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
Aug 19 2010
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On Fri, Aug 20, 2010 at 2:48 AM, Walter Bright
<newshound2 digitalmars.com> wrote:
 Nick Sabalausky wrote:
 Yea. If Java's design philosophy were a valid one, there would never have
 been any reason to move beyond Altair-style programming (ie, entering
 machine code (not asm) in binary, one byte at a time, via physical toggle
 switches). You *can* do anything you need like that (It's Turing-complete!).
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
There's even a book about it! [pdf] http://www.cs.rit.edu/~ats/books/ooc.pdf I've never read it though. You could do OOP in HLA (of course nobody treats that as a real assembler :p. But the book that comes with it is great.).
Aug 19 2010
bearophile <bearophileHUGS lycos.com> writes:
Andrej Mitrovic:
 You could do OOP in HLA (of course nobody treats that as a real
 assembler :p. But the book that comes with it is great.).
I might like to see the built-in asm of D replaced by HLA :-)

Bye,
bearophile
Aug 19 2010
"Simen kjaeraas" <simen.kjaras gmail.com> writes:
bearophile <bearophileHUGS lycos.com> wrote:

 Andrej Mitrovic:
 You could do OOP in HLA (of course nobody treats that as a real
 assembler :p. But the book that comes with it is great.).
I may like to see the built-in asm of D replaced by HLA :-)
But why? Could you not simply drop in and out of assembly and use D for flow-control and the like? -- Simen
Aug 20 2010
bearophile <bearophileHUGS lycos.com> writes:
Simen kjaeraas:
 But why? Could you not simply drop in and out of assembly and use
 D for flow-control and the like?
I don't know. I think that every time you drop in and out of assembly, unless you use naked assembly, the compiler adds some leading and trailing instructions.

In biological and technological evolution, most changes happen when a new "species" appears; afterwards the "species" is almost frozen, and changes appear only very slowly. So, for example, you see many improvements in Java compared to many years of C/C++ evolution. This is part of the Punctuated Equilibria theory by S. J. Gould and others, and it's not specific to biological evolution; it's a property of dynamic systems that are evolving.

Assembly and assemblers were born many years ago, and even if today we have invented many better ideas for software, those ideas are usually not applied to the asm world. The good thing about HLA is that it tries to break some of that tradition and bring a bit of innovation to the world of asm programming, and it does that well enough (even though the innovations it brings are probably mostly 30 years old, about as new as the original Pascal; there are far newer ideas that may be applied to asm programming. Some newer ideas can be seen in CorePy: http://www.corepy.org/ , which lets you write computational kernels in Python that are usually faster than D code).

This is why there are moments when I'd like a more modern asm inside D. I've written a few hundred lines of asm code inside D programs; this is not a lot, it's just a bit of code, but for me it's a pain to write asm normally, and I can see tens of ways to improve that work of mine :-)

Bye,
bearophile
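(For illustration of the prologue/epilogue point, a minimal sketch of the naked escape hatch in DMD's x86 inline assembler, which suppresses the compiler-generated entry/exit code:)

  int fortyTwo()
  {
      asm
      {
          naked;        // no compiler-generated prologue/epilogue
          mov EAX, 42;  // the return value is expected in EAX
          ret;
      }
  }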
Aug 20 2010
Adam Ruppe <destructionator gmail.com> writes:
Glancing over it really quickly, High Level Assembly is /completely
insane/. The whole point of writing assembly language is to see and
write exactly what the computer sees and executes. This makes it
useful for coding, and also very easy to read (in the small, at
least).

The HLA examples on Wikipedia are horribly ugly messes of macros and
other weird stuff. It is like a cross of Perl and C++!

The Microsoft assembler used to have a whole bunch of weird macro
capabilities and strange syntax. I hated it. This looks like that
turned up to 11.


D's assembler is almost perfect... it integrates without hassle, it
gives you what you need, and it is very read/writable. The only
complaint I have with it is that you have to capitalize register
names. Blargh.
Aug 20 2010
Walter Bright <newshound2 digitalmars.com> writes:
Adam Ruppe wrote:
 The Microsoft assembler used to have a whole bunch of weird macro
 capabilities and strange syntax. I hated it.
What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.
Aug 20 2010
dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Adam Ruppe wrote:
 The Microsoft assembler used to have a whole bunch of weird macro
 capabilities and strange syntax. I hated it.
What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.
How did you do this? Don't you lose some important stuff like label names in the translation? Instead of LSomeLabelName you get some raw, inscrutable hexadecimal number in your jump instructions.
Aug 20 2010
Adam Ruppe <destructionator gmail.com> writes:
On 8/20/10, dsimcha <dsimcha yahoo.com> wrote:
 How did you do this?  Don't you lose some important stuff like label names
 in the translation?
Yes, though a lot of label names aren't all that helpful in the first place. "done:" or worse yet, "L1:" don't help much. Those names are obvious from context anyway.
 Instead of LSomeLabelName you get some raw, inscrutable hexadecimal
 number in your jump instructions.
A lot of disassemblers generate a label name instead of giving the hex. obj2asm, for example, translates most jumps into Lxxx: labels.
Aug 20 2010
BCS <none anon.com> writes:
Hello Adam,

 Instead of LSomeLabelName you get some raw, inscrutable hexadecimal
 number in your jump instructions.
A lot of disassemblers generate a label name instead of giving the hex. obj2asm for example translates most jumps into Lxxx: labels.
that plus find/replace will get you a long way. -- ... <IXOYE><
Aug 21 2010
Walter Bright <newshound2 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Adam Ruppe wrote:
 The Microsoft assembler used to have a whole bunch of weird macro
 capabilities and strange syntax. I hated it.
What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.
How did you do this?
obj2asm foo.obj >foo.asm
 Don't you lose some important stuff like label names in the
 translation?  Instead of LSomeLabelName you get some raw, inscrutable
hexadecimal
 number in your jump instructions.
Sure, it might need a bit of tidying up by hand, but that was a lot easier than trying to spelunk through what those macros actually did.

I'm not the only one. I know a team at a large unnamed company that was faced with updating some legacy asm code whose original, long-gone programmers had gone to town inventing their own high-level macro language. Programmer after programmer gave up working on it, until one guy had no problem. He was asked how he worked with that mess, and he said no problem: he assembled it, obj2asm'd it, and that was the new source.
Aug 20 2010
"Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i4kjdp$2o9f$1 digitalmars.com...
 Nick Sabalausky wrote:
 Yea. If Java's design philosophy were a valid one, there would never have 
 been any reason to move beyond Altair-style programming (ie, entering 
 machine code (not asm) in binary, one byte at a time, via physical toggle 
 switches). You *can* do anything you need like that (It's 
 Turing-complete!).
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx

And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out

And some masochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556

Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Aug 19 2010
Adam Ruppe <destructionator gmail.com> writes:
On 8/19/10, Nick Sabalausky <a a.a> wrote:
 And Adam Ruppe did cgi in Asm:

 http://www.arsdnet.net/cgi-bin/a.out
Did I post that to this list, or did it find its way around the Internet on its own? I saw it randomly pop up on a Google search last year too, on a list I've never even heard of! The best part is it is mostly just a hello world...
Aug 19 2010
"Nick Sabalausky" <a a.a> writes:
"Adam Ruppe" <destructionator gmail.com> wrote in message 
news:mailman.383.1282266517.13841.digitalmars-d puremagic.com...
 On 8/19/10, Nick Sabalausky <a a.a> wrote:
 And Adam Ruppe did cgi in Asm:

 http://www.arsdnet.net/cgi-bin/a.out
Did I post that to this list, or did it find its way around the Internet on its own?
I honestly don't remember. All I know is whenever I did first see it, I created a saved IM away message about it. I remembered I had it there, went to get the link from it, and thought "Oh, hey, I recognize that domain!" :)
 I saw it randomly pop up on a Google search last
 year too, on a list I've never even heard of!
Funny how that happens sometimes. Back in college, a friend of mine was inspired by the Pokey The Penguin online comic ( http://www.yellow5.com/pokey/ ) and its deliberate MSPaint crappiness. So he created a "Poop and Friends" comic in a similar vein. It was deliberately stupid humor, although not gross-out stuff, despite the name. (It's no longer around in any form, and the wayback machine doesn't have any of the images: http://web.archive.org/web/20031118234038/http://www.poopandfriends.cjb.net/ ). But a few years after my friend started it, my brother was told by one of his friends "There's this site you have to see!" Turned out to be Poop and Friends.
Aug 19 2010
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Some guys are using a hotkey automation scripting language to
write/execute machine code:

http://www.autohotkey.com/forum/viewtopic.php?t=21172&postdays=0&postorder=asc&start=0

On Fri, Aug 20, 2010 at 3:05 AM, Nick Sabalausky <a a.a> wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i4kjdp$2o9f$1 digitalmars.com...
 Nick Sabalausky wrote:
 Yea. If Java's design philosophy were a valid one, there would never have
 been any reason to move beyond Altair-style programming (ie, entering
 machine code (not asm) in binary, one byte at a time, via physical toggle
 switches). You *can* do anything you need like that (It's
 Turing-complete!).
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
 I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some masochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Aug 19 2010
BCS <none anon.com> writes:
Hello Nick,

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i4kjdp$2o9f$1 digitalmars.com...
 
 Nick Sabalausky wrote:
 
 Yea. If Java's design philosophy were a valid one, there would never
 have been any reason to move beyond Altair-style programming (ie,
 entering machine code (not asm) in binary, one byte at a time, via
 physical toggle switches). You *can* do anything you need like that
 (It's Turing-complete!).
 
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
 I've seen high-precision PI calculation done in MS batch:
 http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx
 And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out
 And some masochist did a compile-time raytracer in C++:
 http://ompf.org/forum/viewtopic.php?t=1556
 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Um... does Boost fit in here? -- ... <IXOYE><
Aug 19 2010
"Nick Sabalausky" <a a.a> writes:
"BCS" <none anon.com> wrote in message 
news:a6268ff1a3d88cd0def4795927c news.digitalmars.com...
 Hello Nick,

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i4kjdp$2o9f$1 digitalmars.com...

 Yeah, and I've seen OOP done in C, and it works. It's just awful.
 I've even seen OOP done in assembler (Optlink!).
 I've seen high-precision PI calculation done in MS batch:
 http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx
 And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out
 And some masochist did a compile-time raytracer in C++:
 http://ompf.org/forum/viewtopic.php?t=1556
 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Um... does Boost fit in here?
Zing! :)
Aug 19 2010
Lutger <lutger.blijdestijn gmail.com> writes:
Nick Sabalausky wrote:

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i4kjdp$2o9f$1 digitalmars.com...
 Nick Sabalausky wrote:
 Yea. If Java's design philosophy were a valid one, there would never have
 been any reason to move beyond Altair-style programming (ie, entering
 machine code (not asm) in binary, one byte at a time, via physical toggle
 switches). You *can* do anything you need like that (It's
 Turing-complete!).
Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
 I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some masochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Don't forget the perl regex to check for a prime number:

perl -wle 'print "Prime" if (1 x shift) !~ /^1?$|^(11+?)\1+$/' [number]

http://montreal.pm.org/tech/neil_kandalgaonkar.shtml
Aug 20 2010
Walter Bright <newshound2 digitalmars.com> writes:
Leandro Lucarella wrote:
 I think in D you can do the same level of incremental compilation as in
 C/C++ but is not as natural. For one, in D is not natural to separate
 declarations from definitions, so a file in D tends to be dependent in
 *many* *many* other files because of excessive imports, so even when you
 can do separate compilation, unless you are *extremely* careful (much
 more than in C/C++ I think) you'll end up having to recompile the whole
 project even you change just one file because of the dependency madness.
That's why dmd can *automatically* generate .di files. But still, even writing .di files by hand cannot be any harder than writing a C++ .h file.
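(For illustration, a minimal sketch of that split, using a hypothetical module list.d; dmd -H generates the .di automatically, or it can be written by hand:)

  // list.d -- full implementation; recompiled when bodies change
  module list;
  struct List
  {
      private int[] data;
      void add(int x) { data ~= x; }
      size_t length() { return data.length; }
  }

  // list.di -- declarations only; clients compile against this and need
  // no recompiling when only the function bodies in list.d change
  // (the field stays because clients need the struct layout)
  module list;
  struct List
  {
      private int[] data;
      void add(int x);
      size_t length();
  }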
 I know you can do separate compilation as in C/C++ writing the
 declarations in a different file, or generating/using .di files, but
 also you'll probably end up using libraries that don't do that (as
 somebody mentioned for C++ + STL) and end up in a dependency madness
 anyway. It's just not natural to do so in D, it even encourages not
 doing it as one of the main advertised features is you don't have to
 separate declarations from definitions.
 
 And I'm not saying that is an easy to solve problem, I'm just saying
 that I agree D doesn't scale well in terms of incremental compilations
 for big projects, unless you go against D natural way on doing things.
In no case is it worse than C++, and as soon as you import a file more than once you're faster.
Aug 19 2010
Leandro Lucarella <luca llucax.com.ar> writes:
Walter Bright, on August 19 at 11:00, you wrote to me:
I know you can do separate compilation as in C/C++ writing the
declarations in a different file, or generating/using .di files, but
also you'll probably end up using libraries that don't do that (as
somebody mentioned for C++ + STL) and end up in a dependency madness
anyway. It's just not natural to do so in D, it even encourages not
doing it as one of the main advertised features is you don't have to
separate declarations from definitions.

And I'm not saying that is an easy to solve problem, I'm just saying
that I agree D doesn't scale well in terms of incremental compilations
for big projects, unless you go against D natural way on doing things.
In no case is it worse than C++, and as soon as you import a file more than once you're faster.
It's worse in the sense that you have the feeling that it's free in D, but it's not. In C++ you *have* to be careful, otherwise the compiler eats you. In D, by the time this starts to be significant, you already have a huge project.

And again, I agree that it might be a very reasonable trade-off, but that doesn't mean the problem doesn't exist. That's all. I'm not trying to convince anyone that C++ is better; I'm just saying that in C++ the problem is obvious, while in D it is much less visible, and you notice it *only* when your project is big enough and you *need* incremental compilation.

And I also know that DMD (and every DMD-based D compiler) can generate .di files. It would be really nice to have a -M option like GCC's that automatically writes Makefile dependencies. But that's another topic.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Sometimes I'd like to be a boat, to float the way I float being human,
and not sink the way I sink
Aug 19 2010
Eric Poggel <dnewsgroup2 yage3d.net> writes:
On 8/19/2010 11:13 AM, Leandro Lucarella wrote:
Andrei Alexandrescu, on August 19 at 08:50, you wrote to me:
 On 08/19/2010 07:48 AM, Eldar Insafutdinov wrote:
 I'll be doing a followup on why D compiles fast.
I will say the contrary. Compiling medium size projects doesn't matter in either language. But when the size of your project starts getting very big you will have troubles in D because there is no incremental compilation.
I'm a bit confused - how do you define incremental compilation? The build system can be easily set up to compile individual D files to object files, and then use the linker in a traditional manner.
I think in D you can do the same level of incremental compilation as in C/C++ but is not as natural. For one, in D is not natural to separate declarations from definitions, so a file in D tends to be dependent in *many* *many* other files because of excessive imports, so even when you can do separate compilation, unless you are *extremely* careful (much more than in C/C++ I think) you'll end up having to recompile the whole project even you change just one file because of the dependency madness. I know you can do separate compilation as in C/C++ writing the declarations in a different file, or generating/using .di files, but also you'll probably end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in a dependency madness anyway. It's just not natural to do so in D, it even encourages not doing it as one of the main advertised features is you don't have to separate declarations from definitions. And I'm not saying that is an easy to solve problem, I'm just saying that I agree D doesn't scale well in terms of incremental compilations for big projects, unless you go against D natural way on doing things.
I link my game engine (20kloc) with derelict, which is much larger. On my 5 year old laptop, it takes about 3-4 seconds to compile the engine, importing the non di'd derelict headers, and linking with the derelict lib. If I compile the whole lot, it takes about 30 seconds. Just wanted to share some real-world stats.
Aug 19 2010
bearophile <bearophileHUGS lycos.com> writes:
Adam Ruppe:

The whole point of writing assembly language is to see and write exactly what
the computer sees and executes.<
HLA allows you to have a 1:1 mapping, if you want. You can find answers here: http://webster.cs.ucr.edu/AsmTools/HLA/HLADoc/HTMLDoc/hlafaq.txt Look especially at the answers to questions 6 and 23.

Bye,
bearophile
Aug 20 2010
Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 HLA allows you to have a 1:1 mapping, if you want.
 You can find answers here:
 http://webster.cs.ucr.edu/AsmTools/HLA/HLADoc/HTMLDoc/hlafaq.txt
 Look especially at the answer to questions 6 and 23.
I found this amusing:

===============================================
6: q. Why is HLA necessary? What's wrong with MASM, TASM, GAS, or NASM? Do we really need another incompatible assembler out there?

a. HLA was written with two purposes in mind: The first was to provide a tool that makes it very easy (or, at least, easier) to teach assembly language programming to University students. Experiences at UCR bear out the success of HLA's design (even with prototype/alpha code with tons of bugs and little documentation, students are producing better projects than past courses that used MASM).
===============================================

because they weren't teaching assembler, they were teaching their Pascal-like embedded language. Of course that's easier than assembler, but it isn't assembler.
Aug 20 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with I think about 12 files that contain implementation. I estimate probably 5000 loc. -Steve
Aug 23 2010
Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with I think about 12 files that contain implementation. I estimate probably 5000 loc.
You can start with -v.
Aug 23 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 23 Aug 2010 12:44:50 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html

 I'll be doing a followup on why D compiles fast.
Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with I think about 12 files that contain implementation. I estimate probably 5000 loc.
You can start with -v.
I get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time. -Steve
Aug 23 2010
Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 I get a long list of functions proceeding at a reasonable rate.  I've 
 done that in the past, I feel it's some sort of inner loop problem.  
 Essentially, something takes way longer to compile than it should, but 
 way longer on the order of .05 seconds instead of .005 seconds, so you 
 don't notice it normally.  But somehow my library is able to harness 
 that deficiency and multiply by 1000.
 
 I don't know, it doesn't seem like dcollections should evoke such a long 
 compile time.
with or without -O ?
Aug 23 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 I get a long list of functions proceeding at a reasonable rate.  I've  
 done that in the past, I feel it's some sort of inner loop problem.   
 Essentially, something takes way longer to compile than it should, but  
 way longer on the order of .05 seconds instead of .005 seconds, so you  
 don't notice it normally.  But somehow my library is able to harness  
 that deficiency and multiply by 1000.
  I don't know, it doesn't seem like dcollections should evoke such a  
 long compile time.
with or without -O ?
The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main. -Steve
Aug 23 2010
Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Steven Schveighoffer wrote:
 I get a long list of functions proceeding at a reasonable rate.  I've 
 done that in the past, I feel it's some sort of inner loop problem.  
 Essentially, something takes way longer to compile than it should, 
 but way longer on the order of .05 seconds instead of .005 seconds, 
 so you don't notice it normally.  But somehow my library is able to 
 harness that deficiency and multiply by 1000.
  I don't know, it doesn't seem like dcollections should evoke such a 
 long compile time.
with or without -O ?
The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main.
You could try running dmd under a profiler, then.
Aug 23 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 23 Aug 2010 14:11:52 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 I get a long list of functions proceeding at a reasonable rate.  I've  
 done that in the past, I feel it's some sort of inner loop problem.   
 Essentially, something takes way longer to compile than it should,  
 but way longer on the order of .05 seconds instead of .005 seconds,  
 so you don't notice it normally.  But somehow my library is able to  
 harness that deficiency and multiply by 1000.
  I don't know, it doesn't seem like dcollections should evoke such a  
 long compile time.
with or without -O ?
The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main.
You could try running dmd under a profiler, then.
I recompiled dmd 2.047 with -pg added and with the COV options uncommented (not sure what all is needed). I then tried running my build script, and it took about 5 minutes for me to give up :)

So I reduced the build line to build just what is necessary to build a hash map. The compile line looks like this:

dmd -unittest unit_test.d dcollections/HashMap.d dcollections/Hash.d dcollections/Iterators.d dcollections/model/*

I don't think model/* is really needed, but I don't suspect there is too much code in there to compile; it's all interfaces, no unit tests. So without profiling, the compiler takes 4 seconds to compile this one file with unit tests.

With profiling enabled, gprof outputs this as the top hitters:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
77.76      6.68     6.68     2952     2.26     2.26  elf_findstr(Outbuffer*, char const*, char const*)
 2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
 1.28      6.97     0.11   663755     0.00     0.00  ScopeDsymbol::search(Loc, Identifier*, int)
 1.05      7.06     0.09  2623497     0.00     0.00  isType(Object*)
 0.76      7.12     0.07   911667     0.00     0.00  match(Object*, Object*, TemplateDeclaration*, Scope*)
 0.76      7.19     0.07   656268     0.00     0.00  _aaGetRvalue(AA*, void*)
 0.58      7.24     0.05  2507041     0.00     0.00  isTuple(Object*)
 0.52      7.29     0.04  2548939     0.00     0.00  isExpression(Object*)
 0.47      7.33     0.04    10124     0.00     0.01  ClassDeclaration::search(Loc, Identifier*, int)
 0.35      7.36     0.03   136688     0.00     0.00  StringTable::search(char const*, unsigned int)
 0.35      7.38     0.03   122998     0.00     0.00  Scope::search(Loc, Identifier*, Dsymbol**)
 0.35      7.42     0.03    79912     0.00     0.00  Parameter::dim(Parameters*)
 0.35      7.45     0.03    43500     0.00     0.00  AliasDeclaration::semantic(Scope*)
 0.35      7.47     0.03    26358     0.00     0.01  TemplateInstance::semantic(Scope*, Expressions*)
 0.29      7.50     0.03  2537875     0.00     0.00  isDsymbol(Object*)
 0.23      7.52     0.02  4974808     0.00     0.00  Tuple::dyncast()
 0.23      7.54     0.02  4843755     0.00     0.00  Type::dyncast()
 0.23      7.56     0.02  1243524     0.00     0.00  operator new(unsigned int)
 0.23      7.58     0.02   904514     0.00     0.00  arrayObjectMatch(Objects*, Objects*, TemplateDeclaration*, Scope*)
 0.23      7.60     0.02   365820     0.00     0.00  speller_test(void*, char const*)
 0.23      7.62     0.02   285816     0.00     0.00  Array::reserve(unsigned int)
 0.23      7.64     0.02   271143     0.00     0.00  calccodsize
 0.23      7.66     0.02   149682     0.00     0.00  Dchar::calcHash(char const*, unsigned int)
 0.23      7.68     0.02    73379     0.00     0.00  TypeBasic::size(Loc)
 0.23      7.70     0.02    39394     0.00     0.00  DsymbolExp::semantic(Scope*)
 0.23      7.72     0.02    20885     0.00     0.00  TemplateInstance::semanticTiargs(Loc, Scope*, Objects*, int)
 0.23      7.74     0.02    11877     0.00     0.00  TemplateDeclaration::deduceFunctionTemplateMatch(Scope*, Loc, Objects*, Expression*, Expressions*, Objects*)
 0.23      7.76     0.02     5442     0.00     0.01  optelem(elem*, int)
 0.23      7.78     0.02                             __i686.get_pc_thunk.bx
 0.12      7.79     0.01  1458990     0.00     0.00  Object::Object()
 0.12      7.80     0.01   656266     0.00     0.00  DsymbolTable::lookup(Identifier*)
 0.12      7.81     0.01   462797     0.00     0.00  Module::search(Loc, Identifier*, int)
 0.12      7.82     0.01   414377     0.00     0.00  Dsymbol::isTemplateInstance()
 0.12      7.83     0.01   354954     0.00     0.00  Expression::Expression(Loc, TOK, int)
 0.12      7.84     0.01   354693     0.00     0.00  Dsymbol::pastMixin()
 0.12      7.85     0.01   167119     0.00     0.00  Dsymbol::checkDeprecated(Loc, Scope*)
 0.12      7.86     0.01   151694     0.00     0.00  Type::merge()
 0.12      7.87     0.01   123694     0.00     0.00  Lstring::toDchars()
 0.12      7.88     0.01   111982     0.00     0.00  el_calloc()
 0.12      7.89     0.01   111569     0.00     0.00  resolveProperties(Scope*, Expression*)
 0.12      7.90     0.01   107359     0.00     0.00  code_calloc
 0.12      7.91     0.01   106932     0.00     0.00  Lexer::peek(Token*)
 0.12      7.92     0.01   106468     0.00     0.00  Scope::pop()
 0.12      7.93     0.01    99136     0.00     0.00  Array::push(void*)
...

I can add more, but I have no idea what part of this is important for diagnosing the problem. From a naive look, it appears that elf_findstr is the problem (only 3k calls, but uses almost 80% of the runtime?), but I have no idea how to interpret this, and I don't know what the compiler does.

The compiler eventually ended up not producing an exe, with the message "cannot find ld", but I don't think the link step is where the problem is anyway.

If you need more data, or want me to run something else, I can.

-Steve
Aug 23 2010
Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
 
 
 Flat profile:
 
 Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26  
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
Aug 24 2010
bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 elf_findstr definitely looks like a problem area. I can't look at it
 right now, so can you post this to bugzilla please?
I am able to find two versions of elf_findstr, one in elfobj.c and one in machobj.c, so it may be possible to remove one of them. Its docstring doesn't seem to show the 'suffix' argument.

I have seen that it performs strlen() of str and suffix at the beginning, so using fat pointers (C structs that keep ptr + len), as D does, may be enough to avoid those strlen calls and save some time.

From what I see, it seems to perform a linear search inside an Outbuffer, something like a search of strtab~strs inside an array of strings, so the structure may be replaced by an associative-set or ordered-set lookup instead.

Bye,
bearophile
Aug 24 2010
Jacob Carlborg <doob me.com> writes:
On 2010-08-24 12:25, bearophile wrote:
 Walter Bright:
 elf_findstr definitely looks like a problem area. I can't look at it right now,
 so can you post this to bugzilla please?
I am able to find two versions of elf_findstr, one in elfobj.c and one in machobj.c, so it may be possible to remove one of them.
As the files indicate, elfobj.c is for generating ELF (Linux) object files and machobj.c is for generating Mach-O (OS X) object files; both are needed. I guess he uses the same name for the functions to have a uniform interface, so there's no need to change the code on the calling side.
 Its docstring doesn't seem to show the 'suffix' argument.

 I have seen that it performs strlen() of str and suffix at the beginning, so
using fat pointers (C structs that keep ptr + len) as D may be enough to avoid
those strlen calls and save some time.

  From what I see it seems to perform a linear search inside an Outbuffer,
something like a search of strtab~strs inside an array of strings, so the
structure may be replaced by an associative set or ordered set lookup instead.

 Bye,
 bearophile
-- /Jacob Carlborg
Aug 24 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
   Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26   
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721 -Steve
Aug 24 2010
Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
   Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26  
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Also, putting a printf in elf_findstr to print its arguments will be helpful.
Aug 24 2010
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Aug 2010 14:31:26 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
   Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26   
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Also, putting a printf in elf_findstr to print its arguments will be helpful.
Through some more work with printf, I have to agree with bearophile, this lookup function is horrid. I think it's supposed to look for a symbol in the symbol table, but it uses a linear search through all symbols to find it. Not only that, but the table is stored in one giant buffer, so once it finds that the current symbol it's checking against doesn't match, it still has to loop through the remaining characters of the unmatched symbol to find the next 0 byte.

I added a simple running printout of how many times the function has been called, along with how large the symbol table has grown. The code is as follows:

static IDXSTR elf_findstr(Outbuffer *strtab, const char *str, const char *suffix)
{
+   static int ncalls = 0;
+   ncalls++;
+   printf("\r%d\t%d", ncalls, strtab->size());
+   fflush(stdout);
    const char *ent = (char *)strtab->buf+1;
    const char *pend = ent+strtab->size() - 1;

At the end, the symbol table is over 4 million characters and the number of calls is 12677. You can watch it slow down noticeably. I also added some code to count the number of times a symbol is matched -- 648, so about 5% of the time. This means that 95% of the time, the whole table is searched. If you multiply those factors together, and take into account the nature of how it grows, you have probably 20 billion loop iterations. Whereas a hash table would probably be much faster. I'm thinking a correct compilation time should be on the order of 3-4 seconds vs. the 67 seconds it now takes.

I am not sure how to fix it, but that's the gist of it. I think the symbol table is so large because of the template proliferation of dcollections, and the verbosity of D symbol names.
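As a sanity check on that figure, a back-of-envelope calculation (rough assumptions: the table grows roughly linearly, so the average scan length is about half the final 4 million characters, and ~95% of the calls scan the whole table):

import std.stdio;

void main()
{
    double calls    = 12_677;
    double avgLen   = 4_000_000 / 2.0; // assume the average scan is ~half the final size
    double missRate = 0.95;            // fraction of calls that scan the whole table
    writefln("~%.0f billion character visits",
             calls * avgLen * missRate / 1e9); // prints ~24, same ballpark
}

-Steve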
Aug 24 2010
next sibling parent reply Mafi <mafi example.org> writes:
Am 24.08.2010 22:56, schrieb Steven Schveighoffer:

 I am not sure how to fix it, but that's the gist of it.  I think the
 symbol table is so large because of the template proliferation of
 dcollections, and the verbosity of D symbol names.
Why are D's symbols so verbose? If I understood you correctly, dmd performs a linear search no matter whether I use foo or ArrayOutOfBoundsException (that's a real Java exception).
Aug 24 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Aug 2010 17:05:30 -0400, Mafi <mafi example.org> wrote:

 Am 24.08.2010 22:56, schrieb Steven Schveighoffer:

 I am not sure how to fix it, but that's the gist of it.  I think the
 symbol table is so large because of the template proliferation of
 dcollections, and the verbosity of D symbol names.
Why are D's symbols verbose? if I understood you corectly, dmd makes a linear search no matter if i used foo or ArrayOutOfBoundsException (that's a real Java exception).
A symbol includes the module name, and the mangled version of the function argument types, which could be class/struct names, plus any template info associated with it. For example, foo(HashSet!int hs) inside the module testme becomes:

_D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv
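You can see the mangling for yourself with the .mangleof property; a quick sketch (the mangled name in the comment is what I'd expect per the ABI):

module testme;

void foo(int x) {}

// prints the mangled symbol at compile time; per the D ABI this should
// come out as _D6testme3fooFiZv (module testme, function foo, one int
// parameter, void return)
pragma(msg, foo.mangleof);

-Steve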
Aug 24 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:
 For example, foo(HashSet!int hs) inside the module testme becomes:
 _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv
And I think some more things need to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
Aug 24 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Tuesday, August 24, 2010 14:37:09 bearophile wrote:
 Steven Schveighoffer:
 For example, foo(HashSet!int hs) inside the module testme becomes:
 _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv
And I think some more things need to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
They probably aren't there because
1. They have nothing to do with overrideability.
2. They have nothing to do with C linking.
Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer? - Jonathan M Davis
Aug 24 2010
next sibling parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Tue, 24 Aug 2010 23:53:44 +0200, Jonathan M Davis  
<jmdavisprog gmail.com> wrote:

 On Tuesday, August 24, 2010 14:37:09 bearophile wrote:
 Steven Schveighoffer:
 For example, foo(HashSet!int hs) inside the module testme becomes:
 _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv
And I think some more things need to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
They probably aren't there because
1. They have nothing to do with overrideability.
2. They have nothing to do with C linking.
Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer?
Pure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure (dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice.
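By hand, the kind of caching I mean would look like this (just a sketch of what the optimizer could legally do, not something dmd does today as far as I know):

pure int expensive(int x)
{
    int r;
    foreach (i; 0 .. 1_000_000)
        r += (x * i) % 7;
    return r;
}

void main()
{
    // expensive is pure, so both uses below must yield the same value;
    // the compiler could compute it once and reuse the result
    immutable r = expensive(42);
    immutable total = r + r;   // instead of expensive(42) + expensive(42)
}

--
Simen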
Aug 24 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Aug 2010 18:00:30 -0400, Simen kjaeraas  
<simen.kjaras gmail.com> wrote:

 On Tue, 24 Aug 2010 23:53:44 +0200, Jonathan M Davis  
 <jmdavisprog gmail.com> wrote:

 On Tuesday, August 24, 2010 14:37:09 bearophile wrote:
 Steven Schveighoffer:
 For example, foo(HashSet!int hs) inside the module testme becomes:
 _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv
And I think some more things need to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
They probably aren't there because
1. They have nothing to do with overrideability.
2. They have nothing to do with C linking.
Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer?
Pure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure(dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice.
These are decisions made at the compilation stage, not the linking stage. LDC I think does some link optimization, so it might make sense there, but I'm not sure. -Steve
Aug 25 2010
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Steven Schveighoffer <schveiguy yahoo.com> wrote:

 Pure might be worth stuffing in the symbol name, as the compiler may
 optimize things differently for pure vs. non-pure(dirty?) code.
 E.g. the result of a large, pure function that takes a while to compute
 might be cached to prevent calling it twice.
These are decisions made at the compilation stage, not the linking stage.
Absolutely. Now, you compile your module that uses a pure function foo in another module, and the above optimization is used. Later, that module is changed, and foo is changed to depend on some global state, and is thus no longer pure. After compiling this one module, you link your project, and the cached value is wrong, and boom! Nasal demons. -- Simen
Aug 25 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 25 Aug 2010 11:36:24 -0400, Simen kjaeraas  
<simen.kjaras gmail.com> wrote:

 Steven Schveighoffer <schveiguy yahoo.com> wrote:

 Pure might be worth stuffing in the symbol name, as the compiler may
 optimize things differently for pure vs. non-pure(dirty?) code.
 E.g. the result of a large, pure function that takes a while to compute
 might be cached to prevent calling it twice.
These are decisions made at the compilation stage, not the linking stage.
Absolutely. Now, you compile your module that uses a pure function foo in another module, and the above optimization is used. Later, that module is changed, and foo is changed to depend on some global state, and is thus no longer pure. After compiling this one module, you link your project, and the cached value is wrong, and boom! Nasal demons.
You could say the same about just about any function. Changing an implementation can be a bad source of stale-object errors; I've had it happen many times in C++ without pure involved at all. Moral is, always recompile everything :) My point is just that name mangling was done to allow overloaded functions of the same name to be linked by a linker that doesn't understand overloading. If pure functions cannot be overloaded on purity alone, then there's no reason to mangle purity into the symbol. But it's a moot point, since purity *is* mangled into the symbol name.
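To illustrate the overloading point, two overloads and the mangles I'd expect per the ABI (module name made up):

module over;

// one source name, two distinct symbols -- the linker never needs to
// understand overloading, it just sees two different mangled names
void foo(int x)    {}   // mangles to _D4over3fooFiZv
void foo(double x) {}   // mangles to _D4over3fooFdZv

-Steve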
Aug 25 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 But it's a moot point, since purity *is* mangled into the symbol name.
Yes, that's done because the caller of a function may depend on that function's purity. Changing the name mangling when purity changes will ensure that the caller gets recompiled as well.
Aug 25 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jonathan M Davis:
 They probably aren't there because
 ...
In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
Aug 24 2010
parent reply Jacob Carlborg <doob me.com> writes:
On 2010-08-25 02:38, bearophile wrote:
 Jonathan M Davis:
 They probably aren't there because
 ...
In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this:

FuncAttrPure:
     Na

--
/Jacob Carlborg
Aug 25 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Jacob Carlborg:
 According to the ABI pure should already be in the mangled name (don't 
 know if dmd follows that though). The mangled form looks like this:
 
 FuncAttrPure:
      Na
Yes, it's there:

import std.c.stdio: printf;

int function1(int x) { return x * 2; }
pure int function2(int x) { return x * 2; }

void main() {
    printf("%d\n", function1(10));
    printf("%d\n", function2(10));
}

_D5test29function1FiZi comdat
        enter   4,0
        add     EAX,EAX
        leave
        ret

_D5test29function2FNaiZi comdat
        assume  CS:_D5test29function2FNaiZi
        enter   4,0
        add     EAX,EAX
        leave
        ret

Bye,
bearophile
Aug 25 2010
prev sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Wednesday, August 25, 2010 00:42:51 Jacob Carlborg wrote:
 On 2010-08-25 02:38, bearophile wrote:
 Jonathan M Davis:
 They probably aren't there because
 ...
In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this:

FuncAttrPure:
     Na
So, sodium is pure huh. :) - Jonathan M Davis
Aug 25 2010
parent reply Justin Johansson <no spam.com> writes:
On 26/08/10 02:10, Jonathan M Davis wrote:
 On Wednesday, August 25, 2010 00:42:51 Jacob Carlborg wrote:
 On 2010-08-25 02:38, bearophile wrote:
 Jonathan M Davis:
 They probably aren't there because
 ...
In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this:

FuncAttrPure:
     Na
So, sodium is pure huh. :) - Jonathan M Davis
And natrium also? :-)
Aug 25 2010
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Justin Johansson <no spam.com> wrote:
 FuncAttrPure:
       Na
So, sodium is pure huh. :) - Jonathan M Davis
And natrium also? :-)
Natrium and sodium are the same. -- Simen
Aug 25 2010
parent Justin Johansson <no spam.com> writes:
On 26/08/10 02:35, Simen kjaeraas wrote:
 Justin Johansson <no spam.com> wrote:
 FuncAttrPure:
 Na
So, sodium is pure huh. :) - Jonathan M Davis
And natrium also? :-)
Natrium and sodium are the same.
Of course! Just a bit of tautological silliness on my part. :-)
Aug 25 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 Through some more work with printf, I have to agree with bearophile, 
 this lookup function is horrid.
It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
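The shape of the fix is something like this sketch (illustrative D, not the actual patch):

struct StrTab
{
    uint[string] index;   // string -> offset of its entry in the buffer
    char[] buf;           // the same giant buffer as before

    // return the existing offset, or append the string and remember it
    uint intern(string s)
    {
        if (buf.length == 0)
            buf ~= '\0';              // offset 0 reserved, as in the real table
        if (auto p = s in index)
            return *p;                // one hash probe instead of a linear scan
        auto off = cast(uint) buf.length;
        buf ~= s;
        buf ~= '\0';
        index[s] = off;
        return off;
    }
}

Lookups that used to walk the whole multi-megabyte buffer become a single hash probe, and the buffer itself stays append-only.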
Aug 24 2010
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 Through some more work with printf, I have to agree with bearophile,
 this lookup function is horrid.
It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
Wow, now it's really hit home for me how much programming languages and libraries have advanced in the past 20 years. Nowadays any reasonable person would generally use a hash table even for small N because it's not any harder to code. Any modern language worth its salt comes with one either built in or in the standard lib. I guess 20 years ago this wasn't so.
Aug 24 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Aug 2010 18:00:32 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Through some more work with printf, I have to agree with bearophile,  
 this lookup function is horrid.
It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix. -Steve
Aug 25 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.
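With dmd, both steps are one command each. A minimal sketch (a made-up test program; the commands in the comments assume a typical Linux setup):

// test.d -- a trivial victim for both steps
// 1. profile:   dmd -profile test.d && ./test && cat trace.log
// 2. read asm:  dmd -c test.d && obj2asm test.o   (objdump -d works too)
import std.stdio;

int hot(int x) { return x * x; }

void main()
{
    int s;
    foreach (i; 0 .. 10_000_000)
        s += hot(i);
    writeln(s);   // keep the loop from being optimized away
}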
Aug 25 2010
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.
I think you overestimate the number of programmers who can read assembler nowadays. FWIW I only learned when I posted a bunch of stuff here about various performance issues and you kept asking me to read the disassembly. In hindsight it was well worth it, though. I think reading assembly language and understanding the gist of how things work at that level is still an important skill for modern programmers. While writing assembly is notoriously hard (I've never even tried for anything non-trivial), reading it is a heck of a lot easier to pick up. I went from zero to basically literate in a few evenings.
Aug 25 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
dsimcha wrote:
 I think you overestimate the amount of programmers that can read assembler
 nowadays.
The thing is, you *don't* need to be able to read assembler in order to make sense of the assembler output! For example, if:

f();

is in the source code, you don't need to know much assembler to see if it's generating one instruction or a hundred.
 FWIW I only learned when I posted a bunch of stuff here about various
 performance issues and you kept asking me to read the disassembly.  In
hindsight
 it was well worth it, though.  I think reading assembly language and
understanding
 the gist of how things work at that level is still an important skill for
modern
 programmers.  While writing assembly is notoriously hard (I've never even tried
 for anything non-trivial), reading it is a heck of a lot easier to pick up.  I
 went from zero to basically literate in a few evenings.
Right, assembler isn't hard to read after you spend a few moments with it. After all,

MOV EAX,3

is hardly rocket science!
Aug 25 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i53ucl$22nt$1 digitalmars.com...
 Right, assembler isn't hard to read after you spend a few moments with it. 
 After all,

 MOV EAX,3

 is hardly rocket science!
Heh, funny thing about difficulty is how relative it can be. I've heard people who do rocketry say that rocket science really isn't as hard as people think. But programming doesn't come very naturally to most people, either. It would be funny to hear one rocket scientist say to another rocket scientist, "Oh come on, it's not computer programming!"
Aug 25 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 is hardly rocket science!
Heh, funny thing about difficulty is how relative it can be. I've heard people who do rocketry say that rocket science really isn't as hard as people think. But programming doesn't come very naturally to most people, either. It would be funny to hear one rocket scientist say to another rocket scientist, "Oh come on, it's not computer programming!"
Doing amateur rocketry isn't that hard; the formulas are simple, and the more complex stuff (the engines) is available off the shelf. It isn't even that hard to build your own engines. The harder stuff is when you put a man on top of it and try to make it reliable.
Aug 25 2010
prev sibling parent BCS <none anon.com> writes:
Hello dsimcha,

 FWIW I only learned when I posted a bunch of stuff here about various
 performance issues and you kept asking me to read the disassembly. In
 hindsight it was well worth it, though.
 
I still think CS-101 should be in ASM. It would give people a better understanding of what really happens, as well as weed out the total incompetents. OTOH I think CS-102 should be in Scheme or one of its ilk, to teach how the theory works independent of the machine. :) -- ... <IXOYE><
Aug 25 2010
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two...
You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :) -Steve
Aug 25 2010
next sibling parent reply retard <re tard.com.invalid> writes:
Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:

 On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:
 
 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two...
You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)
He forgot: 0. use a better algorithm (the big O notation matters, like in this case)
Aug 25 2010
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:
 On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two...
You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)
He forgot: 0. use a better algorithm (the big O notation matters, like in this case)
Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?
Aug 25 2010
parent reply retard <re tard.com.invalid> writes:
Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:

 == Quote from retard (re tard.com.invalid)'s article
 Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:
 On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two...
You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)
He forgot: 0. use a better algorithm (the big O notation matters, like in this case)
Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?
Test-driven development, automatic testing tools, common sense? Sometimes the profiler's output is too fine-grained.
Aug 25 2010
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 25 Aug 2010 15:11:17 -0400, retard <re tard.com.invalid> wrote:

 Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:

 == Quote from retard (re tard.com.invalid)'s article
 Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:
 On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two...
You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)
He forgot: 0. use a better algorithm (the big O notation matters, like in this case)
Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?
Test-driven development, automatic testing tools, common sense? Sometimes the profiler's output is too fine-grained.
On the contrary, this was one of those bugs that you almost need a profiler for. Consider that after over 10 years of D compilers, nobody found this deficiency until my little library came along. And even then, it's hard to say there actually *is* a problem: the compiler runs and outputs valid code, and if you use the -v switch it's continuously doing things. Even when you profile it, you can see that the errant function only consumes small chunks of time, but they add up to an unacceptable level. Test-driven development is only useful if you have certain criteria you expect to achieve. How do you define how fast the compiler *should* run until you run it? It's a very complex piece of software where performance is secondary to correctness. I can understand not having touched code that outputs an object format for 20 years. I don't regularly go through my code looking for opportunities to increase big-O performance. I'm just glad it's been found and will be fixed. -Steve
Aug 25 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:
 Yeah, but unless you use a profiler, how are you going to find those
 spots where N isn't as small as you thought it would be?
 Test-driven development, automatic testing tools,
Neither of those are designed to find bottlenecks, and I've never seen one that could. Besides, why avoid a tool that is *designed* to find bottlenecks, like a profiler?
 common sense?
Is not a substitute for measurement. Like I alluded to, I've seen lots of programmers using common sense to optimize the wrong part of the program, and failing to get useful results. Yes, I've had them *insist* to me (to the point of yelling) that that's where the bottlenecks were, until I ran the profiler on their code and showed them otherwise.
 Sometimes the profiler's output is too fine-grained.
There are many different profilers, with all kinds of different approaches. Some high level, some at the instruction level, some free, some for pay. All of them are cheaper than spending hundreds of hours optimizing the wrong part of the code.
Aug 25 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
retard:
 0. use a better algorithm (the big O notation matters, like in this case)
This is a big mistake, because:

- Optimizing before you know what to optimize is premature optimization. The profiler is one of the best tools to find what to optimize.
- Often data structures and algorithms are a trade-off between different needs. So "better" is not absolute, it's problem-specific, and the profiler helps to find such specific problems.

And regarding the problem of searching in a sequence of items, if the sequence is small (probably up to 10 or 20 items if the items are integers, the language is a low-level one, and the associative array is not very efficient), a linear search or a binary search is often faster.
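The reason is constant factors: for a handful of elements the linear scan is a few compares over data already in cache, while a hash lookup pays for hashing and indirection first. A trivial sketch (the thresholds are guesses, measure before trusting them):

bool linearFind(const(int)[] small, int key)
{
    foreach (v; small)
        if (v == key)
            return true;   // for ~10-20 ints this can beat a hash lookup
    return false;
}

---------------

dsimcha: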
While writing assembly is notoriously hard (I've never even tried for anything
non-trivial), 
Using certain Java frameworks well is harder :-) Bye, bearophile
Aug 25 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 And regarding the problem of searching in a sequence of items, if the
 sequence is small (probably up to 10 or 20 if the items are integers, the
 language is a low level one and the associative array is not very efficient),
 a linear search or a binary search is often faster.
Yup, and that piece of code was written at a time when there were very few items added to the string table. It never showed up on the radar before.
Aug 25 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 bearophile wrote:
 And regarding the problem of searching in a sequence of items, if the
 sequence is small (probably up to 10 or 20 if the items are integers, the
 language is a low level one and the associative array is not very efficient),
 a linear search or a binary search is often faster.
Yup, and that piece of code was written in a time where there were very few items added into the string table. It never showed up on the radar before.
I wonder how much of the compile time of more typical projects is taken up by this linear search. Could it be that that's also why std.stdio compiles relatively slow? It's a big module that does a lot of template instantiations. If this silly bug was a bottleneck everywhere, then I'd love to see D vs. Go compile times after this gets fixed.
Aug 25 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
dsimcha wrote:
 I wonder how much of the compile time of more typical projects is taken up by
this
 linear search.  Could it be that that's also why std.stdio compiles relatively
 slow?  It's a big module that does a lot of template instantiations.  If this
 silly bug was a bottleneck everywhere, then I'd love to see D vs. Go compile
times
 after this gets fixed.
It could very well be the source of these issues.
Aug 25 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 He forgot:
 
 0. use a better algorithm (the big O notation matters, like in this case)
No, I didn't forget that. There's no benefit to using a better algorithm in the code that isn't the bottleneck. In my experience, even very experienced developers are nearly always wrong about where the bottlenecks are if they've never used a profiler.
Aug 25 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 You mean like asking someone who reported low performance of your 
 program on the newsgroup to do it for you? :)
1. He had the test case, I didn't. 2. People have repeatedly suggested I delegate some of the compiler work. Why not?
Aug 25 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 25 Aug 2010 16:29:06 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 You mean like asking someone who reported low performance of your  
 program on the newsgroup to do it for you? :)
1. He had the test case, I didn't.
He == me :) The test case was available as a tarball download at www.dsource.org/projects/dcollections. Not that I mind doing the dirty work, if it gets results, but asking someone to compile your product in a different way and then asking them to try and analyze the output of *your* program isn't the best way to get results. If I told Microsoft that Word was crashing on a document it made, and they responded by sending me the source code for Word and said "You have the test case, so you figure it out" I don't think people would like them very much. I have had this problem for months, and haven't really pushed it except for snide remarks until recently, when I figured if I didn't do it, nobody would. I understand the lack of time, and that was why I did the work, but I didn't really expect to get results.
 2. People have repeatedly suggested I delegate some of the compiler  
 work. Why not?
What I've done hardly qualifies as doing compiler work. I just helped identify the problem :) I hope you plan on fixing it, I can't. -Steve
Aug 25 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Wed, 25 Aug 2010 16:29:06 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Steven Schveighoffer wrote:
 You mean like asking someone who reported low performance of your 
 program on the newsgroup to do it for you? :)
1. He had the test case, I didn't.
He == me :) The test case was available as a tarball download at www.dsource.org/projects/dcollections. Not that I mind doing the dirty work, if it gets results, but asking someone to compile your product in a different way and then asking them to try and analyze the output of *your* program isn't the best way to get results. If I told Microsoft that Word was crashing on a document it made, and they responded by sending me the source code for Word and said "You have the test case, so you figure it out" I don't think people would like them very much. I have had this problem for months, and haven't really pushed it except for snide remarks until recently, when I figured if I didn't do it, nobody would. I understand the lack of time, and that was why I did the work, but I didn't really expect to get results.
I hope that you enjoyed doing this, and I hope to make building the compiler an easy thing for users to do, if they are so inclined. I also wanted to push the issue of using a profiler <g>.
 2. People have repeatedly suggested I delegate some of the compiler 
 work. Why not?
What I've done hardly qualifies as doing compiler work. I just helped identify the problem :) I hope you plan on fixing it, I can't.
Yes, I'll fix it.
Aug 25 2010
prev sibling parent reply Era Scarecrow <rtcvb32 yahoo.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.
There are also those who are not programmers, and don't know what they are doing in the first place. A couple of years back I was hired as part of a 'rural out-sourcing' experiment where they took people in the local area who had _some_ technical potential. They would then be hired out cheaper than experienced programmers. We went through a 14-week Java boot camp. Out of 24, I was the only one who knew anything about programming. Through the course they weren't told anything about profiling, looking at assembly language, or using a debugger. They were taught the absolute minimum. I watched several of them: when the program wouldn't compile or work right, they would randomly make changes trying to get the code to work. Be afraid. Be very afraid.
Aug 25 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Era Scarecrow" <rtcvb32 yahoo.com> wrote in message 
news:i54qi9$1d2g$1 digitalmars.com...
 == Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 Just goes to show how useful a profiler is.
Yes, I'm glad you pushed me to do it. Looking forward to the fix.
The two secrets to writing fast code are:
1. using a profiler
2. looking at the assembler output of the compiler
In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.
There are also those who are not programmers, and don't know what they are doing in the first place. A couple of years back I was hired as part of a 'rural out-sourcing' experiment where they took people in the local area who had _some_ technical potential. They would then be hired out cheaper than experienced programmers. We went through a 14-week Java boot camp. Out of 24, I was the only one who knew anything about programming. Through the course they weren't told anything about profiling, looking at assembly language, or using a debugger. They were taught the absolute minimum. I watched several of them: when the program wouldn't compile or work right, they would randomly make changes trying to get the code to work. Be afraid. Be very afraid.
From what I've seen, you get essentially the same results from most HR depts. The worst applicants always seem to look the best to the HR folks and vice versa.
Aug 25 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Era Scarecrow wrote:
 == Quote from Walter Bright (newshound2 digitalmars.com)'s article
 The two secrets to writing fast code are:
 1. using a profiler
 2. looking at the assembler output of the compiler
 In my experience, programmers will go to astonishing lengths to avoid doing
 those two, and will correspondingly expend hundreds of hours "optimizing" and
 getting perplexing results.
There are also those who are not programmers, and don't know what they are doing in the first place.
Sure, but my advice is directed at the people who *do* know what they are doing, but are avoiding using a profiler and looking at the assembly output.
Aug 25 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
   Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26  
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Aug 25 2010
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2010-08-26 08:13, Walter Bright wrote:
 Steven Schveighoffer wrote:
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
 Flat profile:
 Each sample counts as 0.01 seconds.
 % cumulative self self total
 time seconds seconds calls ms/call ms/call name
 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char const*,
 char const*)
 2.10 6.86 0.18 4342 0.04 0.04 searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Shouldn't machobj.c get the same optimization? -- /Jacob Carlborg
Aug 26 2010
parent reply BCS <none anon.com> writes:
Hello Jacob,

 On 2010-08-26 08:13, Walter Bright wrote:
 
 Steven Schveighoffer wrote:
 
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:
 Steven Schveighoffer wrote:
 
 With profiling enabled, gprof outputs this as the top hitters:
 Flat profile:
 Each sample counts as 0.01 seconds.
 % cumulative self self total
 time seconds seconds calls ms/call ms/call name
 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char
 const*,
 char const*)
 2.10 6.86 0.18 4342 0.04 0.04 searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Shouldn't machobj.c get the same optimization?
Shouldn't something like a table lookup be shared rather than duplicated? -- ... <IXOYE><
Aug 26 2010
parent Jacob Carlborg <doob me.com> writes:
On 2010-08-26 16:14, BCS wrote:
 Hello Jacob,

 On 2010-08-26 08:13, Walter Bright wrote:

 Steven Schveighoffer wrote:

 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:
 Steven Schveighoffer wrote:

 With profiling enabled, gprof outputs this as the top hitters:
 Flat profile:
 Each sample counts as 0.01 seconds.
 % cumulative self self total
 time seconds seconds calls ms/call ms/call name
 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char
 const*,
 char const*)
 2.10 6.86 0.18 4342 0.04 0.04 searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Shouldn't machobj.c get the same optimization?
Shouldn't something like a table lookup be shared rather than duplicated?
Yes, that would be better. -- /Jacob Carlborg
Aug 27 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 26 Aug 2010 02:13:34 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 With profiling enabled, gprof outputs this as the top hitters:
   Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  77.76      6.68     6.68     2952     2.26     2.26   
 elf_findstr(Outbuffer*, char const*, char const*)
   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
http://d.puremagic.com/issues/show_bug.cgi?id=4721
Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Better, now takes 20 seconds vs over 60. The new culprit: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 75.79 6.51 6.51 8103 0.80 0.80 TemplateDeclaration::toJsonBuffer(OutBuffer*) 3.14 6.78 0.27 1668093 0.00 0.00 StructDeclaration::semantic(Scope*) 2.10 6.96 0.18 1 180.00 180.00 do32bit(FL, evc*, int) 1.98 7.13 0.17 15445 0.01 0.01 EnumDeclaration::toJsonBuffer(OutBuffer*) 0.70 7.19 0.06 656268 0.00 0.00 Port::isSignallingNan(long double) 0.47 7.23 0.04 915560 0.00 0.00 StructDeclaration::toCBuffer(OutBuffer*, HdrGenState*) 0.47 7.27 0.04 Dsymbol::searchX(Loc, Scope*, Identifier*) I haven't looked at toJsonBuffer at all (btw, why are we calling this function if I'm not outputting json?) -Steve
Aug 26 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:
 I haven't looked at toJsonBuffer at all (btw, why are we calling this  
 function if I'm not outputting json?)
Fit for a new bugzilla entry? Bye, bearophile
Aug 26 2010
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 26 Aug 2010 08:36:44 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Steven Schveighoffer:
 I haven't looked at toJsonBuffer at all (btw, why are we calling this
 function if I'm not outputting json?)
Fit for a new bugzilla entry?
I'll just put it into the same report, and let Walter decide if it's still a bug. I am less than ignorant when it comes to compiler innards. -Steve
Aug 26 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 Better, now takes 20 seconds vs over 60.  The new culprit:
 
 Flat profile:
 
 Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  75.79      6.51     6.51     8103     0.80     0.80  
 TemplateDeclaration::toJsonBuffer(OutBuffer*)
This is most peculiar, as that should have shown up on the previous profile.
 I haven't looked at toJsonBuffer at all (btw, why are we calling this 
 function if I'm not outputting json?)
That only happens if -X is passed on the command line, or one of the files on the command line has a .json extension.
Aug 26 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 26 Aug 2010 12:53:59 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 Better, now takes 20 seconds vs over 60.  The new culprit:
  Flat profile:
  Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  75.79      6.51     6.51     8103     0.80     0.80   
 TemplateDeclaration::toJsonBuffer(OutBuffer*)
This is most peculiar, as that should have shown up on the previous profile.
I did some more testing. I think I compiled the profiled version of the svn trunk dmd wrong. This is what happens when you let idiots debug your code for you ;) I recompiled it, and here is the new list:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 80.31     11.99    11.99    19000     0.63     0.63  searchfixlist
  0.67     12.09     0.10   203173     0.00     0.00  StringTable::search(char const*, unsigned int)
  0.60     12.18     0.09   369389     0.00     0.00  Lexer::scan(Token*)
  0.54     12.26     0.08   953613     0.00     0.00  ScopeDsymbol::search(Loc, Identifier*, int)
  0.47     12.33     0.07  1449798     0.00     0.00  calccodsize
  0.40     12.39     0.06   587814     0.00     0.00  code_calloc
  0.40     12.45     0.06    41406     0.00     0.00  pinholeopt
  0.33     12.50     0.05   901563     0.00     0.00  _aaGetRvalue(AA*, void*)
  0.33     12.55     0.05   138329     0.00     0.00  reftoident(int, unsigned long long, Symbol*, unsigned long long, int)
  0.33     12.60     0.05    26849     0.00     0.00  ecom(elem**)
  0.27     12.64     0.04   230869     0.00     0.00  Type::totym()
  0.27     12.68     0.04    62784     0.00     0.00  touchfunc(int)
  0.27     12.72     0.04    37623     0.00     0.00  optelem(elem*, int)
  0.27     12.76     0.04    28348     0.00     0.00  assignaddrc

It looks like searchfixlist is another linear search. Looking back at the other profile, it was the second-highest consumer of runtime at 2% before your fix, so it catapulted up to 80% of the runtime. It looks like a linked-list search, so it might benefit from a hash table as well? I'm not really sure. Also, the 2% was measured with only one file being compiled; with the shortened run here, the share for searchfixlist is much higher.

I'll update the bug.
Aug 26 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 I'll update the bug.
Thanks!
Aug 26 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
  80.31     11.99    11.99    19000     0.63     0.63  searchfixlist
Just for fun, searchfixlist goes back at least to 1983 or so.
Aug 26 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Just for fun, searchfixlist goes back at least to 1983 or so.
It contains this if (I am not able to indent it well):

if (s->Sseg == p->Lseg &&
    (s->Sclass == SCstatic ||
#if TARGET_LINUX || TARGET_OSX || TARGET_FREEBSD || TARGET_SOLARIS
    (!(config.flags3 & CFG3pic) && s->Sclass == SCglobal)) &&
#else
    s->Sclass == SCglobal) &&
#endif
    s->Sxtrnnum == 0 && p->Lflags & CFselfrel)
{

How do you rewrite that in good D code? A possible way is to split that messy if into two nested ifs. Between the first and second if you define a boolean variable in two different ways using a static if. And in the second if you use the boolean variable and the second part of the runtime test. Something like this:

if (part1) {
    static if (versions_test) {
        bool aux = ...;
    } else {
        bool aux = ...;
    }

    if (aux && part2) {
        // ...
    } else {
        // ...
    }
}

aux is defined in the middle and not before the first if because performing this runtime test is not necessary when part1 fails.

Bye,
bearophile
Aug 26 2010
prev sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 Steven Schveighoffer wrote:
 
 Each sample counts as 0.01 seconds.
 %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 80.31     11.99    11.99    19000     0.63     0.63  searchfixlist
Just for fun, searchfixlist goes back at least to 1983 or so.
Early or late '83? I ask because *I* go back to '83 or so. :) -- ... <IXOYE><
Aug 26 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
BCS wrote:
 Hello Walter,
 
 Steven Schveighoffer wrote:

 Each sample counts as 0.01 seconds.
 %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 80.31     11.99    11.99    19000     0.63     0.63  searchfixlist
Just for fun, searchfixlist goes back at least to 1983 or so.
Early or late '83? I ask because *I* go back to '83 or so. :)
June 7th, 3:26 PM. Give or take 6 months.
Aug 26 2010
prev sibling parent Kagamin <spam here.lot> writes:
Walter Bright Wrote:

 It is now, but when it was originally written (maybe as long as 20 years ago) 
 there were only a few strings in the table, and it was fine. It's just
outlived 
 its design. Clearly, it should now be a hash table.
Where did you get it? Digital Mars seems not to have an ELF C compiler.
Aug 25 2010