
digitalmars.D - If Zortech C++ and dmc++ are fast, why is dmd's asm slow?

reply James Lu <jamtlu gmail.com> writes:
 In 1988 Zortech C++ was the first C++ compiler to ship for 
 Windows and the performance of its compiled executables 
 compared favourably against Microsoft C 5.1 and Watcom C 6.5 in 
 a graphics benchmark run by PC Magazine.
If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I thought they used the same machine code backend.
May 29 2021
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
May 29 2021
parent reply James Lu <jamtlu gmail.com> writes:
On Saturday, 29 May 2021 at 23:36:34 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
That makes sense. Why does DMD only have a -O flag, not -O2? The competition got heavier, and compiles got slower, too. I wonder if it would be possible to get some CS PhD candidate to work on dmc++ to bring its backend up to modern standards.
May 29 2021
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 30/05/2021 11:43 AM, James Lu wrote:
 I wonder if it could be possible to get some CS PhD candidate to work on 
 dmc++ to bring its backend up to modern standards.
A building full of them? Sounds about right! Could take a while, though...
May 29 2021
prev sibling next sibling parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 I wonder if it could be possible to get some CS PhD candidate 
 to work on dmc++ to bring its backend up to modern standards.
No, PhD = research, not engineering.
May 29 2021
parent reply Max Haughton <maxhaton gmail.com> writes:
On Sunday, 30 May 2021 at 00:04:35 UTC, Ola Fosheim Grostad wrote:
 On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 I wonder if it could be possible to get some CS PhD candidate 
 to work on dmc++ to bring its backend up to modern standards.
No, PhD = research, not engineering.
Lots of compiler work is done by PhD students, especially the groundwork on new algorithms and the like. The backend would need a complete rewrite, which is basically what Chris Lattner did his PhD thesis on (or at least the design of said rewrite).
May 29 2021
parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 30 May 2021 at 02:43:29 UTC, Max Haughton wrote:
 On Sunday, 30 May 2021 at 00:04:35 UTC, Ola Fosheim Grostad 
 wrote:
 On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 I wonder if it could be possible to get some CS PhD candidate 
 to work on dmc++ to bring its backend up to modern standards.
No, PhD = research, not engineering.
Lots of compiler work is done by PhD students, especially the groundwork on new algorithms and the like. The backend would need a complete rewrite, which is basically what Chris Lattner did his PhD thesis on (or at least the design of said rewrite).
Master's thesis. His PhD was an advancement in pointer analysis, which is more theoretical and narrow.
May 29 2021
prev sibling parent reply Abdulhaq <alynch4047 gmail.com> writes:
On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 On Saturday, 29 May 2021 at 23:36:34 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
That makes sense. Why does DMD only have a -O flag, not -O2? The competition got heavier, and compiles got slower, too. I wonder if it would be possible to get some CS PhD candidate to work on dmc++ to bring its backend up to modern standards.
Why stop at -O2? The languages of the future will go to -O11.
May 30 2021
next sibling parent reply Luís Ferreira <contact lsferreira.net> writes:
On Sunday, 30 May 2021 at 12:43:22 UTC, Abdulhaq wrote:
 On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 On Saturday, 29 May 2021 at 23:36:34 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
That makes sense. Why does DMD only have a -O flag, not -O2? The competition got heavier, and compiles got slower, too. I wonder if it would be possible to get some CS PhD candidate to work on dmc++ to bring its backend up to modern standards.
Why stop at -O2? The languages of the future will go to -O11.
A higher optimization level doesn't necessarily mean better code. With GCC, real-world applications may perform better with -O2 than with -O3. Many of the optimizations performed at -O3 can be too aggressive depending on the kind of operations, the code size, and other variables in the equation. A practical example of this is Arch Linux: every package is compiled with -O2. See https://github.com/archlinux/devtools/blob/master/makepkg-x86_64.conf#L42 .
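As an illustrative aside (not from the original post): below is a minimal D sketch of the kind of micro-benchmark one could build at several optimization levels and time, e.g. dmd -O -inline -release bench.d, ldc2 -O2 vs ldc2 -O3, or gdc -O2 vs gdc -O3. Those flags are real; the file name and workload are made up, and a toy loop like this says little about real applications, which remain the only meaningful test of whether -O3 beats -O2.

// bench.d -- hypothetical workload; build it at different optimization
// levels and compare wall-clock times on the target machine.
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

void main()
{
    enum N = 50_000_000;
    auto data = new double[N];
    foreach (i, ref x; data)
        x = i % 97;                 // cheap, deterministic fill

    auto sw = StopWatch(AutoStart.yes);
    double sum = 0;
    foreach (x; data)
        sum += x * x;               // hot loop the optimizer may vectorize or unroll
    sw.stop();

    writeln("sum = ", sum, ", elapsed: ", sw.peek.total!"msecs", " ms");
}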
May 31 2021
parent reply Max Haughton <maxhaton gmail.com> writes:
On Monday, 31 May 2021 at 20:33:08 UTC, Luís Ferreira wrote:
 On Sunday, 30 May 2021 at 12:43:22 UTC, Abdulhaq wrote:
 On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 On Saturday, 29 May 2021 at 23:36:34 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
That makes sense. Why does DMD only have a -O flag, not -O2? The competition got heavier, and compiles got slower, too. I wonder if it would be possible to get some CS PhD candidate to work on dmc++ to bring its backend up to modern standards.
Why stop at -O2? The languages of the future will go to -O11.
A higher optimization level doesn't necessarily mean better code. With GCC, real-world applications may perform better with -O2 than with -O3. Many of the optimizations performed at -O3 can be too aggressive depending on the kind of operations, the code size, and other variables in the equation. A practical example of this is Arch Linux: every package is compiled with -O2. See https://github.com/archlinux/devtools/blob/master/makepkg-x86_64.conf#L42 .
In my experience I have not bumped into this issue all that often, especially when allowing the compiler to use a specific machine description for the target. (Working out exactly which piece of target information most of the difference is due to is unfortunately quite painful, as modern compilers are anything but simple when deciding what to do based on their nicely specified machine description files.)
May 31 2021
parent Luís Ferreira <contact lsferreira.net> writes:
 In my experience I have not bumped into this issue all that often, especially when allowing the compiler to use a specific machine description for the target. (Working out exactly which piece of target information most of the difference is due to is unfortunately quite painful, as modern compilers are anything but simple when deciding what to do based on their nicely specified machine description files.)
Well sure. Arch mainly does it also because they generate asm for generic x86_64, but my point is rather that higher optimization levels at a certain point come with trade-offs between space and speed, and a lot of people always compile with -O3 thinking it is the one with the best optimizations possible. There are even some compilers with very specific optimizations, such as loop unrolling or unsafe loop optimizations, that are not in the common optimization flags.

--
Sincerely,
Luís Ferreira
lsferreira.net
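To make the space/speed point concrete, here is a hypothetical D sketch of what loop unrolling does, written out by hand (the function names are made up). The unrolled version does the same work with more code, and reassociating the floating-point sums into four accumulators is exactly the kind of change a compiler will typically only make behind an explicit opt-in such as GCC's -ffast-math, while unrolling itself is controlled by flags like GCC's -funroll-loops rather than by the -O level alone.

// Straightforward sum: small code, one accumulator.
double sumSimple(const double[] a)
{
    double s = 0;
    foreach (x; a)
        s += x;
    return s;
}

// Hand-unrolled by four: more code, four independent accumulators.
// Faster on some machines, not on others; that is the trade-off.
double sumUnrolled4(const double[] a)
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= a.length; i += 4)
    {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < a.length; ++i)       // leftover elements
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}

Whether the unrolled version wins depends on the workload and the machine, which is why such knobs stay outside the common umbrella flags.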
Jun 01 2021
prev sibling parent Abdulhaq <alynch4047 gmail.com> writes:
On Sunday, 30 May 2021 at 12:43:22 UTC, Abdulhaq wrote:
 On Saturday, 29 May 2021 at 23:43:48 UTC, James Lu wrote:
 On Saturday, 29 May 2021 at 23:36:34 UTC, Adam D. Ruppe wrote:
 On Saturday, 29 May 2021 at 23:34:47 UTC, James Lu wrote:
 If Zortech C++ and dmc++ are fast, why is dmd's asm slow? I 
 thought they used the same machine code backend.
They were fast in 1988. Even in 1998. But since then they've only gotten a little better, whereas the competition has gotten a LOT better.
That makes sense. Why does DMD only have a -O flag, not -O2? The competition got heavier, and compiles got slower, too. I wonder if it would be possible to get some CS PhD candidate to work on dmc++ to bring its backend up to modern standards.
Why stop at -O2? The languages of the future will go to -O11.
OK, I was kidding, but this is interesting, from Steve Sinofsky, who worked on early Visual C++:

"While I wielded a great technology buzzsaw, I was also applying Microsoft’s perspective, not necessarily what Lotus was looking to accomplish or what reviewers would see. For example, my focus on shared code came straight from BillG as that was his hot button. The Lotus products clearly hadn’t focused on that at all. I thought they were “wrong” not simply different.

This mismatch was something I had seen in the evaluations of Borland C++ versus Microsoft VC++. For example, Borland had a compiler optimization switch “/O” that was, basically, “make this code as fast as possible by enabling all the best optimizations.” To us compiler-heads at Microsoft, we thought of this as technical nonsense because each of the myriad potential optimizations meant something unique to the programmer (literally the entire alphabet of command line switches), but it had captivated reviewers. I came to champion (and push) the addition of “/O” for our compiler and it turned out that it worked with reviewers.

When Ami Pro, the Lotus SmartSuite word processor, demonstrated its new ease-of-use features under the umbrella of working together, it similarly captured the attention of reviewers, even if deep down in technical details it didn’t make much sense."

I'm really enjoying his blog ATM, a history of his time at MS, recommended: https://hardcoresoftware.learningbyshipping.com/p/031-synchronizing-windows-and-office
Jun 08 2021