
digitalmars.D - On C/C++ undefined behaviours

reply bearophile <bearophileHUGS lycos.com> writes:
Three good blog posts about undefined behaviour in C and C++:
http://blog.regehr.org/archives/213
http://blog.regehr.org/archives/226
http://blog.regehr.org/archives/232

In those posts (and elsewhere) the expert author gives several good bites to
the ass of most compiler writers.

Among other things in those three posts he talks about two programs as:

import std.c.stdio: printf;
void main() {
    printf("%d\n", -int.min);
}

import std.stdio: writeln;
void main() {
    enum int N = (1L).sizeof * 8;
    auto max = (1L << (N - 1)) - 1;
    writeln(max);
}

I believe that D can't be considered a step forward in systems language
programming until it gives much more serious consideration to
integer-related overflows (and integer-related undefined behaviour).

The good thing is that Java is a living example that even if you remove most
integer-related undefined behaviours your Java code is still able to run as
fast as C and sometimes faster (on normal desktops).

Bye,
bearophile
Aug 20 2010
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
What are these Java programs for the desktop that run fast? I haven't
encountered any, but maybe that's just because I didn't try them all
out. Eclipse takes at least 20 seconds to load on startup on my quad
core, that's not very fast. On the other hand, CodeBlocks which is
coded in C++ and has  a few dozen plugins installed runs in an
instant.

Just firing up a dialog in eclipse takes a good second, maybe two. So
give me the names of those fast Java applications, pls. :)

On Fri, Aug 20, 2010 at 6:38 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Three good blog posts about undefined behaviour in C and C++:
 http://blog.regehr.org/archives/213
 http://blog.regehr.org/archives/226
 http://blog.regehr.org/archives/232

 In those posts (and elsewhere) the expert author gives several good bites to the ass of most compiler writers.
 Among other things in those three posts he talks about two programs as:

 import std.c.stdio: printf;
 void main() {
     printf("%d\n", -int.min);
 }

 import std.stdio: writeln;
 void main() {
     enum int N = (1L).sizeof * 8;
     auto max = (1L << (N - 1)) - 1;
     writeln(max);
 }

 I believe that D can't be considered a step forward in system language programming until it gives a much more serious consideration for integer-related overflows (and integer-related undefined behaviour).
 The good thing is that Java is a living example that even if you remove most integer-related undefined behaviours your Java code is still able to run as fast as C and sometimes faster (on normal desktops).
 Bye,
 bearophile
Aug 20 2010
next sibling parent reply Kagamin <spam here.lot> writes:
Andrej Mitrovic Wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
 
 Just firing up a dialog in eclipse takes a good second, maybe two. So
 give me the names of those fast Java applications, pls. :)
He's talking about arithmetic benchmarks. Some Java applications, like Vuze, are faster than others.
Aug 20 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Kagamin" <spam here.lot> wrote in message 
news:i4mjl0$2ucg$1 digitalmars.com...
 Andrej Mitrovic Wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
For me, C::B always took somewhere around ten seconds to start up (and then it kept using 5-10% CPU even when idling with all plugins disabled - a number of other people reported the same thing). But it's been a while since I used it, so maybe that's all fixed now. But yea, I can't remember ever seeing a non-trivial Java app that wasn't a bit on the sluggish side.
 Just firing up a dialog in eclipse takes a good second, maybe two. So
 give me the names of those fast Java applications, pls. :)
He's talking about arithmetic benchmarks. Some Java applications, like Vuze, are faster than others.
You have *got* to be kidding me. Vuze is one of the absolute biggest pieces of bloatware I've ever seen in my entire life. *Eclipse* is more responsive for me (although only just barely). There are exactly three programs I've always considered to be roughly tied for "biggest bloatware in history": - Eclipse - Vuze - JetBrains MPS
Aug 20 2010
parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 16:04:46 -0400, Nick Sabalausky wrote:

 "Kagamin" <spam here.lot> wrote in message
 news:i4mjl0$2ucg$1 digitalmars.com...
 Andrej Mitrovic Wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
For me, C::B always took somewhere around ten seconds to start up (and then it kept using 5-10% CPU even when idling with all plugins disabled - a number of other people reported the same thing). But it's been a while since I used it, so maybe that's all fixed now. But yea, I can't remember ever seeing a non-trivial Java app that wasn't a bit on the sluggish side.
If you want, you could go through these applications:
http://java.sun.com/products/jfc/tsc/sightings/
Maybe some of them don't suck. There are also games written in Java (your Java-enabled browser can run these):
quake 2 port - http://bytonic.de/html/jake2_webstart.html
4 kB games - http://javaunlimited.net/games/java4k_2007.php
some opengl demos - http://slick.cokeandcode.com/static.php?page=demos
Our gamedevs use Java 7 ea, so you could try whether it's faster (at least the webstart progress dialog is different and can be customized):
http://dlc.sun.com.edgesuite.net/jdk7/binaries/index.html
Aug 20 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
retard:
 quake 2 port - http://bytonic.de/html/jake2_webstart.html
Have you read some of that code? No Java programmer writes code like that. That code is the "exception that confirms the rule" (I don't know how this is normally written in English). The rule is that Java code is slow ;-) Bye, bearophile
Aug 20 2010
next sibling parent retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 19:06:32 -0400, bearophile wrote:

 retard:
 quake 2 port - http://bytonic.de/html/jake2_webstart.html
Have you read some of that code? No Java programmer writes code like that.
Sure, so write it in a more idiomatic Java way and try again? Too slow? These applications are GPU bound, so how is it even possible?
 That code is the "exception that confirms the rule" (I don't know
 how this is normally written in English). The rule is that Java code is
 slow ;-)
I don't know what you're talking about. Have you ever tried writing efficient Java applications? Are these any better? http://www.jmonkeyengine.com/movies_demos.php
Aug 20 2010
prev sibling parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 19:06:32 -0400, bearophile wrote:

 retard:
 quake 2 port - http://bytonic.de/html/jake2_webstart.html
Have you read some of that code? No Java programmer writes code like that. That code is the "exception that confirms the rule" (I don't know how this is normally written in English). The rule is that Java code is slow ;-)
I tested this game on my low-end HTPC (timedemo, map demo2). It's a Core 2 Duo with an OC'd Geforce 8600. I got 850 FPS at 1920x1080 with default quality settings. Does this make Java slow? I've heard that 30..50 FPS is an acceptable level in video games. It's hard to believe that idiomatic Java would be 15..30 times slower than this. I played through the whole demo. Not a single problem with the GC, and the user experience was much better than with the original version 10 years ago.
Aug 22 2010
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Quake 2 is a 13 year old game. :)

On Sun, Aug 22, 2010 at 4:57 PM, retard <re tard.com.invalid> wrote:
 Fri, 20 Aug 2010 19:06:32 -0400, bearophile wrote:

 retard:
 quake 2 port - http://bytonic.de/html/jake2_webstart.html
Have you read some of that code? No Java programmer writes code like that. That code is the "exception that confirms the rule" (I don't know how this is normally written in English). The rule is that Java code is slow ;-)
I tested this game on my low-end HTPC (timedemo, map demo2). It's a Core 2 Duo with an OC'd Geforce 8600. I got 850 FPS at 1920x1080 with default quality settings. Does this make Java slow? I've heard that 30..50 FPS is an acceptable level in video games. It's hard to believe that idiomatic Java would be 15..30 times slower than this. I played through the whole demo. Not a single problem with the GC, and the user experience was much better than with the original version 10 years ago.
Aug 22 2010
parent reply retard <re tard.com.invalid> writes:
Sun, 22 Aug 2010 17:50:06 +0200, Andrej Mitrovic wrote:

 Quake 2 is a 13 year old game. :)
 
The original argument was that Java code is slow, no matter what you're trying to do. What kind of performance levels are you expecting? If you take a look at the browser games available today (at least Andrei probably knows them since he works at Facebook - http://mashable.com/2009/10/16/top-facebook-games/ ), you can see that not a single one of them requires more performance than Quake 2.

Top 10 indie games from 2008:
http://gearcrave.com/2009-01-05/the-year-of-indie-games-2008s-ten-best-independent-games/
Again, no problem. Top 10 indie games from the last year:
http://news.bigdownload.com/photos/best-indie-games-of-2009-1/

You can basically write any modern, award-winning game in Java; there are absolutely no problems whatsoever. Why are we always bringing up the issues with Eclipse, Netbeans, and Vuze when discussing Java's performance? It almost seems like everyone here is trying to prove that one simply cannot write any desktop application in Java, because it's *that* slow. Even if you had a gazillion GHz CPU, a Pong or Tetris in Java would look sluggish and have 10 minute pauses because of the GC.

I did even more googling and found very surprising results:
http://dotnot.org/blog/archives/2008/03/10/xml-benchmarks-updated-graphs-with-rapidxml/
It appears that in XML parsing Java is actually 5..10 times faster than D1/Phobos. Maybe D2/Phobos has finally fixed these issues.
Aug 22 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4rqp4$1egl$2 digitalmars.com...
 Sun, 22 Aug 2010 17:50:06 +0200, Andrej Mitrovic wrote:

 Quake 2 is a 13 year old game. :)
The original argument was that Java code is slow, no matter what you're trying to do. What kind of performance levels are you expecting?
Nobody said that. Java apps just tend to be slower than C/C++/D ones. Trotting out a GPU-bound game that was well known to run blazingly fast on sub-GHz hardware in *software* rendering mode, and saying "Look, it's a few hundred fps on my multi-core", doesn't really do much to disprove that.
 http://dotnot.org/blog/archives/2008/03/10/xml-benchmarks-updated-graphs-with-rapidxml/

 It appears that in XML parsing Java is actually 5..10 times faster than
 D1/Phobos. Maybe D2/Phobos has finally fixed these issues.
Phobos's XML is known to be half-baked. That same link indicates that D1/Tango's XML beats the snot out of Java's. Yes, part of that is due to some details of the way Tango goes about it, but it's things that D makes easy and natural. Trying to do the same techniques on Java would be a bit of an uphill battle.
Aug 22 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:i4rv1o$2fqc$1 digitalmars.com...
 "retard" <re tard.com.invalid> wrote in message 
 news:i4rqp4$1egl$2 digitalmars.com...
 Sun, 22 Aug 2010 17:50:06 +0200, Andrej Mitrovic wrote:

 Quake 2 is a 13 year old game. :)
The original argument was that Java code is slow, no matter what you're trying to do. What kind of performance levels are you expecting?
Nobody said that. Java apps just tend to be slower than C/C++/D ones. Trotting out a GPU-bound game that was well known to run blazingly fast on sub-GHz hardware in *software* rendering mode, and saying "Look, it's a few hundred fps on my multi-core", doesn't really do much to disprove that.
Speaking of that, try running the Java one in software rendering mode, and compare to the original C/C++ one with software rendering on the same map and resolution.
Aug 22 2010
prev sibling parent reply retard <re tard.com.invalid> writes:
Sun, 22 Aug 2010 15:50:10 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4rqp4$1egl$2 digitalmars.com...
 Sun, 22 Aug 2010 17:50:06 +0200, Andrej Mitrovic wrote:

 Quake 2 is a 13 year old game. :)
The original argument was that Java code is slow, no matter what you're trying to do. What kind of performance levels are you expecting?
Nobody said that.
In my opinion bearophile spread more or less anti-Java FUD. "That code is the "exception that confirms the rule" (I don't know how this is normally written in English). The rule is that Java code is slow ;-)" Maybe I am not understanding what he means. To me Eclipse feels bloated and slow, but that doesn't tell anything about Java in general. If I buy a new CPU, the Java programs will run faster on it than the C++ ones do on my current CPU. Why is the small performance difference that significant?
 Java apps just tend to be slower than C/C++/D ones.
 Strutting out a GPU-bound game that was well-known to run blazingly fast
 on sub-GHz hardware in *software* rendering mode, and saying "Look it's
 a few hundred fps on my multi-core" doesn't really do much to disprove
 that.
(I don't think the Pentium 2/3 desktops managed to run it that quickly. These were the first games that started the GPU race. It was the original Quake that used software rendering.) Note that my low-end HTPC has 4.32 times the resolution and 17 times the frame rate of the high-end Pentium 2 I used to play the original game with. That's a 7300% speedup. According to Moore's law the hardware got 6400..12800% faster during that period. So definitely Java is not the bottleneck in game development in these kinds of games.

I'm just asking why software like this should be written in buggy D if production-ready Java already executes fast enough. You must desire absolute hard-core super mega performance to justify the use of D.

Another question is, why would anyone use D/Phobos for writing server-side XML-intensive applications if Java is so much faster? Fixing the libraries first tends to be bad for productivity. If I had to choose between a production-ready Java XML library and something half-baked for D, my boss would treat me as a lunatic if I told him to first wait N months to bring the half-baked amateur library on par with Java and only after that write the code in this language that no other company uses anywhere. I'd need to work on free time to make this a reality.
Aug 22 2010
next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 22/08/10 21:31, retard wrote:
 Maybe I am not understanding what he means. To me Eclipse feels bloated
 and slow, but it doesn't tell anything about Java in general. If I buy a
 new CPU, the Java programs will run faster than C++ ones now on the
 current CPU. Why is the small performance difference that significant?
For the record, I'm on a 6-core 3.2GHz box with 8GB RAM and Eclipse still feels bloated and slow. Same goes for OpenOffice.org :3 -- Robert http://octarineparrot.com/
Aug 22 2010
parent "Yao G." <nospamyao gmail.com> writes:
On Sun, 22 Aug 2010 15:40:32 -0500, Robert Clipsham  
<robert octarineparrot.com> wrote:

 For the record, I'm on a 6-core 3.2GHz box with 8GB RAM and Eclipse still feels bloated and slow. Same goes for OpenOffice.org :3
Unfortunately, you are not alone on this, bro :( -- Yao G.
Aug 22 2010
prev sibling next sibling parent Justin Johansson <no spam.com> writes:
On 23/08/10 06:01, retard wrote:
 I'm just asking, why software like this should be written in buggy D if
 production ready Java already executes fast enough? You must desire
 absolute hard-core super mega performance to justify the use of D.
OTOH, to be fair, why the quest for "real-time Java"*** and the myriad of commercial offerings for "real-time Java" systems if standard production ready Java suffices for latency critical applications? *** web search for "real-time Java"
Aug 22 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 08/22/2010 03:31 PM, retard wrote:
 I'm just asking, why software like this should be written in buggy D if
 production ready Java already executes fast enough? You must desire
 absolute hard-core super mega performance to justify the use of D.
Assuming the question is not tendentious: The decision of choosing a language has quite a few more ingredients than speed of generated code and quality of implementation.
 Another question is, why would anyone use D/Phobos for writing server
 side XML intensive applications if Java is so much faster? Fixing the
 libraries first tends to be bad for productivity. If I had to choose
 between a production ready Java XML library and something half-baked for
 D, my boss would treat me as lunatic if I told him to first wait N months
 to bring the half-baked amateur library on par with Java and only after
 that write the code in this language that no other company uses anywhere.
 I'd need to work on free time to make this a reality.
Again, speed is not the only issue. Anyway, clearly Phobos' xml library is in serious need of a revamp. Andrei
Aug 22 2010
parent reply retard <re tard.com.invalid> writes:
Sun, 22 Aug 2010 16:08:41 -0500, Andrei Alexandrescu wrote:

 On 08/22/2010 03:31 PM, retard wrote:
 I'm just asking, why software like this should be written in buggy D if
 production ready Java already executes fast enough? You must desire
 absolute hard-core super mega performance to justify the use of D.
Assuming the question is not tendentious: The decision of choosing a language has quite a few more ingredients than speed of generated code and quality of implementation.
It's really not intentionally tendentious although I often have that kind of tone. We used to have a desperate need for systems programming languages in gamedev, mostly because of the efficiency concerns. Some 320x200x8bit VGA game would have had 0.1 FPS on an 80486 / Python! Now, given that we're not developing any AAA titles anyway, the major VM languages are fast enough without the unsafe memory operations (pointer arithmetic, segfaults etc.). Those languages come with mature gamedev frameworks and the performance is orders of magnitude better than one could hope. E.g. XNA 4.0 and Unity3D 3.0 are very competitive despite some backwards compatibility problems. You don't even write much code anymore, it's just a few mouse clicks here and there. I feel the network effect is against D here.
Aug 22 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 You don't even write much code anymore, it's just few mouse clicks here 
 and there. I feel the network effect is against D here.
Undoubtedly, people have found ways to get their work done with Java. But there's just no joy in working with it.
Aug 22 2010
prev sibling parent so <so so.do> writes:
On Mon, 23 Aug 2010 00:31:49 +0300, retard <re tard.com.invalid> wrote:

 Sun, 22 Aug 2010 16:08:41 -0500, Andrei Alexandrescu wrote:

 On 08/22/2010 03:31 PM, retard wrote:
 I'm just asking, why software like this should be written in buggy D if
 production ready Java already executes fast enough? You must desire
 absolute hard-core super mega performance to justify the use of D.
Assuming the question is not tendentious: The decision of choosing a language has quite a few more ingredients than speed of generated code and quality of implementation.
It's really not intentionally tendentious although I often have that kind of tone. We used to have a desperate need for systems programming languages in gamedev, mostly because of the efficiency concerns. Some 320x200x8bit VGA game would have had 0.1 FPS on an 80486 / Python! Now, given that we're not developing any AAA titles anyway, the major VM languages are fast enough without the unsafe memory operations (pointer arithmetic, segfaults etc.). Those languages come with mature gamedev frameworks and the performance is orders of magnitude better than one could hope. E.g. XNA 4.0 and Unity3D 3.0 are very competitive despite some backwards compatibility problems. You don't even write much code anymore, it's just a few mouse clicks here and there. I feel the network effect is against D here.
Both of these engines are for artists or small games; no serious game engine is built on anything but C++. Not that I'm saying C++ is great, it is just the best tool for the job. I am sure most of the people actually into D are high-performance application coders; others really don't have many reasons for a migration. -- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 07 2010
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4s1ft$2fed$1 digitalmars.com...
 Sun, 22 Aug 2010 15:50:10 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4rqp4$1egl$2 digitalmars.com...
 Sun, 22 Aug 2010 17:50:06 +0200, Andrej Mitrovic wrote:

 Quake 2 is a 13 year old game. :)
The original argument was that Java code is slow, no matter what you're trying to do. What kind of performance levels are you expecting?
Nobody said that.
In my opinion bearophile spread more or less anti-Java FUD.
Well, I tend to feel that the "Java is fast now! Honest!" that a lot of people say is pro-Java propaganda. *shrug*
 Java apps just tend to be slower than C/C++/D ones.
 Strutting out a GPU-bound game that was well-known to run blazingly fast
 on sub-GHz hardware in *software* rendering mode, and saying "Look it's
 a few hundred fps on my multi-core" doesn't really do much to disprove
 that.
(I don't think the Pentium 2/3 desktops managed to run it that quickly. These were the first games that started the GPU race. It was the original Quake that used software rendering.)
Quake 2 had both software and hardware rendering built-in. I played it in software mode a lot before I got my first 3D card. Quake 1 did have hardware mode too, although it was added as a separate "glquake" after Quake 1 was released. Quake 3 Arena was the first Quake to require hardware 3D (and was also the first id game to bore me, but that's unrelated ;) )
 Note that my low-end HTPC has 4.32 times the resolution and 17 times the
 frame rate of the high-end Pentium 2 I used to play the original game
 with). That's a 7300% speedup. According to Moore's law the hardware got
 6400..12800% faster during that period. So definitely Java is not a
 bottleneck in game development in these kinds of games.
In the decade or so since that game was released, AAA games have been using far more polys, far more textures, more rendering passes, more physics and AI processing, etc. And yes, a fair amount of that is GPU-bound but the CPU still needs to drive it, and if you plop a modern video card into, say, a Pentium 2 system, it's not going to run, say, Splinter Cell or Halo very well, and even those are close to ten years old. So the CPU performance *is* still important. (Although personally, I have zero interest in whether or not a game *looks* any better than Splinter Cell or Halo, but unfortunately, I seem to be in the minority on that among gamers and game devs.)
 I'm just asking, why software like this should be written in buggy D if
 production ready Java already executes fast enough? You must desire
 absolute hard-core super mega performance to justify the use of D.
Well, for one thing, because D isn't anywhere near the enormous pain-in-the-ass that Java is. And because, like I said above, most games these days (for better or worse) require quite a bit more computational power, even on the CPU, than the Quake 2 engine did. And because id put an enormous amount of pioneering work into *getting* Quake to run that fast in the first place (a fair amount of which the Java port inherited), and devs don't like to have to put that much work into it to get good results.
 Another question is, why would anyone use D/Phobos for writing server
 side XML intensive applications if Java is so much faster?
Java is *not* faster at XML. Phobos just has a half-baked xml lib. Take any of the other XML parsers for D: even ones that haven't gotten as much optimization time and energy as the Java one most likely has will probably still run at least comparably to Java, if not better. (And the D1/Tango xml parser makes it very clear that there's a LOT of room for improvement over the Java one.)
Aug 23 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 It appears that in XML parsing Java is actually 5..10 times faster than 
 D1/Phobos. Maybe D2/Phobos has finally fixed these issues.
D *enables* you to write fast code, but you have to know how to write fast code. The code won't be fast simply because you wrote it in D. This also means if you merely translate a program from Java to D, don't expect it to necessarily run faster. You're going to need to refactor/reengineer it for speed. For example, replacing classes with value types.
Aug 22 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Walter Bright wrote:
 This also means if you merely translate a program from Java to D, don't 
 expect it to necessarily run faster. You're going to need to 
 refactor/reengineer it for speed. For example, replacing classes with 
 value types.
Oh, and using slicing, too.
Aug 22 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Sunday 22 August 2010 14:04:24 Walter Bright wrote:
 Walter Bright wrote:
 This also means if you merely translate a program from Java to D, don't
 expect it to necessarily run faster. You're going to need to
 refactor/reengineer it for speed. For example, replacing classes with
 value types.
Oh, and using slicing, too.
Slicing has got to be one of D's coolest features - especially when combined with ranges. I was ecstatic when I realized that I could use arrays in D like SLists in Haskell - at least for processing them. It's just plain easier to write some algorithms that way. - Jonathan M Davis
Aug 22 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
news:mailman.467.1282518397.13841.digitalmars-d puremagic.com...
 On Sunday 22 August 2010 14:04:24 Walter Bright wrote:
 Walter Bright wrote:
 This also means if you merely translate a program from Java to D, don't
 expect it to necessarily run faster. You're going to need to
 refactor/reengineer it for speed. For example, replacing classes with
 value types.
Oh, and using slicing, too.
Slicing has got to be one of D's coolest features
Yea, I agree. Back in my C/C++ days, I actively avoided doing any string processing whenever I could, just because it was such a PITA.
Aug 23 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Yea, I agree. Back in my C/C++ days, I actively avoided doing any string 
 processing whenever I could, just because it was such a PITA. 
I've done an awful lot of C/C++ programming that involved handling strings. It was always a lot of tedious work, and I was always reinventing the wheel on that. But in looking at other "inferior" languages, like BASIC, I was eventually struck by how trivial string handling was for them. I thought D should be able to do that as well.
Aug 23 2010
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
OT: I guess that's one of the reasons why I was so appalled when I was
reading C++ Primer many years ago. The author spent almost the entire
book explaining the difference between C and C++, and half of the book
on a guide to creating your own string processing functions in C++. On
the other hand, "C" Primer, by the same author, was a very nice C99
book. I used it as a sort of "warmup" before I got into D (I was doing
mostly Python scripts for a while..).

On Mon, Aug 23, 2010 at 10:08 PM, Nick Sabalausky <a a.a> wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message
 news:mailman.467.1282518397.13841.digitalmars-d puremagic.com...
 On Sunday 22 August 2010 14:04:24 Walter Bright wrote:
 Walter Bright wrote:
 This also means if you merely translate a program from Java to D, don't
 expect it to necessarily run faster. You're going to need to
 refactor/reengineer it for speed. For example, replacing classes with
 value types.
Oh, and using slicing, too.
Slicing has got to be one of D's coolest features
Yea, I agree. Back in my C/C++ days, I actively avoided doing any string processing whenever I could, just because it was such a PITA.
Aug 23 2010
prev sibling parent reply so <so so.do> writes:
 It appears that in XML parsing Java is actually 5..10 times faster than
 D1/Phobos. Maybe D2/Phobos has finally fixed these issues.
RapidXML is a 2.5k loc header file, using templates. Equal D code, possibly using ranges and slicing, wouldn't be more than 1k loc, and it would still be possible to outperform it, since C++ doesn't have any of these features and much more. I would love to see a comparison of C++ and D implementations of this lib/header. -- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 07 2010
parent "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 08 Oct 2010 07:17:42 +0400, so <so so.do> wrote:

 It appears that in XML parsing Java is actually 5..10 times faster than
 D1/Phobos. Maybe D2/Phobos has finally fixed these issues.
RapidXML is a 2.5k loc header file, using templates. Equal D code, possibly using ranges and slicing, wouldn't be more than 1k loc, and it would still be possible to outperform it, since C++ doesn't have any of these features and much more. I would love to see a comparison of C++ and D implementations of this lib/header.
Tango (D1) XML module performance comparison: http://dotnot.org/blog/archives/2008/02/
Oct 07 2010
prev sibling parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is coded
 in C++ and has  a few dozen plugins installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!" I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
Aug 20 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is coded
 in C++ and has  a few dozen plugins installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
Aug 20 2010
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Kind of like playing Crysis with all settings on low, right? Can't see
sh*t but it still lags. :p

On Fri, Aug 20, 2010 at 11:37 PM, Nick Sabalausky <a a.a> wrote:
 "retard" <re tard.com.invalid> wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is coded
 in C++ and has a few dozen plugins installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!" I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
 I've tried eclipse with the fancy stuff off, and it's still slower than C::B
 or PN2 for me.
Aug 20 2010
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 17:37:18 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
Of course it is. You're comparing apples and oranges. The core of Eclipse is much more customizable. I haven't used C::B lately, but does it even have plugin updater functionality yet? If you want a fair comparison, write the _same_ application in Java and in some other language. E.g. jEdit is probably a bit faster with all plugins turned off.

When comparing, you also need to understand how the JIT compiler works. Using the Sun server VM, you need to visit each menu and use each visual component a few times to make the VM recompile the code. After a few hours of use everything should run as fast as it can. Upgrading the JVM might also help because there are more JIT optimizations available.
Aug 20 2010
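The warm-up behaviour described above (run interpreted until a method has been called enough times, then swap in a compiled version) can be sketched as a toy model. This is an illustration only, written in Python for brevity: CPython has no JIT, and the `hotspot` decorator, `fake_compile`, and the threshold of 2 are all invented for the sketch; a real VM emits machine code instead of swapping in another Python function.

```python
# Toy model of hotspot-triggered compilation: a function runs in
# "interpreted" mode until it has been called HOT_THRESHOLD times,
# after which a "compiled" version handles subsequent calls.
import functools

HOT_THRESHOLD = 2  # cf. "the second use ... should already trigger the JIT"

def hotspot(compile_fn):
    """Decorator: swap in compile_fn(func) once func becomes hot."""
    def wrap(func):
        state = {"calls": 0, "compiled": None}

        @functools.wraps(func)
        def dispatch(*args):
            state["calls"] += 1
            if state["compiled"] is not None:
                return state["compiled"](*args)   # fast path after warm-up
            if state["calls"] >= HOT_THRESHOLD:
                state["compiled"] = compile_fn(func)  # one-time "JIT" cost
            return func(*args)                    # slow, "interpreted" path

        dispatch.state = state
        return dispatch
    return wrap

def fake_compile(func):
    # A real JIT would generate machine code here; we just wrap the function.
    def compiled(*args):
        return func(*args)
    return compiled

@hotspot(fake_compile)
def square(x):
    return x * x

print(square(3))                              # 1st call, interpreted: 9
print(square(4))                              # 2nd call hits the threshold: 16
print(square.state["compiled"] is not None)   # True: compiled version installed
```

This is also why the first impression of a Java desktop application lags: the one-time compilation cost lands on the first few uses of each feature.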
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 08/20/2010 04:48 PM, retard wrote:
 I've tried eclipse with the fancy stuff off, and it's still slower than
 C::B or PN2 for me.
Of course it is. You're comparing apples and oranges. The core of Eclipse is much more customizable.
I've seen a live demo by Erich Gamma in which he wrote a simple Eclipse plugin inside Eclipse and then he immediately used it. It was pretty darn impressive.

Today's compiler technology still has us pay for customizability even when it's not realized, so even with all plugins turned off a pluggable program would still be slower than a monolithic one.

Andrei
Aug 20 2010
next sibling parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 16:57:34 -0500, Andrei Alexandrescu wrote:

 On 08/20/2010 04:48 PM, retard wrote:
 I've tried eclipse with the fancy stuff off, and it's still slower
 than C::B or PN2 for me.
Of course it is. You're comparing apples and oranges. The core of Eclipse is much more customizable.
I've seen a live demo by Erich Gamma in which he wrote a simple Eclipse plugin inside Eclipse and then he immediately used it. It was pretty darn impressive. Today's compiler technology still has us pay for customizability even when it's not realized, so even with all plugins turned off a pluggable program would still be slower than a monolithic one.
Indeed. But even if it (static compilation) allowed controlling the customizability in every possible way, there can't be a single binary that suits everyone.

Programs like C::B force you to choose these features at compile time. This is very typical in C/C++ applications - you have the choice, but it has to be made as early as possible. For instance, I forgot to compile in mp3 support when building ffmpeg or some other multimedia lib. Now the only choice I have is to recompile the library and possibly even all dependencies. It's the same thing in some Linux distributions - they forgot to turn on the TrueType bytecode interpreter so I need to recompile some core libraries.

Eclipse's philosophy seems to be completely different - some of the choices are made on launching the application, others when starting individual components. The only limitation that comes to mind is that because of SWT, Eclipse needs two distributions (32-bit and 64-bit), at least on Linux. Swing-using applications don't have this limitation.
Aug 20 2010
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
The current problem with CB, afaik, is that plugins need to be
recompiled in order to work with new versions of CB. But you can
enable/disable them at will, and most plugins have their settings
appear in the options menu similar to the way Eclipse plugins work.
It's no secret Eclipse is more mature; CB only has a limited number
of contributors (at least to my knowledge).

On Sat, Aug 21, 2010 at 12:16 AM, retard <re tard.com.invalid> wrote:
 Fri, 20 Aug 2010 16:57:34 -0500, Andrei Alexandrescu wrote:

 On 08/20/2010 04:48 PM, retard wrote:
 I've tried eclipse with the fancy stuff off, and it's still slower
 than C::B or PN2 for me.
 Programs like C::B force you to choose these features at compile time.
Aug 20 2010
prev sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Fri, 20 Aug 2010 16:57:34 -0500, Andrei Alexandrescu wrote:
 On 08/20/2010 04:48 PM, retard wrote:
 I've tried eclipse with the fancy stuff off, and it's still slower
 than C::B or PN2 for me.
Of course it is. You're comparing apples and oranges. The core of Eclipse is much more customizable.
I've seen a live demo by Erich Gamma in which he wrote a simple Eclipse plugin inside Eclipse and then he immediately used it. It was pretty darn impressive. Today's compiler technology still has us pay for customizability even when it's not realized, so even with all plugins turned off a pluggable program would still be slower than a monolithic one.
Indeed. But even if it (static compilation) allowed controlling the customizability in every possible way, there can't be a single binary that suits everyone. Programs like C::B force you to choose these features on compile time. This is very typical in C/C++ applications - you have the choice, but it has to be done as early as possible. For instance, I forgot to compile in mp3 support when building ffmpeg or some other multimedia lib. Now the only choice I have is to recompile the library and possibly even all dependencies. It's the same thing in some Linux distributions - they forgot to set on the Truetype bytecode interpreter so I need to recompile some core libraries. Eclipse's philosophy seems to be completely different - some of the choices are made on launching the application, others when starting individual components. The only limitation that comes to mind is that because of SWT, Eclipse needs two distributions (32-bit and 64-bit), at least on Linux. Swing using applications don't have this limitation.
This ties into an idea I've had for awhile as a low-priority "maybe eventually" feature. Template metaprogramming is a big part of writing idiomatic D code. IMHO the emphasis on it and taking it to the Nth degree are **the** main thing that makes D special. Its biggest weakness is that it can't be done at runtime. I wonder if we could eventually do something like make the compiler available from the language and allow templates to be instantiated at runtime. For example:

import std.getopt;

/**A function that is called multiple times with the same constant
 * and can only be made efficient if that constant is known at
 * compile time.
 */
double numericsFunction(int needsConstFolding)(double runtimeArgs) {
    // do stuff.
}

void main(string[] args) {
    int param;
    getopt(args, "param", &param);

    // runtimeInstantiate instantiates a function template at runtime,
    // including recursive instantiation of all templates the
    // function instantiates, loads it into the current address
    // space and returns a function pointer to it.
    auto fun = runtimeInstantiate!numericsFunction(param);

    foreach(i; 0..someHugeNumber) {
        fun(i);
    }
}

This could be implemented by embedding all necessary source code as a string inside the executable. For structs and classes, runtimeInstantiate would be more difficult, but might return a factory method for creating the struct/class.
Aug 20 2010
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 20/08/2010 22:57, Andrei Alexandrescu wrote:
 On 08/20/2010 04:48 PM, retard wrote:
 I've tried eclipse with the fancy stuff off, and it's still slower than
 C::B or PN2 for me.
Of course it is. You're comparing apples and oranges. The core of Eclipse is much more customizable.
I've seen a live demo by Erich Gamma in which he wrote a simple Eclipse plugin inside Eclipse and then he immediately used it. It was pretty darn impressive.
That's interesting, where was that demo? Was it at QCon?

-- 
Bruno Medeiros - Software Engineer
Oct 01 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 After few hours 
 of use everything should run as fast as it can.
I kinda don't really want to wait a few hours for my editor to stop lagging.
Aug 20 2010
parent reply retard <re tard.com.invalid> writes:
Fri, 20 Aug 2010 15:07:50 -0700, Walter Bright wrote:

 retard wrote:
 After few hours
 of use everything should run as fast as it can.
I kinda don't really want to wait a few hours for my editor to stop lagging.
You don't need to use it that long. I only gave a pessimistic estimate on when all JIT compilation stops. Surely the most used features get recompiled faster. The second use of any Java function should already trigger the JIT in some cases. This is a bit unfortunate for desktop applications because the first impression is that everything lags. But the thing is, if you try something TWICE or more, it's orders of magnitude faster.
Aug 20 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Fri, 20 Aug 2010 15:07:50 -0700, Walter Bright wrote:
 
 retard wrote:
 After few hours
 of use everything should run as fast as it can.
I kinda don't really want to wait a few hours for my editor to stop lagging.
You don't need to use it that long. I only gave a pessimistic estimate on when all JIT compilation stops. Surely the most used features get recompiled faster. The second use of any Java function should already trigger the JIT in some cases. This is a bit unfortunate for desktop applications because the first impression is that everything lags. But the thing is, if you try something TWICE or more, it's orders of magnitude faster.
I should amend that to saying I don't want to wait at all for my editor to work. I go in and out of it too often. It's why I use a small, natively compiled editor. It loads instantly. Even on DOS <g>.

And frankly, it's retarded to compile the same program over and over, every time you use it. Especially when it's a cost I pay every time I run it.
Aug 20 2010
parent reply SK <sk metrokings.com> writes:
On Fri, Aug 20, 2010 at 4:16 PM, Walter Bright
<newshound2 digitalmars.com> wrote:
 And frankly, it's retarded to compile the same program over and over, every
 time you use it.
How about IBM's TIMI? My understanding is that it's a hybrid approach that gives you machine independence with a one-time recompilation that is stored back into the executable. Or something like that. -steve
Aug 20 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
SK wrote:
 On Fri, Aug 20, 2010 at 4:16 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 And frankly, it's retarded to compile the same program over and over, every
 time you use it.
How about IBM's TIMI? My understanding is that it's a hybrid approach that gives you machine independence with a one-time recompilation that is stored back into the executable. Or something like that.
Beats me, I've never heard of it.
Aug 20 2010
parent reply SK <sk metrokings.com> writes:
On Fri, Aug 20, 2010 at 7:07 PM, Walter Bright
<newshound2 digitalmars.com> wrote:
 SK wrote:
 On Fri, Aug 20, 2010 at 4:16 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 And frankly, it's retarded to compile the same program over and over,
 every
 time you use it.
 How about IBM's TIMI? My understanding is that it's a hybrid approach
 that gives you machine independence with a one-time recompilation that
 is stored back into the executable. Or something like that.
Beats me, I've never heard of it.
Then to make this more concrete, what if D had an option to suspend compilation after the front-end finished? The resulting executable contains the abstract RTL blobs and the compiler backend, which finishes the job for the specific platform on which the executable is launched. The final binary is cached for subsequent launches. You get good machine independence, and the approach provides performance wins for operations like vectorizing, where you don't know in advance what kind of SSE support you'll find.

Hypothetically, why not?

I dug around and found this:
http://www.itjungle.com/tfh/tfh082007-printer01.html
From the article:
Without getting too technical, here's what happens on the OS/400 and i5/OS platform when you create applications, which explains the problem customers ran into in 1995 and which IBM wants them to avoid in 2008.

A programmer writes an application in say, RPG. They run it through a compiler, either using the Original Program Model (OPM) or the Integrated Language Environment (ILE) compilers, and the code compiles so they can run it. Or, rather, that is what it looks like to the programmer. What is really happening is that this application is compiled into an intermediate stage, which some IBMers have called RPG templates (in the case of RPG applications).

These templates have a property called observability, which in essence means they are compiled to the TIMI layer. These intermediate templates are then used by the TIMI layer on an actual piece of hardware with a specific processor and instruction set to compile the application to run on that specific processor.

TIMI compiles these RPG templates down to actual compiled code behind the scenes the first time an application runs, and because the code was originally compiled to the TIMI layer, there is no need to change the source code. Only the object code changes, which end users never had access to anyway because only TIMI can reach down there. This is the brilliant way that IBM has preserved customers' vast investments in RPG, COBOL, and other applications over the years.
Aug 20 2010
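The TIMI scheme quoted above (ship a machine-independent intermediate form, compile it to real code the first time the program runs, and cache the result for later launches) can be sketched with Python standing in for the OS. This is an analogy only: the shipped artifact here is source text, the built-in `compile` plays the role of the target-specific back end, and the `load_program` helper and cache file name are invented for the illustration.

```python
# Sketch of "compile on first run, cache for later launches":
# first call compiles the shipped text and writes the result to disk;
# subsequent calls just load the cached compiled form.
import marshal
import os
import tempfile

def load_program(src, cache_path):
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:          # later launches: reuse
            return marshal.loads(f.read())
    code = compile(src, "<template>", "exec")      # first launch: compile
    with open(cache_path, "wb") as f:              # ... and cache the result
        f.write(marshal.dumps(code))
    return code

src = "result = sum(range(10))"                    # the machine-independent "template"
with tempfile.TemporaryDirectory() as d:
    cache = os.path.join(d, "program.cache")
    ns = {}
    exec(load_program(src, cache), ns)             # compiles and caches
    first = ns["result"]
    ns2 = {}
    exec(load_program(src, cache), ns2)            # loads from the cache
    print(first, ns2["result"], os.path.exists(cache))  # 45 45 True
```

Whether that one-time step should start from an intermediate form or from plain source is exactly the question Walter and SK debate below.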
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
SK wrote:
 Then to make this more concrete, what if D had an option to suspend
 compilation after the front-end finished?  The resulting executable
 contains the abstract RTL blobs and the compiler backend, which
 finishes the job for the specific platform on which the executable is
 launched.  The final binary is cached for subsequent launches.  You
 get good machine independence and the approach provides performance
 wins for operations like vectorizing where you don't know in advance
 what kind of SSE support you'll find.
 
 Hypothetically, why not?
It's an old idea, but I think it's pointless. Instead of caching the intermediate code, just cache the source code.
Aug 20 2010
parent reply SK <sk metrokings.com> writes:
On Fri, Aug 20, 2010 at 8:46 PM, Walter Bright
<newshound2 digitalmars.com> wrote:
 SK wrote:
 Then to make this more concrete, what if D had an option to suspend
 compilation after the front-end finished? The resulting executable
 contains the abstract RTL blobs and the compiler backend, which
 finishes the job for the specific platform on which the executable is
 launched. The final binary is cached for subsequent launches. You
 get good machine independence and the approach provides performance
 wins for operations like vectorizing where you don't know in advance
 what kind of SSE support you'll find.

 Hypothetically, why not?
It's an old idea, but I think it's pointless. Instead of caching the intermediate code, just cache the source code.
Huh? Do you mean to say:

Instead of shipping the intermediate code, always ship source code.
-or-
Instead of caching the binary, just cache the source code.

Neither of those guesses makes general sense, so I'm afraid I miss your point.
Aug 20 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes.
 -or-
 Instead of caching the binary, just cache the source code.
 
 Neither of those guesses make general sense so I'm afraid I miss your point.
Why doesn't it make sense?
Aug 20 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i4nn6k$sos$2 digitalmars.com...
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes.
 -or-
 Instead of caching the binary, just cache the source code.

 Neither of those guesses make general sense so I'm afraid I miss your 
 point.
Why doesn't it make sense?
The only possible complaints I can think of are:

1. Redoing the parsing and semantic analysis.
2. Being uncomfortable about releasing source.

#2 is a poor reason because:

- There's always reverse engineering.
- There's always obfuscation.
- IL may even provide better reverse-engineering results than machine code, 
depending on the IL.
- All the companies sinking time and money into JS and PHP middleware don't 
seem to have a problem with handing out their source.
- If someone's gonna steal a product and rebrand it as their own, they don't 
usually need the source, and having it would probably only be of fairly 
small help, if any.
- As a customer, the idea of spending money on a product that I can't 
service myself if/when the company goes under or loses interest makes me 
nervous. Providing their source would give them a competitive advantage.
- Even though providing source gets in the way of effective DRM (as if there 
even were such a thing), DRM itself gets in the way of sales.
- Distributing in source form makes certain things possible that wouldn't 
otherwise be, like virtual template functions (in theory, even if not in 
actual D practice).
Aug 20 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i4nn6k$sos$2 digitalmars.com...
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes.
 -or-
 Instead of caching the binary, just cache the source code.

 Neither of those guesses make general sense so I'm afraid I miss your 
 point.
Why doesn't it make sense?
The only possible complaints I can think of are: 1. Redoing the parsing and semantic analysis.
It's pretty darned fast.
 2. Being uncomfortable about releasing source.
You can turn java bytecodes back into reasonable source code easily enough. And nothing says you can't run the D code through a comment stripper first.
 

 poor reason because:
 
 - There's always reverse engineering.
 - There's always obfuscation.
 - IL may even provide better reverse-engineering results than machine code, 
 depending on the IL.
Yes, it can, as Java disassemblers prove.
 - All the companies sinking time and money into JS and PHP middleware don't 
 seem to have a problem with handing out their source.
 - If someone's gonna steal a product and rebrand it as their own, they don't 
 usually need the source, and having it would probably only be of fairly 
 small help, if any.
 - As a customer, the idea of spending money on a product that I can't 
 service myself if/when the company goes under or loses interest makes me 
 nervous. Providing their source would give them a competitive advantage.
 - Even though providing source gets in the way of effective DRM (as if there 
 even were such a thing), DRM itself gets in the way of sales.
Yup, no DRM on any products I've built.
 - Distributing in source form makes certain things possible that wouldn't 
 otherwise be, like virtual template functions (in theory, even if not in 
 actual D practice).
Yup again, Java can't do compile time polymorphism!

BTW, I realized around 10 years ago that what you can do is lex D source and use the token stream as your "intermediate code". It should work great, be compact, and fast.
Aug 20 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i4nqis$19fd$1 digitalmars.com...
 Nick Sabalausky wrote:

 - Distributing in source form makes certain things possible that wouldn't 
 otherwise be, like virtual template functions (in theory, even if not in 
 actual D practice).
Yup again, Java can't do compile time polymorphism!
I'm not sure whether we're talking about the same thing. I mean something like this:

module A;
class Base {
    void foo(T)(T x) { }
}

module B;
class Derived : Base {
    override void foo(T)(T x) { }
}

I know D doesn't currently allow that. But my understanding is that if module A is available in source form, then there's no technical issue preventing it. So if a dev house wants to keep their source private, then they can't expose any API like that even if their language did normally allow such a thing. Therefore, keeping source private limits what can be done.

Of course, even just exposing any old template requires at least some code to be exposed (or easily-reconstructable).
 BTW, I realized around 10 years ago that what you can do is lex D source 
 and use the token stream as your "intermediate code". It should work 
 great, be compact, and fast.
Would that be useful in any significant way? Wouldn't re-lexing be very quick too? And in my limited experience, lexing seems to be a little easier than parsing, too.
Aug 20 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:i4nrlb$1dp7$1 digitalmars.com...
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i4nqis$19fd$1 digitalmars.com...
 BTW, I realized around 10 years ago that what you can do is lex D source 
 and use the token stream as your "intermediate code". It should work 
 great, be compact, and fast.
 Would that be useful in any significant way? Wouldn't re-lexing be very quick too? And in my limited experience, lexing seems to be a little easier than parsing, too.
If you ever saved it to disk or across a network, you'd have to be real careful about choosing a storage format...you could easily end up defeating the point :)
Aug 20 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Would that be useful in any significant way? Wouldn't re-lexing be very 
 quick too? And in my limited experience, lexing seems to be a little easier 
 than parsing, too.
It's faster and more compact, if you're going to transmit things.
Aug 20 2010
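Walter's token-stream idea can be sketched with Python's stdlib tokenizer standing in for a D lexer: lex the source once, keep the (type, text) pairs as the compact "intermediate code", and note that the comment stripper mentioned earlier falls out for free, since comments are just tokens you drop. The helper names here are invented for the illustration, and exact spacing is not preserved on reconstruction.

```python
# Sketch: use a lexer's token stream as the shipped "intermediate code".
# Python's stdlib tokenizer stands in for a D lexer; to_token_stream and
# to_source are invented names for this illustration.
import io
import tokenize

SRC = "x = 1  # the answer, minus 41\ny = x + 1\n"

def to_token_stream(src):
    # Lex once and keep only (type, text) pairs; dropping COMMENT
    # tokens gives us the "comment stripper" for free.
    toks = tokenize.generate_tokens(io.StringIO(src).readline)
    return [(t.type, t.string) for t in toks if t.type != tokenize.COMMENT]

def to_source(stream):
    # Rebuild compilable source from the stream (original spacing is
    # not preserved, but the program is unchanged).
    return tokenize.untokenize(stream)

stream = to_token_stream(SRC)
rebuilt = to_source(stream)

ns = {}
exec(rebuilt, ns)        # the rebuilt source still runs the same program
print(ns["x"], ns["y"])  # 1 2
print("#" in rebuilt)    # False: the comment is gone
```

Serializing that list of pairs is the part where the storage format matters, as noted above: a verbose encoding could easily end up larger than the source it replaces.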
prev sibling parent reply SK <sk metrokings.com> writes:
Hi Nick,

On Fri, Aug 20, 2010 at 10:37 PM, Nick Sabalausky <a a.a> wrote:

 poor reason because:

 - There's always reverse engineering.
 - There's always obfuscation.
 - IL may even provide better reverse-engineering results than machine code,
 depending on the IL.
 - All the companies sinking time and money into JS and PHP middleware don't
 seem to have a problem with handing out their source.
 - If someone's gonna steal a product and rebrand it as their own, they don't
 usually need the source, and having it would probably only be of fairly
 small help, if any.
 - As a customer, the idea of spending money on a product that I can't
 service myself if/when the company goes under or loses interest makes me
 nervous. Providing their source would give them a competitive advantage.
 - Even though providing source gets in the way of effective DRM (as if there
 even were such a thing), DRM itself gets in the way of sales.
 - Distributing in source form makes certain things possible that wouldn't
 otherwise be, like virtual template functions (in theory, even if not in
 actual D practice).
Yes, yes and yes - especially about not needing source to be a pirate. But your perspective is not shared by many big companies shipping software I care about. The open source movement has even turned up the contrast in this regard for closed source companies.

Without conducting a thorough Fortune 500 survey, I will assert that shipping source is an emotionally burdened action at the management level, and this roadblock is avoided by "simply" running code through front-end compilation. So, just do that and move on to the next problem.
Aug 20 2010
parent "Nick Sabalausky" <a a.a> writes:
"SK" <sk metrokings.com> wrote in message 
news:mailman.447.1282372871.13841.digitalmars-d puremagic.com...
 Hi Nick,

 On Fri, Aug 20, 2010 at 10:37 PM, Nick Sabalausky <a a.a> wrote:

 poor reason because:

 - There's always reverse engineering.
 - There's always obfuscation.
 - IL may even provide better reverse-engineering results than machine 
 code,
 depending on the IL.
 - All the companies sinking time and money into JS and PHP middleware 
 don't
 seem to have a problem with handing out their source.
 - If someone's gonna steal a product and rebrand it as their own, they 
 don't
 usually need the source, and having it would probably only be of fairly
 small help, if any.
 - As a customer, the idea of spending money on a product that I can't
 service myself if/when the company goes under or loses interest makes me
 nervous. Providing their source would give them a competitive advantage.
 - Even though providing source gets in the way of effective DRM (as if 
 there
 even were such a thing), DRM itself gets in the way of sales.
 - Distributing in source form makes certain things possible that wouldn't
 otherwise be, like virtual template functions (in theory, even if not in
 actual D practice).
Yes, yes and yes - especially about not needing source to be a pirate. But your perspective is not shared by many big companies shipping software I care about. The open source movement has even turned up the contrast in this regard for closed source companies. Without conducting a thorough Fortune 500 survey, I will assert that shipping source is an emotionally burdened action at the management level, and this roadblock is avoided by "simply" running code through front-end compilation. So, just do that and move on to the next problem.
Oh right, I won't deny any of that. And that sort of situation can, unfortunately, create a faulty-but-real need to supply such things. You can't educate an MBA - but you can fleece them ;)
Aug 20 2010
prev sibling parent reply SK <sk metrokings.com> writes:
On Fri, Aug 20, 2010 at 10:11 PM, Walter Bright
<newshound2 digitalmars.com> wrote:
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes. Why doesn't it make sense?
I love open source projects, but off the top of my head here are some reasons that's not a general substitute for TIMI for D:

1) What about closed source software?
2) From-source builds may be more complex or resource-consuming than could be accommodated on the machine the customer used to launch, e.g. a hand-held device.
3) The source may have sizable irrelevant content for a particular product instantiation, compile-time conditionals, etc.

I have no vested interest in the TIMI idea, but it feels good to me. We get platform independence, custom-fit binaries, and require no change to the language. Not to mention it's proven to be a durable model at IBM.
Aug 20 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
SK wrote:
 On Fri, Aug 20, 2010 at 10:11 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes. Why doesn't it make sense?
I love open source projects, but off the top of my head here are some reasons that's not a general substitute for TIMI for D: 1) What about closed source software?
Won't work anyway. Java bytecodes are trivially turned back into source.
 2) From-source builds may be more complex or resource consuming than
 could be accommodated on the machine the customer used to launch, e.g.
 a hand-held device.
I've worked on a Java VM enough to know that won't be a problem.
 3) The source may have sizable irrelevant content for a particular
 product instantiation, compile time conditionals, etc
You can run it through a comment stripper first.
Aug 20 2010
parent reply SK <sk metrokings.com> writes:
On Fri, Aug 20, 2010 at 11:38 PM, Walter Bright
<newshound2 digitalmars.com> wrote:
 SK wrote:
 On Fri, Aug 20, 2010 at 10:11 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes. Why doesn't it make sense?
I love open source projects, but off the top of my head here are some reasons that's not a general substitute for TIMI for D: 1) What about closed source software?
Won't work anyway. Java bytecodes are trivially turned back into source.
IMO, reverse engineering technology is not the issue.
 2) From-source builds may be more complex or resource consuming than
 could be accommodated on the machine the customer used to launch, e.g.
 a hand-held device.
I've worked on a Java VM enough to know that won't be a problem.
Why waste your batteries running deep and complex front-end optimizers that have nothing to do with the target platform?
 3) The source may have sizable irrelevant content for a particular
 product instantiation, compile time conditionals, etc
You can run it through a comment stripper first.
Why stop there? If you have to create some waypoint that isn't really the source and isn't the binary, why not finish off the platform independent lifting in the front end? I am getting zero admission from you that there is any goodness to be found in the TIMI thing. I don't understand that. Do you really just hate the idea through and through?
Aug 21 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"SK" <sk metrokings.com> wrote in message 
news:mailman.448.1282374566.13841.digitalmars-d puremagic.com...
 On Fri, Aug 20, 2010 at 11:38 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 SK wrote:
 I love open source projects, but off the top of my head here are some
 reasons that's not a general substitute for TIMI for D:
 1) What about closed source software?
Won't work anyway. Java bytecodes are trivially turned back into source.
IMO, reverse engineering technology is not the issue.
The *whole point* of closed-source is that the source isn't available. If Java bytecode is trivially turned back into meaningful source, then closed-source Java ain't closed-source anyway.
 2) From-source builds may be more complex or resource consuming than
 could be accommodated on the machine the customer used to launch, e.g.
 a hand-held device.
I've worked on a Java VM enough to know that won't be a problem.
Why waste your batteries running deep and complex front-end optimizers that have nothing to do with the target platform?
The compiler's not going to do any deep analysis of code that's versioned out for a different platform. Just lexing, maybe parsing, and that's it. AIUI, the real battery-eating processing is elsewhere, mainly in stuff that's also going to be done by any decent JIT engine. (Not that there wouldn't be at least *some* saved cycles.)
Aug 21 2010
next sibling parent Don <nospam nospam.com> writes:
Nick Sabalausky wrote:
 "SK" <sk metrokings.com> wrote in message 
 news:mailman.448.1282374566.13841.digitalmars-d puremagic.com...
 On Fri, Aug 20, 2010 at 11:38 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 SK wrote:
 I love open source projects, but off the top of my head here are some
 reasons that's not a general substitute for TIMI for D:
 1) What about closed source software?
Won't work anyway. Java bytecodes are trivially turned back into source.
IMO, reverse engineering technology is not the issue.
The *whole point* of closed-source is that the source isn't available. If Java bytecode is trivially turned back into meaningful source, then closed-source Java ain't closed-source anyway.
 2) From-source builds may be more complex or resource consuming than
 could be accommodated on the machine the customer used to launch, e.g.
 a hand-held device.
I've worked on a Java VM enough to know that won't be a problem.
Why waste your batteries running deep and complex front-end optimizers that have nothing to do with the target platform?
The compiler's not going to do any deep analysis of code that's versioned out for a different platform. Just lexing, maybe parsing, and that's it.
It should do more than that. But...
 AIUI, the real battery-eating processing is elsewhere, mainly in stuff 
 that's also going to be done by any decent JIT engine. (Not that there 
 wouldn't be at least *some* saved cycles.)
this is probably correct: it's the optimisation steps that burn the most cycles. Personally I think the most interesting platform-dependent stuff happens in the front-end. That's the only place you can change algorithms based on the platform, and algorithmic changes are where the big gains are. And most of the advocacy literature for JIT compilation seems oblivious to that fact. You can do some really nice stuff if you compile at install time.
Aug 22 2010
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 21/08/2010 08:22, Nick Sabalausky wrote:
 "SK"<sk metrokings.com>  wrote in message
 news:mailman.448.1282374566.13841.digitalmars-d puremagic.com...
 On Fri, Aug 20, 2010 at 11:38 PM, Walter Bright
 <newshound2 digitalmars.com>  wrote:
 SK wrote:
 I love open source projects, but off the top of my head here are some
 reasons that's not a general substitute for TIMI for D:
 1) What about closed source software?
Won't work anyway. Java bytecodes are trivially turned back into source.
IMO, reverse engineering technology is not the issue.
The *whole point* of closed-source is that the source isn't available. If Java bytecode is trivially turned back into meaningful source, then closed-source Java ain't closed-source anyway.
Obfuscated Java code/bytecode is definitely not turned back into meaningful source.

-- 
Bruno Medeiros - Software Engineer
Oct 01 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
SK wrote:
 I've worked on a Java VM enough to know that won't be a problem.
Why waste your batteries running deep and complex front-end optimizers that have nothing to do with the target platform?
Why, indeed. You can just interpret the D code.
 3) The source may have sizable irrelevant content for a particular
 product instantiation, compile time conditionals, etc
You can run it through a comment stripper first.
Why stop there? If you have to create some waypoint that isn't really the source and isn't the binary, why not finish off the platform independent lifting in the front end? I am getting zero admission from you that there is any goodness to be found in the TIMI thing. I don't understand that. Do you really just hate the idea through and through?
I don't hate it, I'm just not sold on its advantages.
Aug 21 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"SK" <sk metrokings.com> wrote in message 
news:mailman.445.1282371389.13841.digitalmars-d puremagic.com...
 On Fri, Aug 20, 2010 at 10:11 PM, Walter Bright
 <newshound2 digitalmars.com> wrote:
 SK wrote:
 Do you mean to say:
 Instead of shipping the intermediate code, always ship source code.
Yes. Why doesn't it make sense?
I love open source projects, but off the top of my head here are some reasons that's not a general substitute for TIMI for D:
I agree there are some benefits, but I suspect they may be smaller than they seem:
 1) What about closed source software?
True. Although IMO (see my post in another branch of this thread) closed-source is rarely, if ever, beneficial anyway, despite how it's often perceived.
 2) From-source builds may be more complex or resource consuming than
 could be accommodated on the machine the customer used to launch, e.g.
 a hand-held device.
I can't imagine it would be significantly more than something like JIT (or non-JIT interpreted code). Unless it's C++, of course.
 3) The source may have sizable irrelevant content for a particular
 product instantiation, compile time conditionals, etc
- Unless I'm just becoming a dinosaur, sizeable content is rarely code. More likely other binary assets.
- Code can be huffman-compressed with significant size savings and quickly/easily unzipped on-the-fly. Even the GBA has some built-in on-the-fly unzipping ability that was often used in games (for assets though, not code).
- I don't think I've ever seen a cross-platform program that had platform-specific code that made up any more than a small fraction of the total code.

Of course, I'm not saying that using source as IL is universally, undeniably, better period, no matter what, or anything like that. I agree that binary IL has some nice aspects. I'm just not convinced that it's as much of an improvement over source-as-IL as it would initially seem.
Aug 20 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Of course, I'm not saying that using source as IL is universally, 
 undeniably, better period, no matter what, or anything like that. I agree 
 that binary IL has some nice aspects. I'm just not convinced that it's as 
 much of an improvement over source-as-IL as it would initially seem.
If the "source code" consists of tokenized source code, I think it would compare very favorably with byte code in terms of size.
Aug 21 2010
prev sibling parent reply Tomek Sowiński <just ask.me> writes:
On 21-08-2010 at 05:36:28, SK <sk metrokings.com> wrote:

 Then to make this more concrete, what if D had an option to suspend
 compilation after the front-end finished?  The resulting executable
 contains the abstract RTL blobs and the compiler backend, which
 finishes the job for the specific platform on which the executable is
 launched.  The final binary is cached for subsequent launches.  You
 get good machine independence and the approach provides performance
 wins for operations like vectorizing where you don't know in advance
 what kind of SSE support you'll find.

 Hypothetically, why not?

 I dug around and found this:
 http://www.itjungle.com/tfh/tfh082007-printer01.html

 From the article.
Without getting too technical, here's what happens on the OS/400 and i5/OS platform when you create applications, which explains the problem customers ran into in 1995 and which IBM wants them to avoid in 2008. A programmer writes an application in, say, RPG. They run it through a compiler, either using the Original Program Model (OPM) or the Integrated Language Environment (ILE) compilers, and the code compiles so they can run it. Or, rather, that is what it looks like to the programmer. What is really happening is that this application is compiled into an intermediate stage, which some IBMers have called RPG templates (in the case of RPG applications). These templates have a property called observability, which in essence means they are compiled to the TIMI layer. These intermediate templates are then used by the TIMI layer on an actual piece of hardware with a specific processor and instruction set to compile the application to run on that specific processor. TIMI compiles these RPG templates down to actual compiled code behind the scenes the first time an application runs, and because the code was originally compiled to the TIMI layer, there is no need to change the source code. Only the object code changes, which end users never had access to anyway because only TIMI can reach down there. This is the brilliant way that IBM has preserved customers' vast investments in RPG, COBOL, and other applications over the years.
Interesting. Hypothesising, wouldn't it be better if compiling to metal happened already during installation? On the first run the compiler still has to hurry (1st impression counts), but downloading over network is so slow that while the user is watching the progress bar there'd be plenty of time for compiling the intermediate representation and expensive optimizations without even being noticed by the user.

Tomek
Aug 21 2010
parent SK <sk metrokings.com> writes:
2010/8/21 Tomek Sowiński <just ask.me>:
 Interesting. Hypothesising, wouldn't it be better if compiling to metal
 happened already during installation?
Nice idea for hand-helds for sure. In big computing environments, the situation is trickier. For example, a network storage device may contain the intermediate code installation used by many and varied client machines. Each client may load and finish compilation differently over fast local networks. Your idea might still be a meaningful head start to the process even in that case.
Aug 22 2010
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4mt6o$cam$2 digitalmars.com...
 Fri, 20 Aug 2010 17:37:18 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 If you turn off the
 syntax check, Eclipse works just as fast as any native application on a
 modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
Of course it is. You're comparing apples and oranges.
Fer cryin out loud, which one is it? Is Eclipse supposed to be "just as fast as any native application", or is it "of course it's slower"? First you say one, then you say the other.
Aug 20 2010
next sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Friday 20 August 2010 22:11:30 Nick Sabalausky wrote:
 Fer cryin out loud, which one is it? Is Eclipse supposed to be "just as
 fast as any native application", or is it "of course it's slower"? First
 you say one, then you say the other.
LOL. Personally, I have to say that Eclipse is a huge achievement and in many ways is a fantastic application. It's so modular and flexible that it's practically insane. But that comes with a cost, and while I think that what they've done with it is far more doable in Java than in C++, it's bound to be less efficient because it was done in Java.

As far as general application development goes, I think that Eclipse is a very bad example simply because it's so flexible. Most of its features relating to modularity and plugins and such are totally unnecessary in your typical desktop application. I expect that your typical desktop application would do far better performance-wise when written in Java than Eclipse has done. But either because Java isn't generally good enough for application development, or because people think that it isn't, there don't seem to be very many desktop applications which are written in Java. So, it's hard to say.

- Jonathan M Davis
Aug 20 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
news:mailman.444.1282368222.13841.digitalmars-d puremagic.com...
 I expect that your typical desktop application would do far better
 performance-wise when written in Java than Eclipse has done. But either 
 because
 Java isn't generally good enough for application development or because 
 people
 think that it isn't there don't seem to be very many desktop applications 
 which
 are written in Java. So, it's hard to say.
The best C/C++ <-> Java application comparison I can think of off-the-top-of-my-head would be uTorrent and Azureus (That's the actual Azureus, not Vuze - I don't care what the creators claim, Vuze is a *completely* different program.) uTorrent and Azureus are nearly-identical in purpose, features and UI. uTorrent is smooth as silk. Azureus is a bit sluggish (certainly not Eclipse sluggish, but no where near uTorrent). uTorrent is C/C++. Azureus is Java. And just overall, the majority of responsive, non-bloaty software I've used *has* been natively-compiled stuff. The majority of sluggish, bloated software I've used has been some form of interpreted code or VM, such as JVM or .NET. So even if we're comparing apples and oranges, if Farm A makes apples that are usually juicy and sweet, and Farm B makes oranges that usually aren't, I'm going to feel fairly confident in saying that Farm A kicks Farm B's ass.
Aug 20 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Friday 20 August 2010 22:52:37 Nick Sabalausky wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message
 news:mailman.444.1282368222.13841.digitalmars-d puremagic.com...
 
 I expect that your typical desktop application would do far better
 performance-wise when written in Java than Eclipse has done. But either
 because
 Java isn't generally good enough for application development or because
 people
 think that it isn't there don't seem to be very many desktop applications
 which
 are written in Java. So, it's hard to say.
The best C/C++ <-> Java application comparison I can think of off-the-top-of-my-head would be uTorrent and Azureus (That's the actual Azureus, not Vuze - I don't care what the creators claim, Vuze is a *completely* different program.) uTorrent and Azureus are nearly-identical in purpose, features and UI. uTorrent is smooth as silk. Azureus is a bit sluggish (certainly not Eclipse sluggish, but no where near uTorrent). uTorrent is C/C++. Azureus is Java. And just overall, the majority of responsive, non-bloaty software I've used *has* been natively-compiled stuff. The majority of sluggish, bloated software I've used has been some form of interpreted code or VM, such as JVM or .NET. So even if we're comparing apples and oranges, if Farm A makes apples that are usually juicy and sweet, and Farm B makes oranges that usually aren't, I'm going to feel fairly confident in saying that Farm A kicks Farm B's ass.
Those seem to be reasonable comparisons. Of course, you don't choose Java or .NET because it gets you efficient code (though you'd probably like efficient code). You use them for reasons like fast development time and (for Java at least) portability. The gains in maintenance and development time are easily large enough to justify the loss in efficiency on many (perhaps even most) software projects. Of course, there are projects where Java and .NET don't cut it, but they often do. Hopefully D will prove to be a solution with development benefits on par with Java and .NET and efficiency benefits on par with C++. - Jonathan M Davis
Aug 21 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Jonathan M Davis" <jmdavisprog gmail.com> wrote in message 
news:mailman.449.1282374676.13841.digitalmars-d puremagic.com...
 On Friday 20 August 2010 22:52:37 Nick Sabalausky wrote:
 "Jonathan M Davis" <jmdavisprog gmail.com> wrote in message
 news:mailman.444.1282368222.13841.digitalmars-d puremagic.com...

 I expect that your typical desktop application would do far better
 performance-wise when written in Java than Eclipse has done. But either
 because
 Java isn't generally good enough for application development or because
 people
 think that it isn't there don't seem to be very many desktop 
 applications
 which
 are written in Java. So, it's hard to say.
The best C/C++ <-> Java application comparison I can think of off-the-top-of-my-head would be uTorrent and Azureus (That's the actual Azureus, not Vuze - I don't care what the creators claim, Vuze is a *completely* different program.) uTorrent and Azureus are nearly-identical in purpose, features and UI. uTorrent is smooth as silk. Azureus is a bit sluggish (certainly not Eclipse sluggish, but no where near uTorrent). uTorrent is C/C++. Azureus is Java. And just overall, the majority of responsive, non-bloaty software I've used *has* been natively-compiled stuff. The majority of sluggish, bloated software I've used has been some form of interpreted code or VM, such as JVM or .NET. So even if we're comparing apples and oranges, if Farm A makes apples that are usually juicy and sweet, and Farm B makes oranges that usually aren't, I'm going to feel fairly confident in saying that Farm A kicks Farm B's ass.
Those seem to be reasonable comparisons. Of course, you don't choose Java or .NET because it gets you efficient code (though you'd probably like efficient code). You use them for reasons like fast development time and (for Java at least) portability. The gains in maintenance and development time are easily large enough to justify the loss in efficiency on many (perhaps even most) software projects. Of course, there are projects where Java and .NET don't cut it, but they often do. Hopefully D will prove to be a solution with development benefits on par with Java and .NET and efficiency benefits on par with C++.
Yup. Good points. If it weren't for D, and I had to use C or C++ for native-code apps, I wouldn't hesitate to use a .NET or JVM language for whatever projects I could.
Aug 21 2010
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Jonathan M Davis (jmdavisprog gmail.com)'s article
 Hopefully D will prove to be a solution with development benefits
 on par with Java and .NET and efficiency benefits on par with C++.
 - Jonathan M Davis
If "on par" with Java and .NET is all we're aiming for, then we've really set the bar low. I think that D's metaprogramming/generic system will set the bar much higher once D gets more library and tool support. I've thought about how I would implement an API with comparable ease of use and flexibility for some libraries I've written or used in D, using Java, and I don't think it's possible. The only other languages where I think a similarly easy to use API would be implementable are the (really slow) dynamic languages like Python and Ruby. For an example, try implementing an API comparable to std.range in flexibility and ease of use in any statically typed, efficient language besides D.
Aug 21 2010
parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Saturday 21 August 2010 08:03:18 dsimcha wrote:
 == Quote from Jonathan M Davis (jmdavisprog gmail.com)'s article
 
 Hopefully D will prove to be a solution with development benefits
 on par with Java and .NET and efficiency benefits on par with C++.
 - Jonathan M Davis
If "on par" with Java and .NET is all we're aiming for, then we've really set the bar low. I think that D's metaprogramming/generic system will set the bar much higher once D gets more library and tool support. I've thought about how I would implement an API with comparable ease of use and flexibility for some libraries I've written or used in D, using Java, and I don't think it's possible. The only other languages where I think a similarly easy to use API would be implementable are the (really slow) dynamic languages like Python and Ruby. For an example, try implementing an API comparable to std.range in flexibility and ease of use in any statically typed, efficient language besides D.
I think that it's always good to do better. However, the fact remains that generally in a language, you either get power and efficiency, or you get ease of use and speed of development. C++ tends to be powerful and efficient but not particularly easy to use or maintain. Java and C# are nowhere near as powerful as C++, but they avoid a lot of its problems, which makes them far easier to use and maintain, so development goes faster. D is looking to hit both of those camps. If it manages the efficiency of C++ and the ease of use and speed of development of Java and C#, all the better. However, I am by no means saying that we should strive to be "only" as good as Java and C# in ease of use and speed of development. Any gains that we can have over them would be fantastic. In particular, if D were to be considered much better in those areas by enough people, then it would likely really catch on - not to mention, as users of D, we want it to be as good as possible. But still, being able to consistently match C++ at what it's good at and Java and C# at what they're good at in one language is big, and I really don't think that we're there yet. I don't know how efficient we are in comparison to C++, but I expect that there are a number of areas which need improvement (things like inlining, the garbage collector, etc.) if we want the average D program to match the average C++ program for efficiency. And we definitely don't match Java and C# for ease of use and maintainability at this point, but most of that is simply an issue of libraries and tools, both of which are being worked on. So, we're getting there, but I don't think that we're there yet. And certainly, once we do get there, there's no reason to stay only "on par" with them. We should always be looking to improve D and its libraries and tools.

- Jonathan M Davis
Aug 21 2010
parent reply retard <re tard.com.invalid> writes:
Sat, 21 Aug 2010 18:31:16 -0700, Jonathan M Davis wrote:

 But still, being able to consistently match C++ at what it's good at and
 Java and C# at what they're good at in one language is big, and I really
 don't think that we're there yet. I don't know how efficient we are in
 comparison to C++, but I expect that there are a number of areas which
 need improvement (things like inlining, the garbage collector, etc.) if
 we want the average D program to match the average C++ program for
 efficiency. And we definitely don't match Java and C# for ease of use
 and maintainability at this point, but most of that is simply an issue
 of libraries and tools, both of which are being worked on. So, we're
 getting there, but I don't think that we're there yet. And certainly,
 once we do get there, there's no reason to stay only "on par" with them.
 We should always be looking to improve D and its libraries and tools.
Should D be more academic, like BitC, Factor, and Ur/Web?
Aug 22 2010
next sibling parent Justin Johansson <no spam.com> writes:
On 22/08/10 21:01, retard wrote:
 Sat, 21 Aug 2010 18:31:16 -0700, Jonathan M Davis wrote:

 But still, being able to consistently match C++ at what it's good at and
 Java and C# at what they're good at in one language is big, and I really
 don't think that we're there yet. I don't know how efficient we are in
 comparison to C++, but I expect that there are a number of areas which
 need improvement (things like inlining, the garbage collector, etc.) if
 we want the average D program to match the average C++ program for
 efficiency. And we definitely don't match Java and C# for ease of use
 and maintainability at this point, but most of that is simply an issue
 of libraries and tools, both of which are being worked on. So, we're
 getting there, but I don't think that we're there yet. And certainly,
 once we do get there, there's no reason to stay only "on par" with them.
 We should always be looking to improve D and its libraries and tools.
Should D be more academic, like BitC, Factor, and Ur/Web?
No, of course not, well, uhhm, at least going by the blurb at http://www.digitalmars.com/d/index.html where it says "It [D] is not governed by a corporate agenda or any overarching theory of programming. The needs and contributions of the D programming community form the direction it goes." Where is it written that the design of a PL should be governed by an overarching theory of programming (and/or a corporate agenda)? Axioms and logic and all that formal stuff are just so anti-democratic. Ad hoc PL design by newsgroup is so much more consensual and politically correct in this modern age of equity and diversity. :-)
Aug 22 2010
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4r1pk$rm8$1 digitalmars.com...
 Sat, 21 Aug 2010 18:31:16 -0700, Jonathan M Davis wrote:

 But still, being able to consistently match C++ at what it's good at and
 Java and C# at what they're good at in one language is big, and I really
 don't think that we're there yet. I don't know how efficient we are in
 comparison to C++, but I expect that there are a number of areas which
 need improvement (things like inlining, the garbage collector, etc.) if
 we want the average D program to match the average C++ program for
 efficiency. And we definitely don't match Java and C# for ease of use
 and maintainability at this point, but most of that is simply an issue
 of libraries and tools, both of which are being worked on. So, we're
 getting there, but I don't think that we're there yet. And certainly,
 once we do get there, there's no reason to stay only "on par" with them.
 We should always be looking to improve D and its libraries and tools.
Should D be more academic, like BitC, Factor, and Ur/Web?
"More academic"? No, D tries to stick to goals that are actually *good* ones ;) That's kinda like asking if it should try to be more obfuscated than brainfuck, or try to have less enterprise-level library support than Logo.
Aug 22 2010
prev sibling parent retard <re tard.com.invalid> writes:
Sat, 21 Aug 2010 01:11:30 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4mt6o$cam$2 digitalmars.com...
 Fri, 20 Aug 2010 17:37:18 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 If you turn off the
 syntax check, Eclipse works just as fast as any native application on
 a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
Of course it is. You're comparing apples and oranges.
Fer cryin out loud, which one is it? Is Eclipse supposed to be "just as fast as any native application", or is it "of course it's slower"? First you say one, then you say the other.
I meant to say that Eclipse (perceivably) works just as fast as any (feature-wise equivalent) native application on a modern desktop. I recommended turning off some of the functionality to make it behave more like C::B and similar lightweight applications. The idea behind desktop Java applications is that since the application would run 100..100000 times faster than it needs to on a modern PC, a 2..100 times slower Java version is totally acceptable. Heck, I even use applications (with plugins) written in Ruby. Ruby has a much worse VM that does not even do JIT compilation.

To summarize:

- the JIT causes horrible interactive performance right after startup (use of the client VM might help a bit; Oracle might also be working on a hybrid client/server VM)
- the JIT and the GC cause noticeable delays, but only occasionally (use of the new low-latency GC and tuning the VM options might help here)
- the execution performance IS worse, but the perceivable performance is often acceptable
- application platforms such as Eclipse or Netbeans provide a very high level of flexibility, portability, and memory safety
Aug 21 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:i4mt6o$cam$2 digitalmars.com...
 When comparing, you also need to understand how the JIT compiler works.
 Using the Sun server VM, you need to visit each menu and use each visual
 component a few times to make the VM recompile the code. After few hours
 of use everything should run as fast as it can. Upgrading the JVM might
 also help because there are more JIT optimizations available.
If I have to use a program written in Language-X for awhile before it stops being slow, then I'm going to feel perfectly justified in calling Language-X a slow language.
Aug 20 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 If I have to use a program written in Language-X for awhile before it stops 
 being slow, then I'm going to feel perfectly justified in calling Language-X 
 a slow language.
Also consider that Java really doesn't give you much to work with if you want to take hand-tuning past a certain point.
Aug 20 2010
parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i4nqnk$19fd$2 digitalmars.com...
 Nick Sabalausky wrote:
 If I have to use a program written in Language-X for awhile before it 
 stops being slow, then I'm going to feel perfectly justified in calling 
 Language-X a slow language.
Also consider that Java really doesn't give you much to work with if you want to take hand-tuning past a certain point.
Absolutely. In fact, that's why I take issue with all those old Java benchmarks that would compare Java code to *equivalent* C/C++ code (allegedly for the sake of a fair apples-to-apples comparison): The C/C++ code can be further optimized, the Java can't. With .NET, you maybe can optimize to a certain extent, just because at least it *allows* pointers (although the type system gets in the way a lot - I once tried to convert a buffer to a different type in C# and spent hours trying to figure out how to do it without any copying/allocation/runtime-reflection before finally concluding "If it's possible, I no longer care how". In C/C++/D, I can do it with just a simple cast).
Aug 21 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>  wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is coded
 in C++ and has  a few dozen plugins installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!" I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how Eclipse takes this time to load, or how Eclipse is slow when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks.

The days when JDT would be the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (These add massive bloat.) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance.

For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noted, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT but Mylyn, so I uninstalled it, and now things are back to normal. (There might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother.)

It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care anything about that: http://www.digitalmars.com/d/archives/digitalmars/D/Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346

(except when I'm doing PDE development, but that's a different thing)

-- 
Bruno Medeiros - Software Engineer
Oct 01 2010
parent reply retard <re tard.com.invalid> writes:
Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:

 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>  wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how Eclipse takes this time to load, or how Eclipse is slow when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks.

The days when JDT would be the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (These add massive bloat.) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance.

For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noted, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT but Mylyn, so I uninstalled it, and now things are back to normal. (There might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother.)

It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care anything about that: http://www.digitalmars.com/d/archives/digitalmars/D/Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project.

Has the original complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/MathCad/Mathematica, or some other Real World Programs (tm)? These are all fucking slow. That's how it is: if you need to get the job done, you must use slow programs.

My hardware specs: Core i7, 24 GB of DDR3 RAM, Sun Java 7, x86-64 Linux 2.6 (a middle range home PC with some extra memory, that is)
Oct 02 2010
next sibling parent reply Don <nospam nospam.com> writes:
retard wrote:
 Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:
 
 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>  wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how Eclipse takes this time to load, or how Eclipse is slow when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks.

The days when JDT would be the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (These add massive bloat.) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance.

For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noted, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT but Mylyn, so I uninstalled it, and now things are back to normal. (There might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother.)

It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care anything about that: http://www.digitalmars.com/d/archives/digitalmars/D/Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project.
That's good news. Sounds as though they've fixed the startup performance bug.

 Has the original
 complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/
 MathCad/Mathematica, or some other Real World Programs (tm)? These are 
 all fucking slow. That's how it is: If you need to get the job done, you 
 must use slow programs.
That original poster was me. Yes, I've used all of those programs (though not a recent version of CorelDraw). The startup time was 80 seconds, on the most minimal standard Eclipse setup I could find. MSVC was 3 seconds on the same system. I had expected the times to be roughly comparable. There was just something sloppy in Eclipse's startup code.
Oct 02 2010
next sibling parent reply retard <re tard.com.invalid> writes:
Sat, 02 Oct 2010 18:21:53 +0200, Don wrote:

 retard wrote:
 Back then the unhappy user was using a 1 GHz Pentium M notebook. I
 tried this again. Guess what, the latest Eclipse Helios (3.6.1) took
 3.5 (!!!) seconds to start up the whole Java workspace, open a few
 projects and fully initialize the editors etc for the most active
 project.
That's good news. Sounds as though they've fixed the startup performance bug.
I meant that computers become more efficient. I've upgraded my system twice since this discussion last appeared here. If you wait 18 months, the 20 seconds becomes 10 seconds; in 36 months, 5 seconds. It's Moore's law, you know.
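Taken at face value, the Moore's-law shorthand above is just a halving series: a fixed workload's running time halves every 18 months. A minimal sketch of that arithmetic (the `projected_startup` helper name is mine, not from the thread; Python used for brevity):

```python
# Hypothetical helper illustrating the post's rule of thumb: hardware
# performance doubles every 18 months, so a fixed workload's time halves.
def projected_startup(seconds_now: float, months: float) -> float:
    """Projected startup time after `months` of hardware progress."""
    return seconds_now / 2 ** (months / 18)

# The 20-second Eclipse startup quoted earlier in the thread:
print(projected_startup(20, 18))  # 10.0
print(projected_startup(20, 36))  # 5.0
```

Of course, this assumes the software itself doesn't bloat in the meantime, which is exactly what the follow-up posts dispute.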
 
 Has the original
 complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/
 MathCad/Mathematica, or some other Real World Programs (tm)? These are
 all fucking slow. That's how it is: If you need to get the job done,
 you must use slow programs.
That original poster was me. Yes, I've used all of those programs (though not a recent version of CorelDraw). The startup time was 80 seconds, on the most minimal standard Eclipse setup I could find. MSVC was 3 seconds on the same system. I had expected the times to be roughly comparable.
How long does it take to start up all those programs on your notebook? 15 minutes? I don't even consider Eclipse bloated compared to *these* applications.
 
 There was just something sloppy in Eclipse's startup code.
I don't recommend running Eclipse on any machine with less than 1 GB of RAM. It's a well-known fact that Java programs require twice as much memory due to garbage collection. Also, Eclipse is a rather complex framework. Luckily *all* systems, even the cheapest $100 netbooks, have 1 GB of RAM!
Oct 02 2010
parent reply Don <nospam nospam.com> writes:
retard wrote:
 Sat, 02 Oct 2010 18:21:53 +0200, Don wrote:
 
 retard wrote:
 Back then the unhappy user was using a 1 GHz Pentium M notebook. I
 tried this again. Guess what, the latest Eclipse Helios (3.6.1) took
 3.5 (!!!) seconds to start up the whole Java workspace, open few
 projects and fully initialize the editors etc for the most active
 project.
That's good news. Sounds as though they've fixed the startup performance bug.
I meant that computers become more efficient. I've upgraded my system twice since this discussion last appeared here. If you wait 18 months, the 20 seconds becomes 10 seconds; in 36 months, 5 seconds. It's Moore's law, you know.
Sadly, software seems to be bloating at a rate which is faster than Moore's law. Part of my original post noted that it was even slower than the time my old 1 MHz Commodore 64 took to boot my development environment from a cassette tape! So I still take it as a good sign that the rate of bloating is slower than Moore's law.
 Has the original
 complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/
 MathCad/Mathematica, or some other Real World Programs (tm)? These are
 all fucking slow. That's how it is: If you need to get the job done,
 you must use slow programs.
That original poster was me. Yes, I've used all of those programs (though not a recent version of CorelDraw). The startup time was 80 seconds, on the most minimal standard Eclipse setup I could find. MSVC was 3 seconds on the same system. I had expected the times to be roughly comparable.
How long does it take to start up all those programs on your notebook? 15 minutes? I don't even consider Eclipse bloated compared to *these* applications.
Don't remember. I too have upgraded since then. I can say, though, that Eclipse was the worst I experienced (the others you mentioned were, I think, more in the 30-second range). Mind you, I never ran Labview on it. Labview would probably have been worse.
 There was just something sloppy in Eclipse's startup code.
I don't recommend running Eclipse on any machine with less than 1 GB of RAM. It's a well-known fact that Java programs require twice as much memory due to garbage collection. Also, Eclipse is a rather complex framework. Luckily *all* systems, even the cheapest $100 netbooks, have 1 GB of RAM!
My laptop had 1GB, so I'm not sure we can blame that. Eclipse was perfectly fine once it had loaded. It was only the startup which was slow.
Oct 02 2010
next sibling parent retard <re tard.com.invalid> writes:
Sat, 02 Oct 2010 21:21:23 +0200, Don wrote:

 retard wrote:
 Sat, 02 Oct 2010 18:21:53 +0200, Don wrote:
 There was just something sloppy in Eclipse's startup code.
I don't recommend running Eclipse on any machine with less than 1 GB of RAM. It's a well-known fact that Java programs require twice as much memory due to garbage collection. Also, Eclipse is a rather complex framework. Luckily *all* systems, even the cheapest $100 netbooks, have 1 GB of RAM!
My laptop had 1GB, so I'm not sure we can blame that. Eclipse was perfectly fine once it had loaded. It was only the startup which was slow.
Another thing that comes to mind is that the file system might have been badly fragmented and/or Eclipse loaded a boatload of dependencies at startup from a slow 2.5" disk. Many laptops have slow 5400 rpm disks with really bad I/O performance. For example, my old IBM T23 or something like that managed to read files at 8 MB/s from the C: drive. That's just terrible compared to modern SATA2 hard drives with 64 MB of cache (SSDs are even better -- a 64 GB 2.5" SSD drive costs about as much as a 3.5" SATA2 1.5 TB spinning disk). I can easily read data at 90 MB/s, even when the file system is quite full.

If you need to read 120 MB of data (the size of the latest Eclipse), there is simply no way to read it faster than in 15 seconds using the old disk.
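The 15-second figure follows directly from the throughput numbers in the post; a back-of-the-envelope sketch (`load_seconds` is an illustrative helper, not anything from the thread):

```python
# Sequential-read time for the figures quoted above: ~120 MB of Eclipse
# files at 8 MB/s (old laptop disk) vs ~90 MB/s (modern SATA2 drive).
def load_seconds(megabytes: float, mb_per_sec: float) -> float:
    return megabytes / mb_per_sec

print(load_seconds(120, 8))   # 15.0 -- the "no faster than 15 seconds" claim
print(load_seconds(120, 90))  # ~1.3 -- the same data on a modern drive
```

So on that hardware, disk throughput alone accounts for most of the complained-about startup time before Eclipse runs a single line of Java.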
Oct 02 2010
prev sibling next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Sat, 2010-10-02 at 21:21 +0200, Don wrote:
 retard wrote:
[ . . . ]
 I meant that computers become more efficient. I've upgraded my system two
 times since this discussion last appeared here. If you wait 18 months, the
 20 seconds becomes 10 seconds, in 36 months 5 seconds. It's Moore's law,
 you know.

 Sadly, software seems to be bloating at a rate which is faster than
 Moore's law. Part of my original post noted that it was even slower than
 the time my old 1 MHz Commodore 64 took to boot my development environment
 from a cassette tape! So I still take it as a good sign that the rate of
 bloating is slower than Moore's law.
Faster processor speeds in the period 1950--2005 were not actually anything to do with Moore's Law per se -- Moore's Law was about the number of transistors per chip, not the speed of operation of those transistors. Since around 2005 processor speeds have stopped increasing due to the inability to deal with the heat generation. Instead Moore's Law (which for the moment still applies) is leading to more and more cores per chip, all running at the same speed as previously -- around 2 GHz.

So the ability to improve performance of code by just waiting and buying new kit is over -- at least for now. If you do not turn your serial code into parallel code there will be no mechanism for improving performance of that code. A bit sad for inherently serial algorithms.

-- 
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
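Russel's point, that the extra cores Moore's law now buys only help code that has been parallelized, can be sketched minimally (workload and names are purely illustrative, in Python rather than D for brevity):

```python
# An embarrassingly parallel workload can be spread over extra cores;
# an inherently serial dependency chain cannot.
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    # Independent chunk of work: sum of squares below n.
    return sum(i * i for i in range(n))

def run_serial(chunks):
    # One core, one chunk after another.
    return sum(work(n) for n in chunks)

def run_parallel(chunks):
    # The same chunks fanned out across worker processes (one per core).
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(work, chunks))

if __name__ == "__main__":
    chunks = [200_000] * 8
    assert run_serial(chunks) == run_parallel(chunks)
    print("same answer; only the parallel version scales with core count")
```

The parallel version's wall-clock time shrinks as cores are added; the serial version's does not, which is exactly the "free lunch is over" situation described above.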
Oct 03 2010
next sibling parent Justin Johansson <no spam.com> writes:
On 3/10/2010 8:21 PM, Russel Winder wrote:
 On Sat, 2010-10-02 at 21:21 +0200, Don wrote:
 retard wrote:
[ . . . ]
 I meant that computers become more efficient. I've upgraded my system two
 times since this discussion last appeared here. If you wait 18 months, the
 20 seconds becomes 10 seconds, in 36 months 5 seconds. It's Moore's
 law, you know.
Sadly, software seems to be bloating at a rate which is faster than Moore's law. Part of my original post noted that it was even slower than the time my old 1 MHz Commodore 64 took to boot my development environment from a cassette tape! So I still take it as a good sign that the rate of bloating is slower than Moore's law.
Faster processor speeds in the period 1950--2005 was not actually anything to do with Moore's Law per se -- Moore's Law was about the number of transistors per chip, not the speed of operation of those transistors. Since around 2005 processor speeds have stopped increasing due to inability to deal with the heat generation. Instead Moore's Law (which for the moment still applies) is leading to more and more cores per chip all running at the same speed as previously -- around 2GHz. So the ability to improve performance of code by just waiting and buying new kit is over -- at least for now. If you do not turn your serial code into parallel code there will be no mechanism for improving performance of that code. A bit sad for inherently serial algorithms.
And yes, my observation is that it is not often possible to buy new kit (aka a PC) with more MIPS and chips (CPUs) that runs the O/S and applications faster than the veteran unit, perhaps apart from graphics acceleration.

Why is it that Moore's Law does not seem to make for a better user experience as time goes by? Conspiracy theory: seems to me that there is a middle man on the take all the time. :-)

Cheers
Justin Johansson
Oct 03 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Russel Winder wrote:
 So the ability to improve performance of code by just waiting and buying
 new kit is over -- at least for now.  If you do not turn your serial
 code into parallel code there will be no mechanism for improving
 performance of that code.  A bit sad for inherently serial algorithms.
I'm not sad at all. I enjoy optimizing code for performance, and working on optimizers.
Oct 07 2010
parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Fri, 08 Oct 2010 06:55:12 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Russel Winder wrote:
 So the ability to improve performance of code by just waiting and buying
 new kit is over -- at least for now.  If you do not turn your serial
 code into parallel code there will be no mechanism for improving
 performance of that code.  A bit sad for inherently serial algorithms.
I'm not sad at all. I enjoy optimizing code for performance, and working on optimizers.
This just in: Compiler writer actually enjoys inherently serial algorithms!

-- 
Simen
Oct 08 2010
prev sibling parent Mafi <mafi example.org> writes:
Am 02.10.2010 21:21, schrieb Don:
 Sadly, software seems to be bloating at a rate which is faster than
 Moore's law.
That's Wirth's law. See http://en.wikipedia.org/wiki/Wirth%27s_law .

Mafi
Oct 05 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 02/10/2010 17:21, Don wrote:
 retard wrote:
 Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:

 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid> wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has a few dozen plugins installed runs in an
 instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how Eclipse takes this time to load, or how Eclipse is slow when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks.

The days when JDT would be the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (These add massive bloat.) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance.

For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noted, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT but Mylyn, so I uninstalled it, and now things are back to normal. (There might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother.)

It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care anything about that: http://www.digitalmars.com/d/archives/digitalmars/D/Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project.
That's good news. Sounds as though they've fixed the startup performance bug.
Not necessarily. Again, you need to consider what is installed and loading in Eclipse. In retard's scenario he was loading a Java workspace (JDT), whereas your original post was a C++ one (with CDT, I'm guessing). Even disregarding the PC specs, it's comparing apples to oranges.

Eclipse is not like Firefox, where the main platform is 90% of the code/functionality and the plugins are only like 5-10%.
 Has the original
 complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/
 MathCad/Mathematica, or some other Real World Programs (tm)? These are
 all fucking slow. That's how it is: If you need to get the job done,
 you must use slow programs.
That original poster was me. Yes, I've used all of those programs (though not a recent version of CorelDraw). The startup time was 80 seconds, on the most minimal standard Eclipse setup I could find. MSVC was 3 seconds on the same system. I had expected the times to be roughly comparable. There was just something sloppy in Eclipse's startup code.
Any Eclipse IDE configuration/distribution is likely never going to start fast, at least not as fast as comparable native IDEs like MS Visual Studio. Still, I do think that 80 seconds sounds excessive, even for those computer specs. But it's likely a CDT issue, not an Eclipse one.

Again, this distinction has to be considered; you can't just say "There was just something sloppy in Eclipse's startup code" or "Eclipse developers don't care about performance issues _at all_". The projects that come bundled in official Eclipse distributions are not only separate projects (JDT, CDT, WTP, PDT, Mylyn, etc.), like I mentioned in my original post, but they are made by completely separate teams, most of them from different companies (even for the more popular Eclipse projects).

-- 
Bruno Medeiros - Software Engineer
Oct 05 2010
parent Jacob Carlborg <doob me.com> writes:
On 2010-10-05 17:04, Bruno Medeiros wrote:
 There was just something sloppy in Eclipse's startup code.
Any Eclipse IDE configuration/distribution is likely never going to start fast, at least not as fast as comparable native IDEs like MS Visual Studio. Still, I do think that 80 seconds sounds excessive, even for those computer specs. But it's likely a CDT issue, not an Eclipse one.

Again, this distinction has to be considered; you can't just say "There was just something sloppy in Eclipse's startup code" or "Eclipse developers don't care about performance issues _at all_". The projects that come bundled in official Eclipse distributions are not only separate projects (JDT, CDT, WTP, PDT, Mylyn, etc.), like I mentioned in my original post, but they are made by completely separate teams, most of them from different companies (even for the more popular Eclipse projects).
I noticed quite a significant boost in the startup time for Eclipse when I updated to 3.6. I'm using Eclipse Classic with the Descent plugin.

-- 
/Jacob Carlborg
Oct 06 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 02/10/2010 15:13, retard wrote:
 Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:

 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>   wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how Eclipse takes this time to load, or how Eclipse is slow when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks.

The days when JDT would be the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (These add massive bloat.) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance.

For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noted, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT but Mylyn, so I uninstalled it, and now things are back to normal. (There might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother.)

It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care anything about that: http://www.digitalmars.com/d/archives/digitalmars/D/Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project.

Has the original complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/MathCad/Mathematica, or some other Real World Programs (tm)? These are all fucking slow. That's how it is: if you need to get the job done, you must use slow programs.
I'm sure that the people who downright refuse to use Eclipse because it loads too slowly use some other program for media development. Maybe it's MS Paint (or a Linux equivalent) because it loads so fast! Or maybe it's a vi/emacs plugin for image manipulation or 3D modelling. ;)

-- 
Bruno Medeiros - Software Engineer
Oct 05 2010
parent reply retard <re tard.com.invalid> writes:
Tue, 05 Oct 2010 15:49:59 +0100, Bruno Medeiros wrote:

 On 02/10/2010 15:13, retard wrote:
 Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:

 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>   wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I
 haven't encountered any, but maybe that's just because I didn't try
 them all out. Eclipse takes at least 20 seconds to load on startup
 on my quad core, that's not very fast. On the other hand,
 CodeBlocks which is coded in C++ and has  a few dozen plugins
 installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!"

I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how long Eclipse takes to load, or how slow Eclipse is when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks. The days when JDT was the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (these add massive bloat) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance. For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noticed, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT, but Mylyn, so I uninstalled it, and now things are back to normal. (there might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother) It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care about that at all: http://www.digitalmars.com/d/archives/digitalmars/D/
Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project. Has the original complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/MathCad/Mathematica, or some other Real World Programs (tm)? These are all fucking slow. That's how it is: if you need to get the job done, you must use slow programs.
I'm sure that the people who downright refuse to use Eclipse because it loads too slow use some other program for media development. Maybe it's MS Paint (or a Linux equivalent) because it loads so fast! Or maybe it's a vi/emacs plugin for image manipulation or 3D modelling. ;)
Well, Don mentioned that he had used ALL of those programs, and since no complaints about the slow loading times of those programs were mentioned, I assume all of them (the latest versions, of course) start in less than 3.5 seconds. I pondered this a bit and am now willing to buy Don's magic computer. I really do need a laptop that can launch those applications in less than 3.5 seconds.
Oct 05 2010
next sibling parent Juanjo Alvarez <fake fakeemail.com> writes:
On Tue, 5 Oct 2010 17:59:26 +0000 (UTC), retard <re tard.com.invalid> 
wrote:
 I assume all of them (the latest versions, of course) start in less 
than
 3.5 seconds. I pondered this a bit and am now willing to buy Don's 
magic
 computer. I really do have need for a laptop that can launch those 
 applications in less than 3.5 seconds.
Yes, my 2.40 GHz i5 with 6 GB of RAM takes about ten seconds to open 64-bit Eclipse; I need to get one of those SSDs.
Oct 05 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 05/10/2010 18:59, retard wrote:
 Tue, 05 Oct 2010 15:49:59 +0100, Bruno Medeiros wrote:

 On 02/10/2010 15:13, retard wrote:
 Fri, 01 Oct 2010 14:53:04 +0100, Bruno Medeiros wrote:

 On 20/08/2010 22:37, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>    wrote in message
 news:i4mrss$cam$1 digitalmars.com...
 Fri, 20 Aug 2010 19:04:41 +0200, Andrej Mitrovic wrote:

 What are these Java programs for the desktop that run fast? I
 haven't encountered any, but maybe that's just because I didn't try
 them all out. Eclipse takes at least 20 seconds to load on startup
 on my quad core, that's not very fast. On the other hand,
 CodeBlocks which is coded in C++ and has  a few dozen plugins
 installed runs in an instant.
Now that's a fair comparison! "Crysis runs so slowly but a hello world written in Go is SO fast. This must prove that Go is much faster than C++!" I think CodeBlocks is one of the most lightweight IDEs out there. Does it even have full semantic autocompletion? Eclipse, on the other hand, comes with almost everything you can imagine. If you turn off the syntax check, Eclipse works just as fast as any native application on a modern desktop.
I've tried eclipse with the fancy stuff off, and it's still slower than C::B or PN2 for me.
All these comments about how long Eclipse takes to load, or how slow Eclipse is when used, etc., are really meaningless unless you tell us something about what actual plugins and features are installed and used. Unlike CodeBlocks, which is "a free C++ IDE", Eclipse proper is the Eclipse Platform, which is a platform (duh) and doesn't do anything useful by itself. Particularly since there is not even a standard/single "Eclipse" download: http://www.eclipse.org/downloads/ , unlike CodeBlocks. The days when JDT was the main thing 95% of Eclipse users would use are long gone. So are you using JDT, CDT, Descent, something else? If JDT, do you have extra tools, like the J2EE Web Tools? (these add massive bloat) What about source control plugins, or plugins not provided by the Eclipse Foundation, etc.? All of these are wildcards that can affect performance. For example, I definitely notice that sometimes my workspace chokes when I do certain SVN or file-related operations (with Subclipse btw, not Subversive). I also noticed, when Eclipse 3.6 came out, some sluggishness when working with JDT, even when just typing code (in this case it was very subtle, almost imperceptible, but I still felt it and it was quite annoying). I suspected not JDT, but Mylyn, so I uninstalled it, and now things are back to normal. (there might be a fix or workaround for that issue in Mylyn, but since I don't use it, I didn't bother) It would definitely be quite annoying if Eclipse was not responsive for the vast majority of coding tasks. As for startup time, I hardly care about that at all: http://www.digitalmars.com/d/archives/digitalmars/D/
Re_Eclipse_startup_time_Was_questions_on_PhanTango_merger_was_Merging_Tangobos_into_Tango_60160.html#N60346
 (except when I'm doing PDE development, but that's a different thing)
Back then the unhappy user was using a 1 GHz Pentium M notebook. I tried this again. Guess what, the latest Eclipse Helios (3.6.1) took 3.5 (!!!) seconds to start up the whole Java workspace, open a few projects and fully initialize the editors etc. for the most active project. Has the original complainer ever used Photoshop, CorelDraw, AutoCad, Maya/3DSMax, Maple/MathCad/Mathematica, or some other Real World Programs (tm)? These are all fucking slow. That's how it is: if you need to get the job done, you must use slow programs.
I'm sure that the people who downright refuse to use Eclipse because it loads too slow use some other program for media development. Maybe it's MS Paint (or a Linux equivalent) because it loads so fast! Or maybe it's a vi/emacs plugin for image manipulation or 3D modelling. ;)
Well Don mentioned that he had used ALL of those programs and since no complaints about the slow loading times of those programs were mentioned, I assume all of them (the latest versions, of course) start in less than 3.5 seconds. I pondered this a bit and am now willing to buy Don's magic computer. I really do have need for a laptop that can launch those applications in less than 3.5 seconds.
Note: I wasn't talking about Don. Even though he mentioned those startup issues, I don't think Don is a person who "downright refuse[s] to use Eclipse because it loads too slow". One thing is complaining, the other is actually deciding not to use it, and yes, there are some people in the latter camp, people who don't want to use an IDE if it doesn't load in less than X seconds (where X is between 1-5 seconds). -- Bruno Medeiros - Software Engineer
Oct 06 2010
parent retard <re tard.com.invalid> writes:
Wed, 06 Oct 2010 14:26:18 +0100, Bruno Medeiros wrote:

 Note: I wasn't talking about Don. Even though he mentioned those startup
 issues, I don't think Don is a person who "downright refuse[s] to use
 Eclipse because it loads too slow".
 One thing is complaining, the other thing is actually deciding not to
 use, and yes, there are some people in the later camp, people who don't
 want to use an IDE if it doesn't load in less than X seconds (where X is
 between 1-5 seconds).
Roger that. I only wanted to point out that Eclipse isn't the only "professional" tool that takes "forever" to launch. For instance all three major Java IDEs (IDEA, Netbeans, Eclipse JDT) have very similar loading times. Those audio/video/cad/math tools also feel very bloated, but I'm sure there are good reasons for that. They're just packed with a huge set of features these days.
Oct 06 2010
prev sibling next sibling parent reply Jonathan M Davis <jmdavisprog gmail.com> writes:
On Friday, August 20, 2010 10:04:41 Andrej Mitrovic wrote:
 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
There's plenty of Java code which runs just as fast or faster than comparable C++ code. I've seen it. However, you're generally talking about small, computation-intensive programs. When you're talking about full-blown desktop applications, there's so much more going on than math operations that the game is entirely different. I'm not sure that there are any full-blown desktop Java applications which are all that efficient in comparison to similar C++ applications. They're definitely usable, but not necessarily as efficient. Of course, the difference between Eclipse and CodeBlocks could be entirely a design issue rather than a language one. Once you're dealing with anything that complex, accurately comparing apps can be difficult. It could just as easily be that Eclipse's design isn't as efficient - be it due to features that it has which CodeBlocks doesn't, or poor design, or whatever. - Jonathan M Davis
Aug 20 2010
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Jonathan M Davis (jmdavisprog gmail.com)'s article
 On Friday, August 20, 2010 10:04:41 Andrej Mitrovic wrote:
 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has  a few dozen plugins installed runs in an
 instant.
There's plenty of Java code which runs just as fast or faster than comparable C++ code. I've seen it. However, you're generally talking about small, computation-intensive programs. When you're talking about full-blown desktop applications, there's so much more going on than math operations that the game is entirely different. I'm not sure that there are any full-blown desktop Java applications which are all that efficient in comparison to similar C++ applications. They're definitely usable, but not necessarily as efficient. Of course, the difference between Eclipse and CodeBlocks could be entirely a design issue rather than a language one. Once you're dealing with anything that complex, accurately comparing apps can be difficult. It could just as easily be that Eclipse's design isn't as efficient - be it due to features that it has which CodeBlocks doesn't, or poor design, or whatever. - Jonathan M Davis
The thing about Java is that, even if equivalently written Java code is as fast as C++ code, if you really care about performance you're gonna use dirty tricks in a few key hotspots. Java makes dirty tricks hard or impossible to use. For example, Java's stock GC is very good, but when you're trying to squeeze out that last drop of performance you might want to use some kind of region- or stack-based memory management. This is unimplementable in Java. You also can't use tricks like recycling the same buffer for multiple types. Another area where dirty tricks are sometimes useful is type punning to "cheat" and speed up algorithms. I actually sped my sorting functions in dstats up by ~20% on floating point numbers by performing a single pass over the array to pun the numbers to integers and bit twiddle them such that their ordering is unaffected by this punning, sorting them as integers, and then untwiddling and casting them back. Floating point comparisons have to watch out for things like NaNs and are thus slower than integer comparisons. These sorting routines, using DMD's crappy optimizer, are now faster than STL's sorting routines, using GCC's optimizer, for sorting floating point numbers. As another example, I'm pretty sure the fast inverse square root algorithm (http://en.wikipedia.org/wiki/Fast_inverse_square_root) is unimplementable in Java.
Aug 20 2010
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Ok, maybe I misunderstood the OP's post; I saw the word "desktop" and
thought he was talking about full-blown desktop apps. :)

On Fri, Aug 20, 2010 at 7:30 PM, Jonathan M Davis <jmdavisprog gmail.com> wrote:
 On Friday, August 20, 2010 10:04:41 Andrej Mitrovic wrote:
 What are these Java programs for the desktop that run fast? I haven't
 encountered any, but maybe that's just because I didn't try them all
 out. Eclipse takes at least 20 seconds to load on startup on my quad
 core, that's not very fast. On the other hand, CodeBlocks which is
 coded in C++ and has a few dozen plugins installed runs in an
 instant.
 There's plenty of Java code which runs just as fast or faster than comparable C++ code. I've seen it. However, you're generally talking about small, computation-intensive programs. When you're talking about full-blown desktop applications, there's so much more going on than math operations that the game is entirely different. I'm not sure that there are any full-blown desktop Java applications which are all that efficient in comparison to similar C++ applications. They're definitely usable, but not necessarily as efficient. Of course, the difference between Eclipse and CodeBlocks could be entirely a design issue rather than a language one. Once you're dealing with anything that complex, accurately comparing apps can be difficult. It could just as easily be that Eclipse's design isn't as efficient - be it due to features that it has which CodeBlocks doesn't, or poor design, or whatever.

 - Jonathan M Davis
Aug 20 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Three good blog posts about undefined behaviour in C and C++: 
 http://blog.regehr.org/archives/213 http://blog.regehr.org/archives/226 
 http://blog.regehr.org/archives/232
 
 In those posts (and elsewhere) the expert author gives several good bites to
 the ass of most compiler writers.
 
 Among other things in those three posts he talks about two programs as:
 
 import std.c.stdio: printf; void main() { printf("%d\n", -int.min); }
 
 import std.stdio: writeln; void main() { enum int N = (1L).sizeof * 8; auto
 max = (1L << (N - 1)) - 1; writeln(max); }
 
 I believe that D can't be considered a step forward in system language
 programming until it gives a much more serious consideration for
 integer-related overflows (and integer-related undefined behaviour).
 
 The good thing is that Java is a living example that even if you remove most
 integer-related undefined behaviours your Java code is still able to run as
 fast as C and sometimes faster (on normal desktops).
You're conflating two different things here - undefined behavior and behavior on overflow. The Java spec says that integer overflow is ignored, for example. In C++, overflow behavior is undefined because C++ still supports ones-complement arithmetic. Java and D specify integer arithmetic to be 2's complement. Java defined left shift to, not surprisingly, match what the x86 CPU does. This, of course, will conveniently not result in any penalty on the x86 for conforming to the spec.
Aug 20 2010
parent KennyTM~ <kennytm gmail.com> writes:
On Aug 21, 10 02:12, Walter Bright wrote:
[snip]
 You're conflating two different things here - undefined behavior and
 behavior on overflow. The Java spec says that integer overflow is
 ignored, for example.

 In C++, overflow behavior is undefined because C++ still supports
 ones-complement arithmetic. Java and D specify integer arithmetic to be
 2's complement. Java defined left shift to, not surprisingly, match what
 the x86 CPU does. This, of course, will conveniently not result in any
 penalty on the x86 for conforming to the spec.
I believe D requires signed integers to be represented in 2's complement, but the specification doesn't document it (grepping for 'complement' in phobos/docsrc turns up nothing related).
Aug 20 2010
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:

if you really care about performance you're gonna use dirty tricks in a few key
hotspots.<
I agree. The BIG problem is that currently you can't do that in D2. In D all code is unsafe (I am talking about numeric safety, like integer overflows, etc.), not just a few hotspots where you have asked the compiler for more freedom to use unsafe tricks and where you are taking extra care to avoid bugs (or you are using unsafe but long-tested library code). Bye, bearophile
Aug 20 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 20/08/2010 17:38, bearophile wrote:
 Three good blog posts about undefined behaviour in C and C++:
 http://blog.regehr.org/archives/213
 http://blog.regehr.org/archives/226
 http://blog.regehr.org/archives/232

 In those posts (and elsewhere) the expert author gives several good bites to
the ass of most compiler writers.

 Among other things in those three posts he talks about two programs as:

 import std.c.stdio: printf;
 void main() {
      printf("%d\n", -int.min);
 }

 import std.stdio: writeln;
 void main() {
      enum int N = (1L).sizeof * 8;
      auto max = (1L<<  (N - 1)) - 1;
      writeln(max);
 }

 I believe that D can't be considered a step forward in system language
programming until it gives a much more serious consideration for
integer-related overflows (and integer-related undefined behaviour).

 The good thing is that Java is a living example that even if you remove most
integer-related undefined behaviours your Java code is still able to run as
fast as C and sometimes faster (on normal desktops).

 Bye,
 bearophile
Interesting post. There is an important related issue here. It should be noted that, even though the article and the C FAQ say: " The C FAQ defines “undefined behavior” like this: Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended. " this definition of "undefined behavior" is not used consistently by C programmers, or even by more official sources such as books, or even the C standards. A trivial example: foo(printf("Hello"), printf("World")); Since the evaluation order of arguments is not defined in C, these two printfs can be executed in either of the two possible orders. The behavior is not specified; it is up to the implementation, to the compiler switches, etc. Many C programmers would say that such code has/is/produces undefined behavior; however, that is clearly not “undefined behavior” as per the definition above. A correct compiler cannot cause the code above to execute incorrectly, crash, calculate PI, format your hard disk, or whatever, as in the other cases. It has to do everything it is supposed to do, and the only "undefined" thing is the order of evaluation, but the code is not "invalid". I don't like this term "undefined behavior". It is an unfortunate C legacy that leads to unnecessary confusion and misunderstanding, not just in conversation, but often in coding as well. It would not be so bad if programmers had the distinction clear at least in their minds, or in the context of their discussion. But that is often not the case. I've called before for this term to be avoided in D vocabulary, mainly because Walter often (ab)used the term as per the usual C legacy. The “undefined behavior” as per the C FAQ should be called something else, like "invalid behavior". Code that, when given valid inputs, causes invalid behavior should be called invalid code.
(BTW, this maps directly to the concept of contract violations.) -- Bruno Medeiros - Software Engineer
Oct 06 2010
parent reply Stanislav Blinov <blinov loniir.ru> writes:
  06.10.2010 19:34, Bruno Medeiros wrote:
 On 20/08/2010 17:38, bearophile wrote:
 Three good blog posts about undefined behaviour in C and C++:
 http://blog.regehr.org/archives/213
 http://blog.regehr.org/archives/226
 http://blog.regehr.org/archives/232

 In those posts (and elsewhere) the expert author gives several good 
 bites to the ass of most compiler writers.

 Among other things in those three posts he talks about two programs as:

 import std.c.stdio: printf;
 void main() {
 printf("%d\n", -int.min);
 }

 import std.stdio: writeln;
 void main() {
 enum int N = (1L).sizeof * 8;
 auto max = (1L<< (N - 1)) - 1;
 writeln(max);
 }

 I believe that D can't be considered a step forward in system 
 language programming until it gives a much more serious consideration 
 for integer-related overflows (and integer-related undefined behaviour).

 The good thing is that Java is a living example that even if you 
 remove most integer-related undefined behaviours your Java code is 
 still able to run as fast as C and sometimes faster (on normal 
 desktops).

 Bye,
 bearophile
Interesting post. There is an important related issue here. It should be noted that, even though the article and the C FAQ say: " The C FAQ defines “undefined behavior” like this: Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended. " this definition of "undefined behavior" is not used consistently by C programmers, or even by more official sources such as books, or even the C standards. A trivial example: foo(printf("Hello"), printf("World")); Since the evaluation order of arguments is not defined in C, these two printfs can be executed in either of the two possible orders. The behavior is not specified; it is up to the implementation, to the compiler switches, etc. Many C programmers would say that such code has/is/produces undefined behavior; however, that is clearly not “undefined behavior” as per the definition above. A correct compiler cannot cause the code above to execute incorrectly, crash, calculate PI, format your hard disk, or whatever, as in the other cases. It has to do everything it is supposed to do, and the only "undefined" thing is the order of evaluation, but the code is not "invalid". I don't like this term "undefined behavior". It is an unfortunate C legacy that leads to unnecessary confusion and misunderstanding, not just in conversation, but often in coding as well. It would not be so bad if programmers had the distinction clear at least in their minds, or in the context of their discussion. But that is often not the case. I've called before for this term to be avoided in D vocabulary, mainly because Walter often (ab)used the term as per the usual C legacy. The “undefined behavior” as per the C FAQ should be called something else, like "invalid behavior". Code that, when given valid inputs, causes invalid behavior should be called invalid code. (BTW, this maps directly to the concept of contract violations.)
I always thought that the term itself came from language specification, i.e. the paper that *defines* behavior of the language and states that there are cases when behavior is not defined (i.e. in terms of the specification). From this point of view the term is understandable and, uh, valid. It's just that it got abused with time, especially this abuse is notable in discussions (e.g. "Don't do that, undefined behavior will result": one can sit and guess how exactly she will get something that is not defined). I don't think that "invalid behavior" covers that sense: it means that implementation should actually do something to make code perform 'invalid' things (what should be considered invalid, by the way?), rather than have the possibility to best adapt the behavior to system (e.g. segfault) or some error handling mechanism (e.g. throw an exception).
Oct 06 2010
next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 06/10/2010 16:59, Stanislav Blinov wrote:
 06.10.2010 19:34, Bruno Medeiros wrote:
 On 20/08/2010 17:38, bearophile wrote:
 Three good blog posts about undefined behaviour in C and C++:
 http://blog.regehr.org/archives/213
 http://blog.regehr.org/archives/226
 http://blog.regehr.org/archives/232

 In those posts (and elsewhere) the expert author gives several good
 bites to the ass of most compiler writers.

 Among other things in those three posts he talks about two programs as:

 import std.c.stdio: printf;
 void main() {
 printf("%d\n", -int.min);
 }

 import std.stdio: writeln;
 void main() {
 enum int N = (1L).sizeof * 8;
 auto max = (1L<< (N - 1)) - 1;
 writeln(max);
 }

 I believe that D can't be considered a step forward in system
 language programming until it gives a much more serious consideration
 for integer-related overflows (and integer-related undefined behaviour).

 The good thing is that Java is a living example that even if you
 remove most integer-related undefined behaviours your Java code is
 still able to run as fast as C and sometimes faster (on normal
 desktops).

 Bye,
 bearophile
Interesting post. There is an important related issue here. It should be noted that, even though the article and the C FAQ say: " The C FAQ defines “undefined behavior” like this: Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended. " this definition of "undefined behavior" is not used consistently by C programmers, or even by more official sources such as books, or even the C standards. A trivial example: foo(printf("Hello"), printf("World")); Since the evaluation order of arguments is not defined in C, these two printfs can be executed in either of the two possible orders. The behavior is not specified; it is up to the implementation, to the compiler switches, etc. Many C programmers would say that such code has/is/produces undefined behavior; however, that is clearly not “undefined behavior” as per the definition above. A correct compiler cannot cause the code above to execute incorrectly, crash, calculate PI, format your hard disk, or whatever, as in the other cases. It has to do everything it is supposed to do, and the only "undefined" thing is the order of evaluation, but the code is not "invalid". I don't like this term "undefined behavior". It is an unfortunate C legacy that leads to unnecessary confusion and misunderstanding, not just in conversation, but often in coding as well. It would not be so bad if programmers had the distinction clear at least in their minds, or in the context of their discussion. But that is often not the case. I've called before for this term to be avoided in D vocabulary, mainly because Walter often (ab)used the term as per the usual C legacy. The “undefined behavior” as per the C FAQ should be called something else, like "invalid behavior". Code that, when given valid inputs, causes invalid behavior should be called invalid code. (BTW, this maps directly to the concept of contract violations.)
I always thought that the term itself came from language specification, i.e. the paper that *defines* behavior of the language and states that there are cases when behavior is not defined (i.e. in terms of the specification). From this point of view the term is understandable and, uh, valid. It's just that it got abused with time, especially this abuse is notable in discussions (e.g. "Don't do that, undefined behavior will result": one can sit and guess how exactly she will get something that is not defined).
"the term itself came from language specification" -> yes that is correct. I read K&R's "The C Programming Language", second edition, and the term comes from there, at least as applied to C. But they don't define or use the term as the C FAQ above, or at least not as explicitly, if I recall correctly (im 98% sure I am). They just describe each particular language rule individually and tell you what to expect if you break the rule. Often they will say something like "this will cause undefined behavior" and it is clear that is is illegal. But other times they would say something like "X is undefined", where X could be "the order of execution", "the results of Y", "the contents of variable Z", and it is not clear whether that meant the program could exhibit undefined behavior or not. (or in other words if that was illegal or not) I don't know if this concept or related ones have actually been better formalized in newer revisions of the C standard.
 I don't think that "invalid behavior" covers that sense: it means that
 implementation should actually do something to make code perform
 'invalid' things (what should be considered invalid, by the way?),
 rather than have the possibility to best adapt the behavior to system
 (e.g. segfault) or some error handling mechanism (e.g. throw an exception).
In this case I don't know for sure what the best alternative term is, I just want to avoid confusion with "invalid behavior". I want to know when a program execution may actually be invalidated (crash, memory corruption, etc.), versus when it is just some particular and *isolated* aspect of behavior that is simply "undefined", but program execution is not invalidated. In other words, if it is illegal or not. -- Bruno Medeiros - Software Engineer
Oct 07 2010
parent reply Stanislav Blinov <blinov loniir.ru> writes:
  07.10.2010 14:38, Bruno Medeiros wrote:
 On 06/10/2010 16:59, Stanislav Blinov wrote:
 I always thought that the term itself came from language specification,
 i.e. the paper that *defines* behavior of the language and states that
 there are cases when behavior is not defined (i.e. in terms of the
 specification). From this point of view the term is understandable and,
 uh, valid. It's just that it got abused with time, especially this abuse
 is notable in discussions (e.g. "Don't do that, undefined behavior will
 result": one can sit and guess how exactly she will get something that
 is not defined).
"the term itself came from language specification" -> yes that is correct. I read K&R's "The C Programming Language", second edition, and the term comes from there, at least as applied to C. But they don't define or use the term as the C FAQ above, or at least not as explicitly, if I recall correctly (im 98% sure I am).
Now, I'm not on solid ground to argue (not being a native English speaker), but the need to actually define this term seems dubious to me, kind of another abuse if you please. The term means just exactly what it says: the behavior [of the code/program/compiler/system/you_name_it] is not defined, i.e. no restrictions or rules are attached to it. To me, attempts to define undefined seem like attempts at 'adding one briefcase into another' (from Win98 days).
 They just describe each particular language rule individually and tell 
 you what to expect if you break the rule. Often they will say 
 something like "this will cause undefined behavior" and it is clear 
 that it is illegal. 
Strictly speaking, it's not 'illegal'. If we talk about C (and we do ;) ), being a systems language, it cannot demand strict ways of handling every situation from an implementation. Some systems may choke on integer overflow, some may not. Some builds made with a single compiler may get you useful data when addressing seemingly out-of-bounds data, yet other builds (made with the same compiler) would give you an access violation (e.g. debug/release builds of MSVC). By stating that behavior in some case is not defined, the spec leaves the implementation with a possibility to best handle that case (or, of course, not handle it at all), or at least adapt it to some use (e.g. for debugging), which is great for a systems language, because many such 'undefined' cases can be handled very differently (in terms of results/efficiency/whatnot) on different systems. Of course, that does not imply that writing code that behaves in an 'undefined' manner is a good thing to do, but neither is being strict about what a compiler should do in all cases. Of course, replacing 'undefined behavior' with 'implementation-specific behavior' would look better, but that again puts forward a demand (restriction) rather than a possibility.
 But other times they would say something like "X is undefined", where 
 X could be "the order of execution", "the results of Y", "the contents 
 of variable Z", and it is not clear whether that meant the program 
 could exhibit undefined behavior or not. (or in other words if that 
 was illegal or not)
Well, this ambiguity hardly relates to the programming language, but rather to the natural one.
 I don't know if this concept or related ones have actually been better 
 formalized in newer revisions of the C standard.

 I don't think that "invalid behavior" covers that sense: it means that
 implementation should actually do something to make code perform
 'invalid' things (what should be considered invalid, by the way?),
 rather than have the possibility to best adapt the behavior to system
 (e.g. segfault) or some error handling mechanism (e.g. throw an 
 exception).
In this case I don't know for sure what the best alternative term is, I just want to avoid confusion with "invalid behavior". I want to know when a program execution may actually be invalidated (crash, memory corruption, etc.), versus when it is just some particular and *isolated* aspect of behavior that is simply "undefined", but program execution is not invalidated. In other words, if it is illegal or not.
Oct 07 2010
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 07/10/2010 12:47, Stanislav Blinov wrote:
 07.10.2010 14:38, Bruno Medeiros wrote:
 On 06/10/2010 16:59, Stanislav Blinov wrote:
 I always thought that the term itself came from language specification,
 i.e. the paper that *defines* behavior of the language and states that
 there are cases when behavior is not defined (i.e. in terms of the
 specification). From this point of view the term is understandable and,
 uh, valid. It's just that it got abused with time, especially this abuse
 is notable in discussions (e.g. "Don't do that, undefined behavior will
 result": one can sit and guess how exactly she will get something that
 is not defined).
"the term itself came from language specification" -> yes that is correct. I read K&R's "The C Programming Language", second edition, and the term comes from there, at least as applied to C. But they don't define or use the term as the C FAQ above, or at least not as explicitly, if I recall correctly (im 98% sure I am).
Now, I'm not on solid ground to argue (not being a native English speaker), but the need to actually define this term seems dubious to me, kind of another abuse if you please. The term means just exactly what it says: the behavior [of the code/program/compiler/system/you_name_it] is not defined, i.e. no restrictions or rules are attached to it. To me, attempts to define undefined seem like attempts at 'adding one briefcase into another' (from Win98 days).
In the context of natural language, the term means exactly what it means, yes. But in the context of a particular programming language, we often give more precise meanings to certain natural language terms, because it is very useful and important. For example "object", "contract", "exception", "thread", "valid/invalid code", and many others. Without these terms it would be very complicated to communicate. "undefined behavior" was not given a precise meaning in K&R TCPL, but it is given a much more precise and stricter meaning in the C FAQ and the original article. These meanings look similar but they are slightly and critically different. And this ambiguity percolates through the C community.
 They just describe each particular language rule individually and tell
 you what to expect if you break the rule. Often they will say
 something like "this will cause undefined behavior" and it is clear
 that it is illegal.
Strictly speaking, it's not 'illegal'. If we talk about C (and we do ;) ), being a systems language, it cannot demand strict ways of handling every situation from an implementation. Some systems may choke on integer overflow, some may not. Some builds made with a single compiler may get you useful data when addressing seemingly out-of-bounds data, yet other builds (made with the same compiler) would give you an access violation (e.g. debug/release builds of MSVC). By stating that behavior in some case is not defined, the spec leaves the implementation with a possibility to best handle that case (or, of course, not handle it at all), or at least adapt it to some use (e.g. for debugging), which is great for a systems language, because many such 'undefined' cases can be handled very differently (in terms of results/efficiency/whatnot) on different systems. Of course, that does not imply that writing code that behaves in an 'undefined' manner is a good thing to do, but neither is being strict about what a compiler should do in all cases. Of course, replacing 'undefined behavior' with 'implementation-specific behavior' would look better, but that again puts forward a demand (restriction) rather than a possibility.
No, it is illegal behavior according to what I meant by "illegal behavior". :) You thought otherwise because you likely had a slightly different idea of what I meant by "illegal". And this is precisely my point, if you don't precisely define the terms, communication is harder and misunderstandings happen.
 But other times they would say something like "X is undefined", where
 X could be "the order of execution", "the results of Y", "the contents
 of variable Z", and it is not clear whether that meant the program
 could exhibit undefined behavior or not. (or in other words if that
 was illegal or not)
Well, this ambiguity hardly relates to the programming language, but rather to the natural one.
Yes, there is sometimes ambiguity in natural language. That's why we have to be thoughtful in how we define our terms in the context of programming languages. We want to reduce natural language ambiguities to a minimum. -- Bruno Medeiros - Software Engineer
Oct 08 2010
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
I'm gonna sum this up in two points:

A) We should define certain terms within the D language in a precise 
way, to help us communicate with each other more clearly, both in 
discussion as well as in documents and specifications.

B) We should strive to make the way we define such terms the best we 
can, in order to avoid ambiguity and confusion with other terms, and 
with natural language itself. 
(just as there is good code and bad code, there could also be good and 
bad term definitions)


I googled a bit more, and actually found that D (even as far as D1) does 
indeed formally define "Undefined behavior", in an equivalent way as the 
C FAQ:

http://www.digitalmars.com/d/1.0/glossary.html
""
UB (Undefined Behavior)
     Undefined behavior happens when an illegal code construct is 
executed. Undefined behavior can include random, erratic results, 
crashes, faulting, etc.
""

And from TDPL, section 12.2.1:
""
Undefined behavior: The effect of executing a program fragment in a 
given state is not defined. This means that anything within the realm of 
physical possibility could happen.
""

This is good, it means we get (A). I don't think we have (B) though, I 
think the name of the term could be better, as I discussed in the 
previous posts.
Now that I know we have (A), I actually don't want to spend more time 
arguing (B) either, but I am gonna "police" the forums because I'm sure 
this term has been used, and will be used again, not according to the 
definition.

-- 
Bruno Medeiros - Software Engineer
Oct 08 2010
prev sibling parent BCS <none anon.com> writes:
Hello Stanislav,

 I don't think that "invalid behavior" covers that sense: it means that
 implementation should actually do something to make code perform
 'invalid' things (what should be considered invalid, by the way?),
 rather than have the possibility to best adapt the behavior to system
 (e.g. segfault) or some error handling mechanism (e.g. throw an
 exception).
 
Somewhere in the spec, D defines depending on order of evaluation to be invalid (placing any related bugs in your code, not the compiler, by fiat) but declines to require the compiler to enforce it (because it can't in many cases). Maybe some term for "invalid but unchecked" should be used. -- ... <IXOYE><
Oct 07 2010