
digitalmars.D - D vs VM-based platforms

reply lubosh <lubosha gmail.com> writes:
Hi all,

I wonder what you all think about the future of programming platforms. I've got


Honestly, I feel quite refreshed to re-discover native compilation in D again.
It seems so much more lightweight than the .NET framework or Java. Why is
there so much push on the market (Microsoft, Sun) for executing source code
within virtual machines? Do we really need yet another layer between hardware
and our code? What's your opinion? I wonder how much of a stir D would cause
if it had a nice and powerful standardized library and a really good IDE
(like VS.NET).
Apr 30 2007
next sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
lubosh escribió:
 Hi all,
 
 I wonder what you all think about the future of programming platforms. I've

 
 Honestly, I feel quite refreshed to re-discover native compilation in D again.
Me too. :-)

 It seems so much more lightweight than the .NET framework or Java. Why is
 there so much push on the market (Microsoft, Sun) for executing source code
 within virtual machines? Do we really need yet another layer between
 hardware and our code? What's your opinion? I wonder how much of a stir D
 would cause if it had a nice and powerful standardized library and a really
 good IDE (like VS.NET).

I think the thought is: if on every machine there is a virtual machine with a
10Gb standard library, then, although it's slower than native code (but we're
improving it each day!), writing software is much easier and faster. Why?
Because you already have most of the common functions and classes written for
you: xml, streams, collections, network, etc. This also means that if your
public method receives a "List", because it's standard, everyone understands
it quickly. Also the standard library can be improved, so each program
improves as well. Further, you have reflection, which gives you tremendous
power to extend your code with plugins (like in the Eclipse framework).

But... every time I open an app and it takes one to two minutes to start, I
remember the good old native code, and that's why I'd like D to become more
popular. And I know the language itself isn't enough these days: a good
standard library is a must (Phobos and Tango), as well as a really good IDE.
There isn't a "really good" IDE yet, but it's only a matter of time. Take a
look at what the next release of Descent will have:
http://www.dsource.org/projects/descent/browser/trunk/descent.ui/screenshots/descent_ddbg.jpg?format=raw
Apr 30 2007
next sibling parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
Ary Manzana wrote:
 Why? Because you already have most of the common
 functions and classes written for you: xml, streams, collections,
 network, etc. This also means that if your public method recieves a
 "List", because it's standard, everyone understands it quickly. Also the
 standard library can be improved, so each program improves as well.
This point is actually only about standard libraries, not VMs.

As I see it, VMs actually are only about portability. Portability in theory
also means better (more individual) code optimization. VMs also make
compilers a lot simpler: the difficult, platform dependent part of code
optimization lies in the VM.

versatility and eat a lot of resources. But little of that is actually
dependent on the VM concept. Reflection can be done natively, as well (also
see FlectioneD).
Apr 30 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jascha Wetzel wrote:
 This point is actually only about standard libraries, not VMs.
 As i see it, VMs actually are only about portability. Portability in
 theory also means better (more individual) code optimzation.
 VMs also make compilers a lot simpler. the difficult, platform dependent
 part of code optimzation lies in the VM.
The thing is, you don't need a VM to get such portability. You need a
language that doesn't have implementation defined or undefined behavior.
It's *already* abstracted away from the target machine, why add another
layer of abstraction? I just don't get the reason for a VM. It seems like a
solution looking for a problem.

As for the "makes building compilers easier", that is solved by defining an
intermediate representation (no VM needed), building front ends that write
to that intermediate representation, and building separate optimizers and
back ends to turn the intermediate representation into machine code. This is
an old idea, and works fine (see gcc!).
Apr 30 2007
next sibling parent reply Jascha Wetzel <"[firstname]" mainia.de> writes:
 It's *already* abstracted away from the target machine, why add another
 layer of abstraction?
To have a format for distribution that's still abstract but not human
readable. But I agree that VMs are rather obsolete; one could as well ship
intermediate code and finish compilation at first start or installation.
Ideally one could consider optional processor units like SSE in that last
phase of compilation.

Actually I think this, or a multi-target binary format that allows for
alternate code units on function level, would be a very effective approach
to these issues. The latter could be implemented by having the compiler
generate several versions of a function and let a detection unit decide at
startup which version to link.

Walter Bright wrote:
 Jascha Wetzel wrote:
 This point is actually only about standard libraries, not VMs.
 As i see it, VMs actually are only about portability. Portability in
 theory also means better (more individual) code optimzation.
 VMs also make compilers a lot simpler. the difficult, platform dependent
 part of code optimzation lies in the VM.
The thing is, you don't need a VM to get such portability. You need a language that doesn't have implementation defined or undefined behavior. It's *already* abstracted away from the target machine, why add another layer of abstraction? I just don't get the reason for a VM. It seems like a solution looking for a problem. As for the "makes building compilers easier", that is solved by defining an intermediate representation (don't need a VM), and building front ends to write to that intermediate representation, building separate optimizers and back ends to turn the intermediate representation into machine code. This is an old idea, and works fine (see gcc!).
Apr 30 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jascha Wetzel wrote:
 It's *already* abstracted away from the target machine, why add another
 layer of abstraction?
to have a format for distribution that's still abstract but not human readable.
There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.
Apr 30 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Jascha Wetzel wrote:
 It's *already* abstracted away from the target machine, why add another
 layer of abstraction?
to have a format for distribution that's still abstract but not human readable.
There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.
What I find amazing is that a good bytecode => source translator for .NET will often produce the original source code... exactly. I assume the information is all present for reflection purposes, but I've never been able to get over it. The first time it was shown to me I expected to see some half readable mess, not a photographic duplicate of the original source code. Sean
Apr 30 2007
prev sibling parent Jascha Wetzel <"[firstname]" mainia.de> writes:
isn't that mainly because Java's .class files also contain declarations?
in general it shouldn't be so easy to translate intermediate code back
to source code, especially if general optimizations have already been
applied.

Walter Bright wrote:
 Jascha Wetzel wrote:
 It's *already* abstracted away from the target machine, why add another
 layer of abstraction?
to have a format for distribution that's still abstract but not human readable.
There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.
Apr 30 2007
prev sibling next sibling parent reply Jan Claeys <usenet janc.be> writes:
Op Mon, 30 Apr 2007 10:06:47 -0700
schreef Walter Bright <newshound1 digitalmars.com>:

 I just don't get the reason for a VM. It seems like a solution
 looking for a problem.
 
 As for the "makes building compilers easier", that is solved by
 defining an intermediate representation (don't need a VM), and
 building front ends to write to that intermediate representation,
 building separate optimizers and back ends to turn the intermediate
 representation into machine code. This is an old idea, and works fine
 (see gcc!).
But what people call a "VM" is in fact an interpreter or a (JIT) compiler for
such an "intermediate representation"... ;-)

And I think in the case of dynamic languages like Python, a JIT-compiler
often can create much better code at run-time than a compiler could do when
compiling it before run-time.

--
JanC
Apr 30 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Jan Claeys wrote:
 
 And I think in the case of dynamic languages like Python, a JIT-compiler
 often can create much better code at run-time than a compiler could do
 when compiling it before run-time.
One issue with run-time optimization is its impact on performance. A
traditional compiler can take as long as it wants to exhaustively optimize an
application, while a JIT-compiler may only optimize in a way that does not
hurt application responsiveness or performance. At SDWest last year, there
was a presentation on C++ vs. Java performance, and one of the most
significant factors was that most Java JIT-compilers perform little if any
optimization, while C++ compilers optimize exhaustively.

That said, JIT optimization is still a relatively new practice, and with more
cores being added to computers these days it's entirely possible that a JIT
optimizer could run on one or more background CPUs and do much better than
today.

Sean
Apr 30 2007
next sibling parent Jascha Wetzel <"[firstname]" mainia.de> writes:
even if JIT does equally well, it's basically O(n) vs. O(1), n being the
number of runs of the program. unless the advantage of dynamic
optimization outweighs the cost of runtime compilation it's unlikely to
be more efficient than pre-runtime compilation.

Sean Kelly wrote:
 Jan Claeys wrote:
 And I think in the case of dynamic languages like Python, a JIT-compiler
 often can create much better code at run-time than a compiler could do
 when compiling it before run-time.
One issue with run-time optimization is its impact on performance. A traditional compiler can take as long as it wants to exhaustively optimize an application, while a JIT-compiler may only optimize in a way that does not hurt application responsiveness or performance. At SDWest last year, there was a presentation on C++ vs. Java performance, and one of the most significant factors was that most Java JIT-compilers perform little if any optimization, while C++ compilers optimize exhaustively. That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today. Sean
Apr 30 2007
prev sibling next sibling parent reply Jan Claeys <usenet janc.be> writes:
Op Mon, 30 Apr 2007 11:51:07 -0700
schreef Sean Kelly <sean f4.ca>:

 Jan Claeys wrote:
 
 And I think in the case of dynamic languages like Python, a
 JIT-compiler often can create much better code at run-time than a
 compiler could do when compiling it before run-time.
One issue with run-time optimization is its impact on performance. A traditional compiler can take as long as it wants to exhaustively optimize an application, while a JIT-compiler may only optimize in a way that does not hurt application responsiveness or performance. At SDWest last year, there was a presentation on C++ vs. Java performance, and one of the most significant factors was that most Java JIT-compilers perform little if any optimization, while C++ compilers optimize exhaustively. That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.
Well, in practice most Python code just runs on the Python bytecode
interpreter (and in most other cases on the Java or .NET VMs), and with good
reason. Some code runs faster when using the third party 'psyco' JIT compiler
(which only exists for x86 anyway), while other code gains nothing from it
(and thus gets slower due to the additional compilation step). Fortunately
you can also tell this JIT at runtime what you want to compile to native code
and what not.

OTOH I think every attempt to compile Python code into native machine code
beforehand has until now resulted in code that runs up to 100x _slower_ than
the interpreter(!). ;-)

The "problem" with Python is that it's dynamic, and so there is *nothing*
known about anything that touches something outside the current module...

--
JanC
Apr 30 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Jan Claeys wrote:
 Well, in practice most Python code just runs on the Python bytecode
 interpreter (and in most other cases on the Java or .NET VMs), and with
 a good reason.
 ...
The really interesting stuff on Python is happening over at the PyPy[1]
project. They're basically trying to write a Python interpreter in a
restricted subset of Python called RPython, which can then be translated into
other formats like C or LLVM.

One of the really weird things is that you can run various transformations
over the RPython code to change how it works without ever having to rewrite
any of the actual code. The classic example of this is integrating Stackless
Python into the interpreter by basically throwing a switch.

It's all very cool, and really hard to understand. :P

	-- Daniel

[1] http://codespeak.net/pypy/

--
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
Apr 30 2007
parent reply Jan Claeys <usenet janc.be> writes:
Op Tue, 01 May 2007 11:25:06 +1000
schreef Daniel Keep <daniel.keep.lists gmail.com>:

 Jan Claeys wrote:
 Well, in practice most Python code just runs on the Python bytecode
 interpreter (and in most other cases on the Java or .NET VMs), and
 with a good reason.
 ...  
The really interesting stuff on Python is happening over at the PyPy[1] project.
It's a very interesting project, but RPython is not the same language as Python, and they left out some of the things that make compiling Python to native code so difficult...
 It's all very cool, and really hard to understand. :P
Right, I didn't try to understand the details yet. ;) -- JanC
May 01 2007
parent Paul Findlay <r.lph50+d gmail.com> writes:
Jan Claeys wrote:
 It's a very interesting project, but RPython is not the same language as
 Python, and they left out some of the things that make compiling Python
 to native code so difficult...
AFAIK, just the PyPy compiler for Python (amongst other tools) is written in
RPython. It can compile all normal (well, all of it that is currently
supported) Python code. This is the same approach as Squeak takes for
Smalltalk [1] and Rubinius [2] for Ruby. What you described more accurately
reflects Shedskin [3], a Python-to-C++ compiler that only supports a subset
of Python.

 - Paul

1: http://www.squeak.org/Features/TheSqueakVM/
2: http://en.wikipedia.org/wiki/Rubinius
3: http://mark.dufour.googlepages.com/home
May 02 2007
prev sibling parent reply gareis <dhasenan gmail.com> writes:
Sean Kelly wrote:
...
 That said, JIT optimization is still a relatively new practice, and with 
 more cores being added to computers these days it's entirely possible 
 that a JIT optimizer could run on one or more background CPUs and do 
 much better than today.
And then you're sacrificing two or three cores to run one thread, rather than
sacrificing most of the developer's computational power at compile time and
running as efficiently (or more so) with at most one core per thread.

Now, if the compiler cached its optimizations on disk, you could potentially
get similar optimizations after some large number of runs. However, in the
interim the program would run slower than the optimized precompiled code, and
would start slower than the JIT code that didn't cache its optimizations
(because that caching takes disk time, and that's one of the most expensive
resources).

Of course, your runtime compiler can optimize for your user's current CPU,
even if that changes. I suppose you could create binaries optimized for each
CPU and have a script determine which is appropriate for the current CPU, but
that's spending disk space (also quite scarce) in exchange for CPU time
(relatively abundant).

I don't know how to solve this problem, but it's an interesting one.
Apr 30 2007
parent reply Dave <Dave_member pathlink.com> writes:
gareis wrote:
 Sean Kelly wrote:
 ...
 That said, JIT optimization is still a relatively new practice, and 
But the optimizations are the same (basically), and the best and brightest
have been at it for years. I'd venture a guess that more has been / still is
being spent on VM research than on static compiler research.

Over roughly the past 10 years, I've seen several articles promising Java
would exceed C and Fortran in 'a year or two'. A couple of years later I also
recall finding some pretty large performance regressions between major
releases of their Java VM. I still think 1.3 does some things better than
1.6, and it's been, what, 5 years? Interestingly, Sun is still improving
their static compiler tools though.
 with more cores being added to computers these days it's entirely 
 possible that a JIT optimizer could run on one or more background CPUs 
 and do much better than today.
And then you're sacrificing two or three cores to run one thread, rather than sacrificing most of the developer's computational power at compile time and running as efficiently (or more so) with at most one core per thread. Now, if the compiler cached its optimizations on disk, you could potentially get similar optimizations after some large number of runs.
Sun's Hotspot does this (but it's not explicitly cached on disk).
 However, in the interim the program would run slower than the optimized 
 precompiled code, and would start slower than the JIT code that didn't 
That's why Sun has both a 'client' and a 'server' VM.
 cache its optimizations (because that caching takes disk time, and 
 that's one of the most expensive resources).
 
 Of course, your runtime compiler can optimize for your user's current 
 CPU, even if that changes. I suppose you could create binaries optimized 
 for each CPU and have a script determine which is appropriate for the 
I know Intel and (IIRC) to a lesser extent MS VS2005 C/C++ as well as Sun and HP compilers will compile this right into the binary for you (and then the best code is picked at runtime). Seems to work pretty well from what I've seen.
 current CPU, but that's spending disk space (also quite scarce) in 
 exchange for CPU time (relatively abundant).
 
 I don't know how to solve this problem, but it's an interesting one.
Apr 30 2007
parent Sean Kelly <sean f4.ca> writes:
Dave wrote:
 gareis wrote:
 Sean Kelly wrote:
 ...
 That said, JIT optimization is still a relatively new practice, and 
But the optimizations are the same (basically), and the best and brightest have been at it for years. I'd venture a guess that more has been / still is being spent on VM research rather than static compiler research. Over roughly the past 10 years, I've seen several articles promising Java would exceed C and Fortran in 'a year or two'. A couple of years later I also recall finding some pretty large performance regressions between major releases of their Java VM. I still think 1.3 does some things better than 1.6 and it's been, what, 5 years? Interestingly, Sun is still improving their static compiler tools though.
That's because (I suspect) most of Sun's big customers use their static
compilers.

On Java speed... I still debug in emacs instead of using Sun Studio because
the latter is irritatingly slow. Java may have the potential to produce fast
code, but I wish that were more evident in the performance of the Java UI
apps I've used. This may be entirely a problem with Swing or whatever, but
appearances count.

Sean
May 01 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jan Claeys wrote:
 And I think in the case of dynamic languages like Python, a JIT-compiler
 often can create much better code at run-time than a compiler could do
 when compiling it before run-time.
That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.
Apr 30 2007
next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Jan Claeys wrote:
 And I think in the case of dynamic languages like Python, a JIT-compiler
 often can create much better code at run-time than a compiler could do
 when compiling it before run-time.
That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.
In practice with Python I think what happens is more like:

1) Make sure you're not doing something stupid. If not...
2) Try psyco (a kind of JIT) (http://psyco.sourceforge.net). If that doesn't
   help (it never has for me)...
3) Rewrite slow parts in Pyrex
   (http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/). If that's not
   feasible then...
4) Rewrite it as a native code module with Boost::Python or SWIG, or just
   using the raw C API. Or write a native shared library and use ctypes
   (http://python.net/crew/theller/ctypes/) to access it.

If you're doing numerical code then there are a couple of things you can try
before resorting to rewriting: numexpr
(http://www.scipy.org/SciPyPackages/NumExpr) and scipy.weave
(http://www.scipy.org/Weave).

And now of course you also have the option of rewriting the slow parts in D,
thanks to Kirk.

--bb
Apr 30 2007
prev sibling parent reply Jan Claeys <usenet janc.be> writes:
Op Mon, 30 Apr 2007 13:44:05 -0700
schreef Walter Bright <newshound1 digitalmars.com>:

 Jan Claeys wrote:
 And I think in the case of dynamic languages like Python, a
 JIT-compiler often can create much better code at run-time than a
 compiler could do when compiling it before run-time.  
That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.
Just like some people write libraries in Fortran or assembler or some vector
processor language because C and C++ and D are "too slow". ;-)

There is one commonly used JIT-compiler for Python ('psyco') and it is
actually useful in some cases, while I haven't seen one single
Python-to-native-code compiler that makes code that's actually faster than
the interpreter in most cases...

Python's strength is its "dynamism" and ability to adapt to "unexpected"
changes at run-time. And the fact that Python developers write extensions in
other languages if speed is really important and 'psyco' doesn't help proves
that compiling Python to native code before it's run is not really a useful
option.

--
JanC
Apr 30 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Jan Claeys wrote:
 Just like some people write libraries in Fortran or assembler or some
 vector processor language because C and C++ and D are "too slow".   ;-)
Or out of sheer bloody-mindedness. Funny thing, turns out SSE is actually *slower* for doing a dot product than regular old x87 code!
 There is one commonly used JIT-compiler for Python ('psyco') and it is
 actually useful in some cases, while I haven't seen one single
 Python-to-native-code compiler that makes code that's actually faster
 than the interpreter in most cases...
 
 Python's strength is its "dynamism" and ability to adapt to
 "unexpected" changes at run-time.  And the fact that Python developers
 write extensions in other languages if speed is really important and
 'psyco' doesn't help proves that compiling Python to native code before
 it's run is not really a useful option.
That's what I like about Python; it's a massively expressive language that
doesn't get in your way if you need the speed.

Incidentally, it's called "dynamicysm". *DRINK*

	-- Daniel
Apr 30 2007
parent reply Stephen Waits <steve waits.net> writes:
Daniel Keep wrote:
 
 Funny thing, turns out SSE is actually *slower* for doing a dot product
 than regular old x87 code!
You should qualify this - I'm guessing you mean for a single dot product? If so, this is the case in most vector coprocessors, as load/store overhead can easily outweigh the gains in vectorization. --Steve
May 01 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Stephen Waits wrote:
 Daniel Keep wrote:
 Funny thing, turns out SSE is actually *slower* for doing a dot product
 than regular old x87 code!
You should qualify this - I'm guessing you mean for a single dot product? If so, this is the case in most vector coprocessors, as load/store overhead can easily outweigh the gains in vectorization. --Steve
Sorry; yes, you're right: it's for a single dot product.

I'm surprised at this because of the sheer number of articles I ran across
touting "faster" dot product functions using SSE. I have a feeling these
people have never bothered to actually *benchmark* their "faster"
functions :P

	-- Daniel
May 01 2007
parent reply Benji Smith <dlanguage benjismith.net> writes:
Daniel Keep wrote:
 Sorry; yes, you're right: it's for a single dot product.
 
 I'm surprised at this because of the sheer number of articles I ran
 across touting "faster" dot product functions using SSE.  I have a
 feeling these people have never bothered to actually *benchmark* their
 "faster" functions :P
 
 	-- Daniel
I'm also assuming that's for some low-dimensionality vector? I'd likewise guess that there's some sweet spot where dot product calculation is faster with SSE, even for a single pair of vectors, if the vectors are of sufficient dimensionality. --benji
May 01 2007
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Benji Smith wrote:
 Daniel Keep wrote:
 Sorry; yes, you're right: it's for a single dot product.

 I'm surprised at this because of the sheer number of articles I ran
 across touting "faster" dot product functions using SSE.  I have a
 feeling these people have never bothered to actually *benchmark* their
 "faster" functions :P

     -- Daniel
I'm also assuming that's for some low-dimensionality vector? I'd likewise guess that there's some sweet spot where dot product calculation is faster with SSE, even for a single pair of vectors, if the vectors are of sufficient dimensionality. --benji
3D single-precision. The problem seems to be a combination of unaligned
loads, and the trickery you have to resort to in order to sum the XMM
register horizontally. There's a dot product instruction in SSE4, but I
don't have a CPU that supports it. :P

It also doesn't help that the compiler will inline the FPU functions, but
won't inline the SSE ones.

	-- Daniel
May 01 2007
parent 0ffh <spam frankhirsch.net> writes:
Daniel Keep wrote:
 It also doesn't help that the compiler will inline the FPU functions,
 but won't inline the SSE ones.
Anyways, I admit I miss the "inline" keyword - fortunately it can be roughly emulated with mixin templates. :) Regards, Frank
May 01 2007
prev sibling parent reply Benji Smith <dlanguage benjismith.net> writes:
Walter Bright wrote:
 I just don't get the reason for a VM. It seems like a solution looking 
 for a problem.
Some of the benefits of using a VM platform:

1) Dynamic classloading. Linking is greatly simplified, and my code doesn't
need to be written differently depending on whether I'm linking dynamically
or statically.

2) Better tools for profiling, debugging, reflection, and runtime
instrumentation than are typically available for natively-compiled languages.

3) Better memory management: with the memory manager located in the VM,
rather than in the application code, the collection of garbage is much more
well-defined. Since all classes are loaded into the same VM instance, there's
only a single heap. Consequently, there's never an issue of what happens when
an object passes from one module to another (as can be the case when a native
library passes an object into the main application, or vice versa).

4) Better security/sandboxing. If you write a pluggable application in C++,
how will you restrict plugin authors from monkeying with your application
data structures? In the JVM or the CLR, the VM provides security mechanisms
to restrict the functionality of sandboxed code. A particular CLR assembly
might, for example, be restricted from accessing the file system or the
network connection. You can't do that with native code.

Sure, it's possible for natively-compiled languages to offer most of the same
bells and whistles as dynamic languages or VM-based platforms. But, in the
real world, those abstractions are usually difficult to implement in native
code, so they become available much more readily in a virtual machine.

--benji
Apr 30 2007
next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
Benji Smith wrote:
 Walter Bright wrote:
 I just don't get the reason for a VM. It seems like a solution looking 
 for a problem.
Some of the benefits of using a VM platform: 1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically. 2) Better tools for profiling, debugging, reflection, and runtime instrumentation than are typically available for natively-compiled languages. 3) Better memory management: with the memory manager located in the VM, rather than in the application code, the collection of garbage is much more well-defined. Since all classes are loaded into the same VM instance, there's only a single heap. Consequently, there's never an issue of what happens when an object passes from one module to another (as can be the case when a native library passes an object into the main application, or vice versa). 4) Better security/sandboxing. If you write a pluggable application in C++, how will you restrict plugin authors from monkeying with your application data structures? In the JVM or the CLR, the VM provides security mechanisms to restrict the functionality of sandboxed code. A particular CLR assembly might, for example, be restricted from accessing the file system or the network connection. You can't do that with native code. Sure, it's possible for natively-compiled languages to offer most of the same bells and whistles as dynamic languages or VM-based platforms. But, in the real world, those abstractions are usually difficult to implement in native code, so they become available much more readily in virtual machine. --benji
Interesting. Paraphrasing your reply: "These are benefits of VMs." But no, they're not. The above list of 'benefits' is some things that current VM implementations, the languages that sit on top of them, and the provided libraries that sit on top of those all add up to provide. They're very much not attributes of the VM underneath, nor of VMs in general.

Please be careful when attributing causal effects. A favorite phrase: correlation is not causation.

Later,
Brad
Apr 30 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Brad Roberts wrote:
 Benji Smith wrote:
 Walter Bright wrote:
 I just don't get the reason for a VM. It seems like a solution 
 looking for a problem.
Some of the benefits of using a VM platform: 1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.
That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.
 2) Better tools for profiling, debugging, reflection, and runtime 
 instrumentation than are typically available for natively-compiled 
 languages.
I attribute this to two things, neither of which is a characteristic of a VM:

1) Java is a very easy language to parse, with well defined semantics. This makes it easy to develop such tools for it. C++, on the other hand, is disastrously difficult to parse.

2) The two VMs out there have billions and billions of dollars sunk into them to create tools, no matter how easy/hard that might be.
 3) Better memory management: with the memory manager located in the 
 VM, rather than in the application code, the collection of garbage is 
 much more well-defined. Since all classes are loaded into the same VM 
 instance, there's only a single heap. Consequently, there's never an 
 issue of what happens when an object passes from one module to another 
 (as can be the case when a native library passes an object into the 
 main application, or vice versa).
I wrote a GC for Java, back in the day. Doing a good GC is dependent on the right language semantics; having a VM has nothing to do with it. D works with add-on DLLs by sharing a single instance of the GC.
 4) Better security/sandboxing. If you write a pluggable application in 
 C++, how will you restrict plugin authors from monkeying with your 
 application data structures? In the JVM or the CLR, the VM provides 
 security mechanisms to restrict the functionality of sandboxed code. A 
 particular CLR assembly might, for example, be restricted from 
 accessing the file system or the network connection. You can't do that 
 with native code.
Every single VM based system, from javascript to Word macros, has turned into a vector for compromising a system. That's why I run email with javascript, etc., all turned off. It's why I don't use Word. I know about the promises of security, but I don't believe it holds up in practice.
 Sure, it's possible for natively-compiled languages to offer most of 
 the same bells and whistles as dynamic languages or VM-based 
 platforms. But, in the real world, those abstractions are usually 
 difficult to implement in native code, so they become available much 
 more readily in virtual machine.
I believe you are seeing the effects of billions of dollars being invested in those VMs, not any fundamental advantage.
Apr 30 2007
parent reply Benji Smith <dlanguage benjismith.net> writes:
Walter Bright wrote:
 Brad Roberts wrote:
 Benji Smith wrote:
 1) Dynamic classloading. Linking is greatly simplified, and my code 
 doesn't need to be written differently depending on whether I'm 
 linking dynamically or statically.
That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.
Actually, COM illustrates my point quite nicely. If you're going to write a COM-compatible library, you have to plan for it from the start: inheriting from IUnknown, creating GUIDs, and setting up reference-counting functionality.

Likewise, a consumer of a COM library has to know it uses COM semantics, since the application code will have to query the interface using COM-specific functions.

Even in D, if I want to write code in a DLL (or call code from a DLL), my ***CODE*** has to be aware of the existence of the DLL.

In Java, the code I write is identical whether I'm calling methods on my own classes, calling methods on classes packaged up in a 3rd-party library, or packaging up my own library for distribution to other API consumers. Since the VM provides all of the classloading functionality, the application code and the library code are completely agnostic of calling & linking conventions.

Of course, the disadvantage of this is that there's no such thing as static linking. *Everything* is linked dynamically. But at least I don't have to rewrite my code just to create (or consume) a library.
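To make the classloading point concrete, here is a minimal sketch (the class name `DynamicLoadDemo` and its helper are mine, not from the thread, though `Class.forName` is the real JVM mechanism). The calling code looks the same whether the named class lives in the application, the JDK, or a third-party jar on the classpath:

```java
import java.util.List;

/** Sketch: the call site never says where the class is linked from. */
public class DynamicLoadDemo {

    /** Load a class by name at runtime and create it via its no-arg constructor. */
    public static Object loadAndCreate(String className) {
        try {
            Class<?> cls = Class.forName(className);       // resolved by the VM's classloader
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("could not load " + className, e);
        }
    }

    public static void main(String[] args) {
        // Identical call whether the class ships with the JDK or in a 3rd-party jar:
        Object list = loadAndCreate("java.util.ArrayList");
        System.out.println(list.getClass().getName());
    }
}
```

The same call site could load a plugin class by name at runtime, with no COM-style ceremony; the trade-off, as noted above, is that resolution always happens dynamically.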
 2) Better tools for profiling, debugging, reflection, and runtime 
 instrumentation than are typically available for natively-compiled 
 languages.
I attribute this to two things, none of which are a characteristic of a VM: 1) Java is a very easy language to parse, with well defined semantics. This makes it easy to develop such tools for it. C++, on the other hand, is disastrously difficult to parse. 2) The two VMs out there have billions and billions of dollars sunk into them to create tools, no matter how easy/hard that might be.
You can use the "billions of dollars" excuse if you like, but the "easy to parse" excuse doesn't hold water. Notice, I'm not talking about refactoring tools or code-coverage tools, or anything like that. I'm talking about profiling, debugging, reflection, and runtime instrumentation.

Take debugging, for example. It's possible to hook a debugger to an already-running instance of the JVM on a remote machine, and you can do that without a special debug build of the application. The application binaries always contain the necessary symbols for debugging, so it's always possible to debug applications.

The JVM has a debugging API which provides methods for suspending and resuming execution, walking the objects on the heap, querying objects on the stack, evaluating expressions, setting normal and conditional breakpoints, and replacing or redefining entire class definitions in-place (without restarting the application). Essentially, the JVM already includes the complete functionality of a full-featured debugger. The debugging API is just a mechanism for controlling that debugger from a 3rd-party application, like a debugging GUI.

Without a VM, I don't know how you could get a debugger implemented just by connecting some GUI code to a debugging API. The x86 doesn't have a debugger built-in.

The same thing is true of profiling, reflection, and instrumentation. It has *nothing* to do with the semantics of the language, or with the syntax being "easy to parse". It has everything to do with the fact that a VM can provide hooks for looking inside itself. A non-virtual machine doesn't do that.
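The full remote-debugging machinery described here is JPDA/JDWP, but a small, runnable taste of "the VM provides hooks for looking inside itself" is the standard `java.lang.management` API, which lets any code (or a remote JMX client) query a live, release-build VM. A minimal sketch; the class name is mine, not from the thread:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

/** Sketch: querying a running VM's internals through its standard management beans. */
public class VmIntrospection {

    /** Number of live threads in this very VM, asked of the VM itself. */
    public static int liveThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    /** Heap bytes currently in use, again reported by the VM, not by the app. */
    public static long heapUsedBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.println("threads=" + liveThreads() + " heapUsed=" + heapUsedBytes());
    }
}
```

The same beans are what GUI tools attach to over JMX on a remote machine, which is the point being made: the hooks live in the VM, so no special build is needed.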
 3) Better memory management: with the memory manager located in the 
 VM, rather than in the application code, the collection of garbage is 
 much more well-defined. Since all classes are loaded into the same VM 
 instance, there's only a single heap. Consequently, there's never an 
 issue of what happens when an object passes from one module to 
 another (as can be the case when a native library passes an object 
 into the main application, or vice versa).
1) I wrote a GC for Java, back in the day. Doing a good GC is dependent on the right language semantics, having a VM has nothing to do with it. D works with add on DLLs by sharing a single instance of the GC.
I won't argue this one, since I don't know much about D's shared GC implementation.
 4) Better security/sandboxing. If you write a pluggable application 
 in C++, how will you restrict plugin authors from monkeying with your 
 application data structures? In the JVM or the CLR, the VM provides 
 security mechanisms to restrict the functionality of sandboxed code. 
 A particular CLR assembly might, for example, be restricted from 
 accessing the file system or the network connection. You can't do 
 that with native code.
Every single VM based system, from javascript to Word macros, has turned into a vector for compromising a system. That's why I run email with javascript, etc., all turned off. It's why I don't use Word. I know about the promises of security, but I don't believe it holds up in practice.
You may argue that certain VMs (I suppose JavaScript and VBA) have implemented their security functionality poorly. But the core concept, if implemented correctly (as in the JVM and the CLR), allows a hosting application to load a plugin and restrict the functionality of the executable code within that plugin, preventing it from accessing certain platform features or resources.

Natively-compiled code can't even *hope* to enforce that kind of isolation.

--benji
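A hedged sketch of the kind of restriction meant here, using the era's `SecurityManager` API (a real JDK class, though deprecated in modern JDKs; the `PluginSandbox` name is hypothetical). Rather than installing the manager process-wide, which newer JVMs disallow by default, this just invokes the check that a sandboxed file open would hit:

```java
/** Sketch of JVM-style sandboxing via the (deprecated but real) SecurityManager API. */
public class SandboxDemo {

    /** A manager that vetoes all file reads for "plugin" code. */
    static class PluginSandbox extends SecurityManager {
        @Override
        public void checkRead(String file) {
            throw new SecurityException("plugin may not read " + file);
        }
    }

    /** Returns true if the sandbox vetoed the access. */
    public static boolean blocked(String file) {
        try {
            new PluginSandbox().checkRead(file); // the check a sandboxed FileInputStream would trigger
            return false;
        } catch (SecurityException denied) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("read blocked: " + blocked("secret.txt"));
    }
}
```

When a manager like this is actually installed for plugin code, every file, socket, and property access inside the plugin funnels through such checks, which is the host-restricts-plugin scenario described above.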
May 01 2007
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Benji,

[...]

Most of your rebuttal basically says that programs in a VM can do X without the coder having to do something different than they would if they didn't do X. There is (lots of big problems aside) a simple solution to this problem in native code: don't allow the coder to NOT do X. Require that all classes be COM objects, always compile in debugging and profiling symbols, heck, maybe even a full net-centric debugger. It seems to me (and I could be wrong) that the way the VM languages get all the advantages of these options is by not making them options; they are requirements.

The only case that all of that doesn't cover is sandboxing. Well, something is going to have to run in native code, so why not make native code safe? Allow a process to spawn off a thread that is native code but sandboxed: some OS APIs don't work, it has read-only or no access to some part of RAM that the rest of the process can access. In short, if security is such a big deal, why is the VM doing it instead of the OS?
May 01 2007
parent Benji Smith <dlanguage benjismith.net> writes:
BCS wrote:
 Reply to Benji,
 
 [...]
 
 Most of your rebuttal basically says that programs in a VM can do X without the coder having to do something different than they would if they didn't do X. There is (lots of big problems aside) a simple solution to this problem in native code: don't allow the coder to NOT do X. Require that all classes be COM objects, always compile in debugging and profiling symbols, heck, maybe even a full net-centric debugger. It seems to me (and I could be wrong) that the way the VM languages get all the advantages of these options is by not making them options; they are requirements.
 
 The only case that all of that doesn't cover is sandboxing. Well, something is going to have to run in native code, so why not make native code safe? Allow a process to spawn off a thread that is native code but sandboxed: some OS APIs don't work, it has read-only or no access to some part of RAM that the rest of the process can access. In short, if security is such a big deal, why is the VM doing it instead of the OS?
Sure. Fair enough. You *could* maybe do all of that stuff with native code, if only someone had ever implemented it. ...Shrug...

Rather than speculating on what's theoretically possible in a natively compiled platform, I'm pointing out some of the advantages that exist *today* in VM-based platforms. I never claimed those advantages outweighed the considerable advantages of native compilation. I'm just saying there are some features that are *currently* being routinely provided in VM platforms that don't yet exist when you're compiling code to a native platform.

Jeez. Talk about throwing stones in glass houses...

--benji
May 01 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Benji Smith wrote:
 Walter Bright wrote:
 Brad Roberts wrote:
 Benji Smith wrote:
 1) Dynamic classloading. Linking is greatly simplified, and my code 
 doesn't need to be written differently depending on whether I'm 
 linking dynamically or statically.
That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.
Actually, COM illustrates my point quite nicely. If you're going to write a COM-compatible library, you have to plan for it from the start, inheriting from IUnknown, creating GUIDs, and setting up reference counting functionality.
Not if the language is designed to be COM-compatible from the start. You don't need a VM to inherit from IUnknown, create GUIDs, or do reference counting.
 
 Likewise, a consumer of a COM library has to know it uses COM semantics, 
 since the application code will have to query the interface using 
 COM-specific functions.
 
 Even in D, if I want to write code in a DLL (or call code from a DLL), 
 my ***CODE*** has to be aware of the existence of the DLL.
 
 In Java, the code I write is identical, whether I'm calling methods on 
 my own classes, calling methods on classes packaged up in a 3rd party 
 library, or packaging up my own library for distribution to other API 
 consumers.
 
 Since the VM provides all of the classloading functionality, the 
 application code and the library code is completely agnostic of calling 
 & linking conventions.

 Of course, the disadvantage of this is that there's no such thing as 
 static linking. *Everything* is linked dynamically. But at least I don't 
 have to rewrite my code just to create (or consume) a library.
If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.
 Take debugging, for example...
 
 It's possible to hook a debugger to an already-running instance of the 
 JVM on a remote machine. And you can do that without a special debug 
 build of the application. The application binaries always contain the 
 necessary symbols for debugging, so it's always possible to debug 
 applications.
If you want, you can always compile your native app with debug symbols on.
 The JVM has a debugging API which provides methods for suspending and 
 resuming execution, walking the objects on the heap, querying objects on 
 the stack, evaluating expressions, setting normal and conditional 
 breakpoints, replacing or redefining entire class definitions in-place 
 (without restarting the application).
 
 Essentially, the JVM already includes the complete functionality of a 
 full-featured debugger. The debugging API is just a mechanism for 
 controlling the debugger from a 3rd-party application, like a debugging 
 GUI.
 
 Without a VM, I don't know how you could get a debugger implemented just 
 by connecting some GUI code to a debugging API. The x86 doesn't have a 
 debugger built-in.
Most debuggers are able to attach themselves to running processes. The CPU itself does contain specific hardware to support debugging.
 The same thing is true of profiling, reflection, and instrumentation. It 
 has *nothing* to do with the semantics of the language, or with the 
 syntax being "easy to parse". It has everything to do with the fact that 
 a VM can provide hooks for looking inside itself. A non-virtual machine 
 doesn't do that.
Many profilers are able to hook into executables that have symbolic debug info present (Intel's comes to mind). Reflection can be done natively; D will get there. Instrumentation depends on what instrumentation is done: you can't do line-by-line code coverage analysis without recompiling with it turned on, even with Java, because the bytecode simply doesn't contain that information.
 Every single VM based system, from javascript to Word macros, has 
 turned into a vector for compromising a system. That's why I run email 
 with javascript, etc., all turned off. It's why I don't use Word. I 
 know about the promises of security, but I don't believe it holds up 
 in practice.
You may argue that certain VMs (I suppose JavaScript and VBA) have implemented their security functionality poorly. But the core concept, if implemented correctly (as in the JVM and the CLR) allows a hosting application to load a plugin and restrict the functionality of the executable code within that plugin, preventing it from accessing certain platform features or resources. Natively-compiled code can't even *hope* to enforce that kind of isolation.
The x86 processors have 4 rings of hardware protection built in. The idea is to do the isolation in hardware, not software, and it does work (one process crashing can't bring down another process). Where it fails is where Windows runs all processes at ring 0. This is a terrible design mistake. The CPU *is* designed to provide the sandboxing that a VM can provide. Also, as VMware has demonstrated, the virtualization of hardware can provide complete sandbox capability.

Another example of this sort of hardware sandboxing is if you run 16-bit DOS code under Windows. The virtualization software sets up a "DOS box" which is completely controlled by hardware, so any interrupts, I/O port instructions, etc., are intercepted by the hardware and transferred to software that decides what to do, whether to allow/deny, etc.

These capabilities are all there in the hardware. The fact that systems software often fails to use them is no more of a fundamental flaw than the fact that all the VM systems are so routinely compromised that people run their mail and browsers with scripting disabled.
May 01 2007
parent reply Benji Smith <dlanguage benjismith.net> writes:
Walter Bright wrote:
 Benji Smith wrote:
 Actually, COM illustrates my point quite nicely...
Not if the language is designed to be COM-compatible from the start. You don't need a VM to inherit from IUnknown, create GUIDs, or do reference counting. If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.
Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?
 Without a VM, I don't know how you could get a debugger implemented 
 just by connecting some GUI code to a debugging API. The x86 doesn't 
 have a debugger built-in.
Most debuggers are able to attach themselves to running processes. The CPU itself does contain specific hardware to support debugging.
Cool. I didn't know that.
 Many profilers are able to hook into executables that have symbolic 
 debug info present (Intel's comes to mind). Reflection can be done 
 natively - D will get there. Instrumentation - depends on what 
 instrumentation is done. You can't do line-by-line code coverage 
 analysis without recompiling with such turned on, even with Java, 
 because the bytecode simply doesn't contain that information.

 Natively-compiled code can't even *hope* to enforce that kind of 
 isolation.
The x86 processors have 4 rings of hardware protection built in. The idea is to do the isolation in hardware, not software, and it does work (one process crashing can't bring down another process). Where it fails is where Windows runs all processes at ring 0. This is a terrible design mistake. The CPU *is* designed to provide the sandboxing that a VM can provide. Also, as VMware has demonstrated, the virtualization of hardware can provide complete sandbox capability.
Lots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc.

I'd actually argue, though, that these kinds of features are actually VM features, even if they have been implemented on silicon. Since these kinds of functions provide an outside observer with a view into the machine's internals, I think they're more naturally implemented in a virtual machine (and VMs will, no doubt, be the environments where the most interesting research is conducted into new techniques for profiling, debugging, instrumentation, etc.).

If you want these kinds of meta-platform features baked into silicon, or solidified in your platform, you either need to wait twenty years for the market to prove their viability, or you can get them in next year's VM technologies.

--benji

PS: Keep in mind, I'm playing devil's advocate here, not because I have anything against compilation for a native platform, but because I think there are lots of interesting innovations in the VM universe that could be useful to D.
May 01 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Benji Smith wrote:
 
 Lots of great info. Thanks. I didn't know that the x86 had support for 
 profiling, debugging, sandboxing, etc.
 
 I'd actually argue, though, that these kinds of features are actually VM 
 features, even if they have actually been implemented on silicon.
Since the name "virtual machine" implies the virtualization of a machine, it seems reasonable that a good VM would provide all the features normally found in a non-virtual (i.e. real) machine. Why should these features be offered only in software? Particularly at a time when support for VMs is being explicitly added to hardware to improve performance?

Sean
May 01 2007
parent Benji Smith <dlanguage benjismith.net> writes:
Sean Kelly wrote:
 Benji Smith wrote:
 Lots of great info. Thanks. I didn't know that the x86 had support for 
 profiling, debugging, sandboxing, etc.

 I'd actually argue, though, that these kinds of features are actually 
 VM features, even if they have actually been implemented on silicon.
Since the name "virtual machine" implies the virtualization of a machine, it seems reasonable that a good VM would provide all the features normally found in a non-virtual (ie. real) machine. Why should these features be offered only in software? Particularly at a time where hardware support for VMs is being explicitly added to hardware to improve performance? Sean
I agree. A good virtual machine will provide all of the features of a real machine. The opposite, though, is not necessarily true. A real machine doesn't necessarily provide all of the features of a typical virtual machine. --benji
May 01 2007
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Benji Smith wrote:
 Walter Bright wrote:
 Benji Smith wrote:
 If you designed a language around COM, you'd get all that stuff for 
 free, too. I agree that using COM in C++ is a bit clunky, but after 
 all, C++ was designed before there was COM.
Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?
There are languages designed around COM; if one were to build a native compiler for them, that can be done. I didn't design D to map directly onto COM because COM is a dying technology.
 The x86 processors have 4 rings of hardware protection built in. The 
 idea is to do the isolation in hardware, not software, and it does 
 work (one process crashing can't bring down another process). Where it 
 fails is where Windows runs all processes at ring 0. This is a 
 terrible design mistake. The CPU *is* designed to provide the 
 sandboxing that a VM can provide. Also, as VMware has demonstrated, 
 the virtualization of hardware can provide complete sandbox capability.
Lots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc.
The problem is it's simply easier to just write a VM. But when you've got a billion dollars to spend, there's no need to take the easy route.
 I'd actually argue, though, that these kinds of features are actually VM 
 features, even if they have actually been implemented on silicon. Since 
 these kinds of functions provide an outside observer with a view into 
 the machine's internals, I think they're more naturally implemented in a 
 virtual machine (and VMs will, no doubt, be the environments where the 
 most interesting research is conducted into new techniques for 
 profiling, debugging, instrumentation, etc).
These features existed in the x86 since the mid 1980's, a decade before the Java VM and 15 years before the CLR. Mainframe hardware virtualization has existed for much longer.
 If you want these kinds of meta-platform features baked into silicon, or 
 solidified in your platform, you either need to wait twenty years for 
 the market to prove their viability, or you can get them in next year's 
 VM technologies.
Hardware sandboxing on the x86 has been around at least since the infamous 286 "penalty box". The 286 was Intel's first try at hardware virtualization, and a lot of mistakes were made. The 386 got it right, and the first fruits of that came in Windows-386, which provided multiple virtual DOS sessions.

The original 8086 had no virtualization capability, and as a result it was a *terrible* platform for software development. Any errant program could pull down the whole system. With the 286 came 'protected mode', where errant pointers were trapped by the hardware. It was the first sandboxing for x86.
 PS: Keep in mind, I'm playing devil's advocate here, not because I have 
 anything against compilation for a native platform, but because I think 
 there are lots of interesting innovation in the VM universe that could 
 be useful to D.
Software VM features can certainly drive forward adoption of hardware features. They always have <g>.
May 01 2007
prev sibling parent Fredrik Olsson <peylow gmail.com> writes:
Benji Smith skrev:
 Walter Bright wrote:
 Benji Smith wrote:
<smip>
 If you designed a language around COM, you'd get all that stuff for 
 free, too. I agree that using COM in C++ is a bit clunky, but after 
 all, C++ was designed before there was COM.
Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?
Visual Basic. The Visual Basic versions over time are pretty much a mirror of the capabilities of COM as implemented by Microsoft over time, including inheriting the limitations: the reason you cannot inherit a class from another class in Visual Basic is simply that you cannot inherit a component from another component in COM.

And the interfaces and classes (components) you create in Visual Basic are usable as COM interfaces and components from C++, or whatever you like.

// Fredrik
May 05 2007
prev sibling parent reply Tom <tom nospam.com> writes:
You people can list a million (mostly theoretical) benefits of having a VM. Java/.NET apps will continue to be damn slow despite these claims (Java the most). That is the simple and self-evident truth. Aside from that, the idea of having a CPU core for the exclusive use of a VM is a *total* waste. I don't trust in hardware solutions for software problems.

Just my opinion. :)

Tom;

Benji Smith escribió:
 Walter Bright wrote:
 I just don't get the reason for a VM. It seems like a solution looking 
 for a problem.
Some of the benefits of using a VM platform:

1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.

2) Better tools for profiling, debugging, reflection, and runtime instrumentation than are typically available for natively-compiled languages.

3) Better memory management: with the memory manager located in the VM, rather than in the application code, the collection of garbage is much more well-defined. Since all classes are loaded into the same VM instance, there's only a single heap. Consequently, there's never an issue of what happens when an object passes from one module to another (as can be the case when a native library passes an object into the main application, or vice versa).

4) Better security/sandboxing. If you write a pluggable application in C++, how will you restrict plugin authors from monkeying with your application data structures? In the JVM or the CLR, the VM provides security mechanisms to restrict the functionality of sandboxed code. A particular CLR assembly might, for example, be restricted from accessing the file system or the network connection. You can't do that with native code.

Sure, it's possible for natively-compiled languages to offer most of the same bells and whistles as dynamic languages or VM-based platforms. But, in the real world, those abstractions are usually difficult to implement in native code, so they become available much more readily in a virtual machine.

--benji
Apr 30 2007
next sibling parent reply Mike Parker <aldacron71 yahoo.com> writes:
Tom wrote:
 You people can list a million of (mostly) theoretical benefits in having 
 a VM. Java/.NET apps will continue to be damn slow despite of these 
 statements (Java the most). That is the simple and self-evident truth. 
 Aside from, the idea of having a CPU core for the exclusive use of a VM 
 is a *total* waste. I don't trust in hardware solutions for software 
 problems.
Have you looked at some of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people who see real benefits in doing so. The fact that you don't doesn't make it less true that they do.

I've used Java for a variety of applications. I have a good feel for what I think it is and isn't suitable for. What is and isn't beneficial is highly subjective.

And really, someone who has never taken the time to roll their sleeves up and dive into a language can really only speculate about it. How many times have we seen C++ programmers dis D after glancing at the feature comparison list without ever writing a line of D code? When you have actually used a language in anger, you have a much better perspective as to what its strengths and weaknesses are. The benefits they see are not theoretical. To most Java programmers I know, speed is rarely a concern (though it does pop up occasionally, particularly with trig functions). If they weren't satisfied with the performance characteristics they wouldn't be using it. They are more often concerned with distribution, or the market penetration of a particular JRE version.

Java and .NET both have a place. The benefits users see from them may or may not be related to the existence of a VM, but those who do use the languages usually do see benefits of some kind. Otherwise they'd all be using C or C++.
May 01 2007
parent reply Tom <tom nospam.com> writes:
Mike Parker escribió:
 Tom wrote:
 You people can list a million of (mostly) theoretical benefits in 
 having a VM. Java/.NET apps will continue to be damn slow despite of 
 these statements (Java the most). That is the simple and self-evident 
 truth. Aside from, the idea of having a CPU core for the exclusive use 
 of a VM is a *total* waste. I don't trust in hardware solutions for 
 software problems.
of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people who do. The fact that you don't doesn't make it less true that they do. I've used Java for a variety of applications.
(I have no position on this ground.) Though I've seen *A LOT* of server/client apps done in Java. The speed *IS* a concern, believe me. They ARE definitely slow in comparison to C/C++ apps.

On the other hand, I remember a great game that was written in a mix of C++/Python, and was REALLY GOOD and fast: Blade of Darkness was its name IIRC. Though the speed-critical code was C++, so...
 I have a good feel for 
 what I think it is and isn't suitable for. What is and isn't beneficial 
 is highly subjective.
 
 And really, someone who has never taken the time to roll their sleeves 
 up and dive into a language can really only speculate about it. How many 
 times have we seen C++ programmers dis D after glancing at the feature 
 comparison list without ever writing a line of D code? When you have 
 actually used a language in anger, you have a much better perspective as 
 to what its strengths and weaknesses are. 
Ehm, I work with Java/Perl the better part of the time. So, I think I've rolled my sleeves up a lot with it. :)
 The benefits they see are not 
 theoretical. To most Java programmers I know, speed is rarely a concern 
 (though it does pop up occasionally, particularly with trig functions). 
 If they weren't satisfied with the performance characteristics they 
 wouldn't be using it. They are more often concerned with distribution, 
 or the market penetration of a particular JRE version.
I can't deny the benefits, and they're not ALL theoretical. Still, Java has a lot of drawbacks in the performance department. It's really good (slow, but good) for server-side apps.
 Java and .NET both have a place. The benefits users see from them may or 
 may not be related to the existence of a VM, but those who do use the 
 languages usually do see benefits of some kind. Otherwise they'd all be 
 using C or C++.
Of course, and coming from the C++ world, that's why I like D so much.
May 01 2007
parent reply Dave <Dave_member pathlink.com> writes:
Tom wrote:
 Mike Parker escribió:
 Tom wrote:
 You people can list a million (mostly) theoretical benefits of 
 having a VM. Java/.NET apps will continue to be damn slow despite 
 these statements (Java most of all). That is the simple and self-evident 
 truth. Aside from that, the idea of having a CPU core for the exclusive 
 use of a VM is a *total* waste. I don't trust hardware solutions 
 for software problems.
[...] some of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people who [see benefits in] doing so. The fact that you don't doesn't make it less true that they do. I've used Java for a variety of applications.
[...] position on this ground). Though I've seen *A LOT* of server/client apps done in Java. The speed *IS* a concern, believe me. They ARE definitely slow in comparison to C/C++ apps.
Ok, I know Perl is specialized for this type of thing (with many of the libs written in C), but for small programs handling large chunks of data, Java has rarely been a consideration in the shops I've recently worked at. And believe me, it's not for lack of trying, because it's easier to find decent Java hackers than good C or Perl hackers, IME. I remember actually scripting something like:

  if (file_size > X)
      java -server -XmsY -XmxZ App
  else
      java -client App

and having to experiment to set X, Y and Z, and Perl still worked better. What a PITA. More of the same w/ .NET (speed-critical stuff in native C++), although the .NET GC is very good and generally hard to beat with hand-crafted mem. mgmt. (again IME).
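The dispatch logic above can be sketched as a small, runnable helper. The 64 MB threshold and the heap sizes below are illustrative stand-ins for the X, Y and Z placeholders that had to be found by experiment; they are not the original numbers.

```python
import os
import subprocess

# Hypothetical cutoff standing in for X; the real value had to be tuned.
THRESHOLD = 64 * 1024 * 1024  # 64 MB

def jvm_flags(file_size, threshold=THRESHOLD):
    """Choose -server with a tuned heap for large inputs, where JIT
    warm-up pays off, and -client for small ones, where startup time
    dominates. Heap sizes stand in for the Y and Z placeholders."""
    if file_size > threshold:
        return ["-server", "-Xms256m", "-Xmx1024m"]
    return ["-client"]

def launch(path):
    # Build the java command line from the input file's size.
    cmd = ["java", *jvm_flags(os.path.getsize(path)), "App", path]
    return subprocess.run(cmd)

print(jvm_flags(1024))                # ['-client']
print(jvm_flags(128 * 1024 * 1024))   # ['-server', '-Xms256m', '-Xmx1024m']
```

The point of keeping the choice in one function is that the threshold can be re-tuned in one place as the workload changes, which was exactly the painful part of the original script.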
 On the other hand, I remember a great game that was written in a mix of 
 C++/Python, and was REALLY GOOD and fast: Blade of darkness was its name 
 IIRC. Though, the speed code was C++, so...
 
 I have a good feel for what I think it is and isn't suitable for. What 
 is and isn't beneficial is highly subjective.

 And really, someone who has never taken the time to roll their sleeves 
 up and dive into a language can really only speculate about it. How 
 many times have we seen C++ programmers dis D after glancing at the 
 feature comparison list without ever writing a line of D code? When 
 you have actually used a language in anger, you have a much better 
 perspective as to what its strengths and weaknesses are. 
Ehm, I work with Java/Perl the better part of the time. So I think I've rolled my sleeves up with it plenty. :)
 The benefits they see are not theoretical. To most Java programmers I 
 know, speed is rarely a concern (though it does pop up occasionally, 
 particularly with trig functions). If they weren't satisfied with the 
 performance characteristics they wouldn't be using it. They are more 
 often concerned with distribution, or the market penetration of a 
 particular JRE version.
I can't deny the benefits, and they're not ALL theoretical. Still, Java has a lot of drawbacks in the performance department. It's really good (slow, but good) for server-side apps.
 Java and .NET both have a place. The benefits users see from them may 
 or may not be related to the existence of a VM, but those who do use 
 the languages usually do see benefits of some kind. Otherwise they'd 
 all be using C or C++.
Of course, and coming from the C++ world, that's why I like D so much.
May 01 2007
parent Tom <tom nospam.com> writes:
Dave escribió:
 Tom wrote:
 Mike Parker escribió:
 Tom wrote:
[...]

 position on this ground). Though I've seen *A LOT* of server/client 
 apps done in Java. The speed *IS* a concern, believe me. They ARE 
 definitely slow in comparison to C/C++ apps.
Ok, I know Perl is specialized for this type of thing (with many of the libs written in C), but for small programs handling large chunks of data, Java has rarely been a consideration in the shops I've recently worked at. And believe me, it's not for lack of trying, because it's easier to find decent Java hackers than good C or Perl hackers, IME. I remember actually scripting something like:

  if (file_size > X)
      java -server -XmsY -XmxZ App
  else
      java -client App

and having to experiment to set X, Y and Z, and Perl still worked better. What a PITA. More of the same w/ .NET (speed-critical stuff in native C++), although the .NET GC is very good and generally hard to beat with hand-crafted mem. mgmt. (again IME).
I love Perl, but once the project surpasses X lines of code (i.e. gets big enough), dynamic typing is just prohibitive. [...] Then, if I had [...], I would choose D without hesitation. ;)

Tom; (Tomás Rossi)
May 01 2007
prev sibling parent Jan Claeys <usenet janc.be> writes:
Op Tue, 01 May 2007 02:55:44 -0300
schreef Tom <tom nospam.com>:

 You people can list a million (mostly) theoretical benefits in
 having a VM.
"Virtual machines" (implemented in software) & "real machines" (implemented in hardware, aka "CPUs") are both "machines". VMs have the advantage that they are easier & faster to change and also cheaper to (re)produce, that's also why every modern CPU starts life as a VM during its design & development. -- JanC
May 01 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Ary Manzana wrote:
 lubosh escribió:
 Hi all,

 I wonder what you all think about the future of programming platforms. I've [...] about D.

 Honestly, I feel quite refreshed to re-discover native compilation in 
 D again.
Me too. :-) It seems so much more lightweight than the .NET framework or Java. Why is there so much push on the market (Microsoft, Sun) for executing source code within virtual machines? Do we really need yet another layer between hardware and our code?
I think a big reason for .NET was the Itanium. It was going to make it possible to write apps on x86 which would run without modification when we all switched to Itanium. We needed a virtual machine to isolate us from the thing which was likely to change (the CPU).

Java had a VM so it could run on SPARC (now dead), Alpha (now dead), Itanium (never really alive), PowerPC, and x86. Instead, x86 asm now runs natively on the latest Macs.
Apr 30 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 Java had a VM so it could run on SPARC (now dead), Alpha (now dead), 
 Itanium (never really alive), PowerPC, and x86.
Java was originally intended for embedded systems with very tight memory requirements, and having an interpreter is an easy way to squeeze more functionality into them. It's also hard to write a back end, so writing an interpreter instead is quicker and gets you to market faster.

Also, early Javas were interpreter only. JITs didn't come until much later, and the first one wasn't developed by Sun; it was developed by Symantec.
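Walter's point about interpreters being far less work than native back ends is easy to see in miniature: a bytecode interpreter is just a dispatch loop over opcodes, with no instruction selection, register allocation or relocation to worry about. The toy stack machine below is invented for illustration; it is not Java bytecode.

```python
# A toy stack-machine interpreter: the whole "back end" is one dispatch
# loop. Emitting real native code for even this tiny instruction set
# would take far more machinery, which is why early VMs shipped
# interpreters first and grew JITs later.

PUSH, ADD, MUL = "PUSH", "ADD", "MUL"

def run(program):
    stack = []
    for op, *args in program:
        if op == PUSH:
            stack.append(args[0])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * 4
print(run([(PUSH, 2), (PUSH, 3), (ADD,), (PUSH, 4), (MUL,)]))  # 20
```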
Apr 30 2007
prev sibling next sibling parent Sean Kelly <sean f4.ca> writes:
lubosh wrote:
 Hi all,
 
 I wonder what you all think about the future of programming platforms. I've [...]

 Honestly, I feel quite refreshed to re-discover native compilation in D again.
It seems so much more lightweight than the .NET framework or Java. Why is there
so much push on the market (Microsoft, Sun) for executing source code within
virtual machines? Do we really need yet another layer between hardware and our
code? What's your opinion? I wonder how much of a stir D would cause if it had
a nice, powerful standardized library and a really good IDE (like VS.NET).
Java runs on a VM largely because it allows proprietary applications to be run on any platform with a supporting VM. The alternative would be to distribute code in source form and have the user build locally, or to pre-build for every target platform (which is not always feasible). By contrast, the primary reason for .NET running in a VM is language interoperability (since .NET is a COM replacement). I would say that a VM-based D would be useful in the same situations, though I don't have a need for this myself. Sean
Apr 30 2007
prev sibling next sibling parent reply lubosh <lubosha gmail.com> writes:
Sean Kelly Wrote:

 Java runs on a VM largely because it allows proprietary applications to 
 be run on any platform with a supporting VM. The alternative would be 
 to distribute code in source form and have the user build locally, or to 
 pre-build for every target platform (which is not always feasible).
I don't mind doing a build for every target platform. There's a lot of I/O and CPU overhead in initializing the JIT compiler and compiling source at runtime. Users are constantly complaining about start-up times. That's why Microsoft provides a utility called NGEN, which produces native binaries from .NET bytecode so JIT compilation won't be needed. The whole .NET framework is practically NGENed during installation.

I understand JIT compilation is not going away, especially for dynamic languages such as Python, but I'm not sure we can really squeeze that much more from JIT compilation of statically-typed languages. If there are not going to be significant performance gains in comparison to running pre-compiled programs, then I suppose we're just adding one unnecessary layer and Java and .NET are going in the wrong direction. I'm just looking for answers: is JIT compilation for statically-typed languages doomed, or does it have any hope?

Lubos
Apr 30 2007
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
lubosh wrote:
 Sean Kelly Wrote:
 
 Java runs on a VM largely because it allows proprietary applications to 
 be run on any platform with a supporting VM. The alternative would be 
 to distribute code in source form and have the user build locally, or to 
 pre-build for every target platform (which is not always feasible).
 I don't mind doing a build for every target platform. There's a lot of I/O and CPU overhead in initializing the JIT compiler and compiling source at runtime. Users are constantly complaining about start-up times. That's why Microsoft provides a utility called NGEN, which produces native binaries from .NET bytecode so JIT compilation won't be needed. The whole .NET framework is practically NGENed during installation. I understand JIT compilation is not going away, especially for dynamic languages such as Python, but I'm not sure we can really squeeze that much more from JIT compilation of statically-typed languages. If there are not going to be significant performance gains in comparison to running pre-compiled programs, then I suppose we're just adding one unnecessary layer and Java and .NET are going in the wrong direction. I'm just looking for answers: is JIT compilation for statically-typed languages doomed, or does it have any hope?
I don't think JIT as performed by Java and .NET makes any sense; it's performed far too late. However, the fast Fourier transform code at www.fftw.org is a stunning example of an alternative. It compiles several algorithms and profiles each of them, then links in the fastest one. You have to be able to JIT the algorithm; JITing the code generation step is useless.
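The FFTW trick Don describes, empirical autotuning, benchmarks several candidate implementations on a representative input and then commits to the winner. A minimal sketch of the idea, using stand-in candidates rather than FFTW's actual planner:

```python
import time

def sum_loop(xs):
    # Candidate 1: explicit accumulation loop.
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    # Candidate 2: the built-in, usually faster in CPython.
    return sum(xs)

def autotune(candidates, sample, trials=5):
    """Time each candidate on a representative sample and return the
    fastest one -- the FFTW 'planner' idea in miniature."""
    def cost(fn):
        best = float("inf")
        for _ in range(trials):
            start = time.perf_counter()
            fn(sample)
            best = min(best, time.perf_counter() - start)
        return best
    return min(candidates, key=cost)

# Plan once on sample data, then reuse the winner on real workloads.
fast_sum = autotune([sum_loop, sum_builtin], sample=list(range(10_000)))
print(fast_sum([1, 2, 3]))  # 6, whichever candidate won
```

The planning cost is paid once up front, which is why this pays off for workloads that reuse the same shape of computation many times, as FFT plans do.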
Apr 30 2007
parent Mike Parker <aldacron71 yahoo.com> writes:
Don Clugston wrote:

 I don't think JIT as performed by Java and .NET makes any sense; it's 
 performed far too late.
You might be interested in this article: http://www-128.ibm.com/developerworks/java/library/j-rtj2/index.html
May 01 2007
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
lubosh wrote:
 Sean Kelly Wrote:
 
 That's why Microsoft provides a utility called NGEN, which produces native
binaries from .NET bytecode so JIT compilation won't be needed. The whole .NET
framework is practically NGENed during installation.
 
Ah, interesting, so that's why the installation of the .NET framework takes a rather long time; mystery explained. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
May 03 2007
parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Bruno Medeiros wrote:
 lubosh wrote:
 Sean Kelly Wrote:

 That's why Microsoft provides a utility called NGEN, which produces 
 native binaries from .NET bytecode so JIT compilation won't be needed. 
 The whole .NET framework is practically NGENed during installation.
Ah, interesting, so that's why the installation of the .NET framework takes a rather long time; mystery explained.
But what's truly ridiculous is that .NET has exactly *one* target platform. -- - EricAnderton at yahoo
May 03 2007
next sibling parent reply Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
Pragma wrote:

 Bruno Medeiros wrote:
 lubosh wrote:
 Sean Kelly Wrote:

 That's why Microsoft provides a utility called NGEN, which produces
 native binaries from .NET bytecode so JIT compilation won't be needed.
 The whole .NET framework is practically NGENed during installation.
Ah, interesting, so that's why the installation of the .NET framework takes a rather long time; mystery explained.
But what's truly ridiculous is that .NET has exactly *one* target platform.
Hehe, on Slashdot a 'you must be new here' reply would be modded +5 Informative :P

Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people into this new highly portable platform with IP stuff, patents and DMCA. Problem solved.

It's interesting to see how much effort MS has put into .NET platform and language research lately (well, except Java for some unknown reason :P). I don't think they will be giving it all away for free.
May 03 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Jari-Matti Mäkelä wrote:
 
 Yeah, of course it makes sense. Let's abstract away the underlying hardware
 & operating system and lock people on this new highly portable platform
 with IP stuff, patents and DMCA. Problem solved.
To be fair, MS does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET was as a COM replacement anyway, regardless of how things have been spun.
 It's interesting to see how much effort MS has put into .NET platform and
 language research (well, except Java for some unknown reason :P) lately. I
 don't think they will be giving it all away for free.
They have to. The CLI is an open standard. They may choose to sell their implementation of it of course, but they can't forbid anyone from implementing a compatible VM. Sean
May 03 2007
next sibling parent Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
Sean Kelly wrote:
 Jari-Matti Mäkelä wrote:
 
 Yeah, of course it makes sense. Let's abstract away the underlying
 hardware & operating system and lock people on this new highly portable
 platform with IP stuff, patents and DMCA. Problem solved.
To be fair, MS does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET was as a COM replacement anyway, regardless of how things have been spun.
I was being a bit overdramatic. It might also be the only way to get rid of legacy x86 support, if that's ever possible.
May 03 2007
prev sibling next sibling parent "Anders Bergh" <anders1 gmail.com> writes:
Don't forget about PowerPC and IA-64. XNA lets you write games for the [...]

I think .NET is more of an effort to kill Java than to replace COM, though.

On 5/3/07, Sean Kelly <sean f4.ca> wrote:
 To be fair, Ms does target ARM as well, for its handheld devices.
 Though I wonder if those devices have a full .NET VM.  In any case,
 pre-generating binary code is obviously more efficient, so why not use
 it for a VM?  The original point of .NET is a COM replacement anyway,
 regardless of how things have been spun.
-- Anders
May 03 2007
prev sibling parent James Dennett <jdennett acm.org> writes:
Sean Kelly wrote:
 Jari-Matti Mäkelä wrote:
 Yeah, of course it makes sense. Let's abstract away the underlying
 hardware
 & operating system and lock people on this new highly portable platform
 with IP stuff, patents and DMCA. Problem solved.
To be fair, MS does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET was as a COM replacement anyway, regardless of how things have been spun.
 It's interesting to see how much effort MS has put into .NET platform and
 language research (well, except Java for some unknown reason :P)
 lately. I
 don't think they will be giving it all away for free.
They have to. The CLI is an open standard. They may choose to sell their implementation of it of course, but they can't forbid anyone from implementing a compatible VM.
Being a standard doesn't mean that it's free of patent problems, so it may not be freely implementable. Patents *do* allow you a monopoly on devices implementing their claims. (Though recent US Supreme Court rulings might help to reduce the lunacy that has been ruling the software industry of late.) -- James
May 03 2007
prev sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Jari-Matti Mäkelä wrote:
 Pragma wrote:
 
 Bruno Medeiros wrote:
 lubosh wrote:
 Sean Kelly Wrote:

 That's why Microsoft provides a utility called NGEN, which produces
 native binaries from .NET bytecode so JIT compilation won't be needed.
 The whole .NET framework is practically NGENed during installation.
Ah, interesting, so that's why the installation of the .NET framework takes a rather long time; mystery explained.
But what's truly ridiculous is that .NET has exactly *one* target platform.
Hehe, on slashdot a 'you must be new here' reply would be modded +5 informative :P Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved. It's interesting to see how much effort MS has put into .NET platform and language research (well, except Java for some unknown reason :P) lately. I don't think they will be giving it all away for free.
True enough. Perhaps this is your reply sailing right over my head, but I was commenting more about how the .NET installer spends all this effort NGEN-ing the CLI distribution on install (supposedly anyway). If they're deploying to just one target platform, why wouldn't they just pre-compile before release? But you have a point - they're obviously not trying to solve any portability problems. -- - EricAnderton at yahoo
May 04 2007
parent Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
Pragma wrote:
 Perhaps this is your reply sailing right over my head, but I
 was commenting more about how the .NET
 installer spends all this effort NGEN-ing the CLI distribution on install
 (supposedly anyway).  If they're deploying to just one target platform,
 why wouldn't they just pre-compile before release?
Oh, that. I've probably spent one year too many compiling Gentoo; it didn't even occur to me until some time after pressing 'Send'. :)
May 04 2007
prev sibling parent reply Joel Lucsy <jjlucsy gmail.com> writes:
Pragma wrote:
 But what's truly ridiculous is that .NET has exactly *one* target platform.
Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64-bit? I'm pretty sure Microsoft is counting those variations as "platforms". -- Joel Lucsy "The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
May 03 2007
parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Joel Lucsy wrote:
 Pragma wrote:
 But what's truly ridiculous is that .NET has exactly *one* target 
 platform.
Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64-bit? I'm pretty sure Microsoft is counting those variations as "platforms".
Good point. -1 for me for not recalling what started this particular portion of the thread. ;) -- - EricAnderton at yahoo
May 04 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Pragma wrote:
 Joel Lucsy wrote:
 Pragma wrote:
 But what's truly ridiculous is that .NET has exactly *one* target 
 platform.
Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64-bit? I'm pretty sure Microsoft is counting those variations as "platforms".
Good point. -1 for me for not recalling what started this particular portion of the thread. ;)
Yup, that's what I was going to say: platform != CPU configuration. ^^ -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
May 06 2007
prev sibling parent reply Boris Kolar <boris.kolar globera.com> writes:
lubosh Wrote:
 Do we really need yet another layer between hardware and our code? 
Both the JVM and the CLR (.NET) are badly designed. Both platforms are too tightly tied [...] guarantees or performance gains.

Fundamentally, we don't need another layer between hardware and code. But since the design of our typical hardware (like x86) is not very good either, a VM can actually improve both performance (hardware sandboxing, for example, does not perform very well and doesn't allow enough granularity) and security (native code is extremely difficult to analyze from a security point of view). The VM then basically becomes what your hardware should be.

I'm generally in favor of lightweight VMs that hide hardware deficiencies and differences. Such a VM can improve code compactness, allow for more aggressive inlining, provide security and reliability guarantees,... Another significant advantage is that it would greatly reduce the complexity of generating code at runtime (and generally promote a more layered approach to computation, like Lisp-like features).
May 02 2007
parent Trish Jones <trishjo gmail.com> writes:
Have you checked out the work of Ian Piumarta? He has done some very interesting
work on 'live' compilation of dynamic languages to native code. He is currently
working with Alan Kay on their next-generation Smalltalk, but the technology
seems to be applicable to most languages.

Links to a lot of info can be found in this blog post:
http://www.equi4.com/jcw/files/bcf5635ccbc5b6ab916a38ef7aaa844b-139.html

Boris Kolar Wrote:
 Both JVM and CLR (.NET) are badly designed. Both platforms are too tightly
tied [...] security guarantees or performance gains.
 
 Fundamentally, we don't need another layer between hardware and code. But
since design of our typical hardware (like x86) is not very good either, VM can
actually improve both performance (hardware sandboxing, for example, does not
perform very well and doesn't allow enough granularity) and security (native
code is extremely difficult to analyze from security point of view). VM then
basically becomes what your hardware should be.
 
 I'm generally in favor of lightweight VMs that hide hardware deficiencies and
differences. Such VM can improve code compactness, allow for more aggressive
inlining, provide security and reliability guarantees,... Another significant
advantage is that it would greatly reduce complexity of generating code at
runtime (and generally promote a more layered approach to computation, like
Lisp-like features). 
May 02 2007