
digitalmars.D - Re: D vs. C#

reply Jussi Jumppanen <jussij zeusedit.com> writes:
Yigal Chripun Wrote:

 3) i don't see enough commitment from MS to the .net platform. 

Just give it a few years. I think Microsoft's longer term vision is to have .NET everywhere and I mean everywhere.
Oct 21 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.
Oct 21 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

It's easier to change the functionality built into a VM than it is for hard-coded silicon.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/10/2007 12:55:43 PM
Oct 21 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:
 
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.


It's easier to change the functionality built into a VM than it is for hard-coded silicon.

Since the VM ultimately runs on that silicon, it's hard to see how.
Oct 21 2007
parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 21 Oct 2007 22:06:44 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:
 
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.


It's easier to change the functionality built into a VM than it is for hard-coded silicon.

Since the VM ultimately runs on that silicon, it's hard to see how.

I suspect that we are talking about different things. When I say "VM" I'm referring to a Virtual Machine, that is, a CPU instruction set that is emulated by software. Because it is a software based emulation, it is easier/cheaper/faster to modify than silicon chips. The fact that a VM (the software) runs on a real machine is totally irrelevant to the reasons for having the VM.

For example, I might have a VM that enables me to run Commodore-64 executable files on my Intel PC. Or another VM that runs Knuth's MIX instruction set. In many cases a VM is an idealized machine being emulated, and compilers can create object code for the idealized machine. This is then run on real machines of totally different architectures. If the idealized machine is enhanced, only the VM is updated and the silicon chips running the VM don't have to be replaced. A real boon if you are selling software for the various proprietary CPUs embedded in devices to the mass consumer market.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/10/2007 5:14:37 PM
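To make that concrete, here is a minimal sketch in D of such an idealized machine emulated in software. The opcodes are toy ones invented for illustration, not any real VM's; the point is that only this loop has to change when the "machine" is enhanced or moved to new silicon:

    import std.stdio : writeln;

    // Toy instruction set for an "idealized machine".
    enum Op : ubyte { push, add, print, halt }

    // The whole "VM": a software loop that emulates the instruction set.
    void run(const(ubyte)[] code)
    {
        int[] stack;
        size_t pc = 0;
        while (cast(Op) code[pc] != Op.halt)
        {
            final switch (cast(Op) code[pc++])
            {
            case Op.push:
                stack ~= code[pc++];          // operand byte follows the opcode
                break;
            case Op.add:
                stack[$ - 2] += stack[$ - 1]; // fold the top two stack slots
                stack.length -= 1;
                break;
            case Op.print:
                writeln(stack[$ - 1]);
                break;
            case Op.halt:
                break;                        // unreachable; the loop test handles it
            }
        }
    }

    void main()
    {
        // "Object code" for the idealized machine: compute and print 2 + 3.
        ubyte[] program = [Op.push, 2, Op.push, 3, Op.add, Op.print, Op.halt];
        run(program);
    }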
Oct 22 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Sun, 21 Oct 2007 22:06:44 -0700, Walter Bright wrote:
 
 Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.


 It's easier to change the functionality built into a VM than it is for hard-coded silicon.


I suspect that we are talking about different things. When I say "VM" I'm referring to a Virtual Machine, that is, a CPU instruction set that is emulated by software. Because it is a software based emulation, it is easier/cheaper/faster to modify than silicon chips. The fact that a VM (the software) runs on a real machine is totally irrelevant to the reasons for having the VM.

I mean a VM like the Java VM or .net VM.
 For example, I might have a VM that enables me to run Commodore-64
 executable files on my Intel PC. Or another VM that runs Knuth's MIX
 instruction set. In many cases a VM is an idealized machine being emulated,
 and compilers can create object code for the idealized machine. This is
 then run on real machines of totally different architectures. If the
 idealized machine is enhanced, only the VM is updated and the silicon chips
 running the VM don't have to be replaced. A real boon if you are selling
 software for the various proprietary CPUs embedded in devices to the mass
 consumer market.

If the source code is portable, i.e. there is no undefined or implementation defined behavior, there's no reason that the VM object code should be more portable than the source. (And remember all the troubles with Java VMs behaving differently?)
Oct 22 2007
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
Oct 21 2007
next sibling parent reply "Dave" <Dave_member pathlink.com> writes:
"Robert Fraser" <fraserofthenight gmail.com> wrote in message 
news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way someone has to develop either a VM or a compiler for each platform. APIs are really more a function of a library than a VM, IMO.

Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't, with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. For example, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right into the binary.
Oct 21 2007
next sibling parent reply "Dave" <Dave_member pathlink.com> writes:
"David Brown" <dlang davidb.org> wrote in message 
news:mailman.497.1193030905.16939.digitalmars-d puremagic.com...
 On Sun, Oct 21, 2007 at 11:21:37PM -0500, Dave wrote:

 Runtime reflection aside, I can't think of anything a VM can do that a 
 static compiler couldn't with the possible (but largely unproven) 
 exception of sometimes generating better code because of access to 
 runtime info.

I believe most already do this kind of analysis. I'm not sure it helps, since there is plenty of other overhead to using a VM, so it probably just makes the VM use less costly.

What I meant by 'largely unproven' is that when truly runtime-only info (like machine load) is taken into account, it is hard to prove that using it to generate different machine code actually makes a difference, but IIRC I've seen claims to that effect. For the more reproducible kind of runtime info (like the model of x86 CPU), one static compiler that can compile binaries to make use of that is Intel's. It has a switch that will compile several sets of code and will run "the best" set depending on the chip.
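For the reproducible kind, the same multi-versioning is easy to sketch in a static language. A minimal illustration in D -- core.cpuid is today's druntime module, and the "fast" kernel here is just a stand-in for a hand-tuned path:

    import core.cpuid : sse2;

    alias Kernel = void function(float[] dst, const(float)[] a, const(float)[] b);

    // Plain path: runs on any chip.
    void addGeneric(float[] dst, const(float)[] a, const(float)[] b)
    {
        foreach (i; 0 .. dst.length)
            dst[i] = a[i] + b[i];
    }

    // Stand-in for a hand-tuned SSE2 path compiled into the same binary.
    void addFast(float[] dst, const(float)[] a, const(float)[] b)
    {
        dst[] = a[] + b[];
    }

    __gshared Kernel add = &addGeneric;

    shared static this()
    {
        if (sse2())         // inspect the chip once, at load time,
            add = &addFast; // and run "the best" compiled-in set
    }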
 David 

Oct 21 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Dave wrote:
 For the more reproducible kind of runtime info (like the model of x86 CPU), 
 one static compiler that can compile binaries to make use of that is 
 Intel. It has a switch that will compile several sets of code and will 
 run "the best" set depending on the chip.

I've been doing that since the 80's (generated code would have different paths for floating point depending on the hardware).
Oct 22 2007
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Dave wrote:
 
 "Robert Fraser" <fraserofthenight gmail.com> wrote in message 
 news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way someone has to develop either a VM or a compiler for each platform. APIs are really more a function of a library than a VM, IMO. Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't, with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. For example, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right into the binary.

One possibility is to do profiling while the application is running and do further optimizations based on that. The questions are, is the VM performance hit worse than the optimizations, and is there a compelling reason not to do those optimizations always?
Oct 22 2007
parent reply "Dave" <Dave_member pathlink.com> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:ffi6lh$1cn5$1 digitalmars.com...
 Dave wrote:
 "Robert Fraser" <fraserofthenight gmail.com> wrote in message 
 news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way someone has to develop either a VM or a compiler for each platform. APIs are really more a function of a library than a VM, IMO. Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't, with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. For example, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right into the binary.

One possibility is to do profiling while the application is running and do further optimizations based on that. The questions are, is the VM performance hit worse than the optimizations, and is there a compelling reason not to do those optimizations always?

That's what Sun Hotspot does, but I've rarely seen where the results are better than what a static compiler w/ the "-O2" switch can do and often seen where they are worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM. Not that all this really matters for *most* code however, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent on 20% of the code trying to make it run faster -- so it's not a moot point either.
Oct 22 2007
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Dave Wrote:

 One possibility is to do profiling while the application is running and do 
 further optimizations based on that. The questions are, is the VM 
 performance hit worse than the optimizations, and is there a compelling 
 reason not to do those optimizations always?

That's what Sun Hotspot does, but I've rarely seen where the results are better than what a static compiler w/ the "-O2" switch can do and often seen where they are worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM. Not that all this really matters for *most* code however, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent on 20% of the code trying to make it run faster -- so it's not a moot point either.

Right now in-flight optimization rarely makes code that runs faster, but it's a new technology. In 10 years, I'm guessing that most code will run equally fast under a VM as native, and another 10 and the VM will be superior. Especially as multi-core architectures become more popular, I think this will be a big issue (since the VM can automatically parallelize loops, etc.).
Oct 22 2007
next sibling parent "Dave" <Dave_member pathlink.com> writes:
"Robert Fraser" <fraserofthenight gmail.com> wrote in message 
news:ffj0pl$iuq$1 digitalmars.com...
 Dave Wrote:

 One possibility is to do profiling while the application is running and 
 do
 further optimizations based on that. The questions are, is the VM
 performance hit worse than the optimizations, and is there a compelling
 reason not to do those optimizations always?

That's what Sun Hotspot does, but I've rarely seen where the results are better than what a static compiler w/ the "-O2" switch can do and often seen where they are worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM. Not that all this really matters for *most* code however, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent on 20% of the code trying to make it run faster -- so it's not a moot point either.

Right now in-flight optimization rarely makes code that runs faster, but it's a new technology. In 10 years, I'm guessing that most code will run equally fast under a VM as native, and another 10 and the VM will be superior. Especially as multi-core architectures become more popular, I think this will be a big issue (since the VM can automatically parallelize loops, etc.).

I've (literally) heard that same thing for the last 10 years. Sun's Hotspot has been in constant development for about that long too, not to mention probably several times the amount of research $ spent on VM's rather than static compilers. Same w/ .NET, which started out life as Visual J++.

For static multi-core/multi-thread optimization there is OpenMP and also the Intel and AMD MT and math libs. Sun has had Java and multi-CPU machines in mind since day one, back when they were one of the few large vendors of those types of systems. Vendors like Sun are probably a decade ahead of commodity Intel machines when it comes to hardware and operating system architecture, and they're the ones developing the high-end VM's.

I think it's probably at the point now where just about any improvement made to VM's could be matched by the same improvement in static compilers and/or static libraries.
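On the multi-core point: a statically compiled language doesn't need a VM to run a loop on all cores either. A sketch using today's Phobos std.parallelism (the OpenMP analogue for D):

    import std.parallelism : parallel;
    import std.range : iota;

    // Scale an array on every core; compiled ahead of time, no VM involved.
    void scale(double[] a, double k)
    {
        foreach (i; parallel(iota(a.length)))
            a[i] *= k;
    }

    void main()
    {
        auto data = new double[1_000_000];
        data[] = 1.5;
        scale(data, 2.0);
        assert(data[0] == 3.0);
    }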
Oct 22 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).

2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.
Oct 22 2007
next sibling parent Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).

2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.

Deja moo! I've heard that bull before. <g>

I think the whole thing's a fallacy. The only real advantage I can see that a JIT compiler has is in being able to inline dynamically loaded functions. It may also have some minor advantages in cache efficiency. (In both cases, this is actually an advantage of JIT linking, not JIT compilation.)

In reality, speed optimization only matters inside the innermost loops, and you get the big speed gains by algorithm changes (even small ones). A JIT compiler would seem to have an inherent disadvantage whenever the bytecode contains less information than was created in the compiler's semantic analysis. This is certainly true of the Java/.NET bytecode, which is far too low level.
Oct 23 2007
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).

2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.

Maybe he meant that in 10 years, Java code would run as fast as C code does *now*. :P And that is certainly to be expected.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Oct 24 2007
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.
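As a small illustration of what "no implementation-defined behavior" buys: C leaves sizeof(int) up to the implementation, so the same source can quietly mean different things on different targets, while D pins the basic types down in the language itself:

    // Guaranteed by the D language spec on every conforming
    // implementation, so portable code can rely on it:
    static assert(int.sizeof == 4);
    static assert(long.sizeof == 8);

    // The equivalent C assertion may pass on one compiler and
    // fail on another, because C leaves the sizes open.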
Oct 21 2007
next sibling parent reply Roberto Mariottini <rmariottini mail.com> writes:
David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:

 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.

It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.

And not only that: if my product is compiled for Java-CLDC it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, including even those that today don't exist and will be made in the future.

Ciao
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Roberto Mariottini wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:

 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.

It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.

And not only that: if my product is compiled for Java-CLDC it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, including even those that today don't exist and will be made in the future.

Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.
Oct 22 2007
parent reply Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Roberto Mariottini wrote:

 And not only that: if my product is compiled for Java-CLDC it will 
 work on any cell phone that supports CLDC, based on any kind of 
 processor/architecture, including those I don't know of, including even 
 those that today don't exist and will be made in the future.

Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.

Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (i.e. royalties, sublicensing and the like).

We are still talking only of the implementation, not the language itself. I consider the Javascript environment as a high level VM, and I still think that a compiled Javascript would be unusable.

Even if technically the big difference between portable and non-portable resides in the language and the standard libraries, I think that not considering the hundreds of working VMs that exist today is narrow thinking. To force developers to distribute their sources excludes a big part of the software world as it is today.

Making D compilable for the Java VM today would make it immediately portable to tens of platforms (and hundreds of cell phone models), today.

Ciao
Oct 22 2007
next sibling parent reply "Dave" <Dave_member pathlink.com> writes:
"Roberto Mariottini" <rmariottini mail.com> wrote in message 
news:ffi95a$1ihb$1 digitalmars.com...
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), today.

Making a D to C translator for that might actually make more sense, given the design of D and that all of the platforms would likely have a C compiler available. Then all that would be missing would be the ease of distributing a single set of bytecode. Then again, binaries couldn't be reverse engineered as easily as bytecode either. In any case, GDC may have quite a few of those chips covered before either a D bytecode compiler or C2D was done <g>.

Do the standard Java GUI libraries work the same for all cell phones, or in general does each cell phone vendor have their own specialized library? Walter had a great point earlier as well -- is Java really "write once, run anywhere", especially where GUIs are concerned? I recall a lot of complaints where some things tended to work differently depending on the VM / platform but maybe those cases are rare nowadays.
 Ciao 

Oct 22 2007
parent Roberto Mariottini <rmariottini mail.com> writes:
Dave wrote:
 
 "Roberto Mariottini" <rmariottini mail.com> wrote in message 
 news:ffi95a$1ihb$1 digitalmars.com...
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), 
 today.

Making a D to C translator for that might actually make more sense, given the design of D and that all of the platforms would likely have a C compiler available. Then all that would be missing would be the ease of distributing a single set of bytecode. Then again, binaries couldn't be reverse engineered as easily as bytecode either. In any case, GDC may have quite a few of those chips covered before either a D bytecode compiler or C2D was done <g>.

I know of no cell phone with a C compiler today.
 Do the standard Java GUI libraries work the same for all cell phones, or 
 in general does each cell phone vendor have their own specialized 
 library?

MIDP and CDC are strict standards to which cell phone producers adhere. There are some vendor extensions, but they have had little success: the aim of Java ME programming is to make your application/game work on any cell phone, so it is in the developer's interest to strictly apply the standard.
 Walter had a great point earlier as well -- Is Java really 
 "write once, run anywhere" especially where GUI's are concerned? I 
 recall a lot of complaints where some things tended to work differently 
 depending on the VM / platform but maybe those cases are rare nowadays.

Java is really "write once, run anywhere"; I've never found a GUI portability problem. The problems are the programmers who don't write portable code (this is independent of Java: you can write non-portable code in any language).

Ciao
Oct 23 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Roberto Mariottini wrote:
 Walter Bright wrote:
 Javascript is distributed in source code, and executes on a variety of 
 machines. A VM is not necessary to achieve portability to machines 
 unknown. What is necessary is a portable language design.

Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (i.e. royalties, sublicensing and the like).

Distributing Java class files is not secure, as good decompilers exist for them. Might as well distribute source.
 We are still talking only of the implementation, not the language 
 itself. I consider the Javascript environment as a high level VM, and I 
 still think that a compiled Javascript would be unusable.

It's true that Javascript uses a VM, but it doesn't use a standardized VM for which one distributes precompiled binaries too. Javascript is always compiled/interpreted directly from source code, and source code is how it's distributed.
 Even if technically the big difference between portable and non-portable 
 resides in the language and the standard libraries, I think that not 
 considering the hundreds of working VMs that exist today is 
 narrow-thinking.

Considering that a C compiler exists for a far broader range of devices than VMs do, all that is needed is for the language to be a) popular, or b) backed by huge resources from a company like Sun to finance development of all those VMs. Sun could just as easily have provided a generic back end & library.
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.

Not the Java world - decompilers are common and effective.
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), today.

The Java VM is insufficiently powerful to use as a back end for D. It can't even do C.
Oct 22 2007
next sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright wrote:
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.

Not the Java world - decompilers are common and effective.

You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Manzana wrote:
 Walter Bright wrote:
 To force developers to distribute their sources excludes a big part 
 of the software world as it is today.

Not the Java world - decompilers are common and effective.

You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)

ProGuard just renames the identifiers. A source code obfuscator can do the same thing.
Oct 22 2007
parent Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Ary Manzana wrote:
 Walter Bright wrote:
 To force developers to distribute their sources excludes a big part 
 of the software world as it is today.

Not the Java world - decompilers are common and effective.

You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)

ProGuard just renames the identifiers. A source code obfuscator can do the same thing.

Well, ProGuard is a bit more advanced, but obviously a source code obfuscator can _always_ do more. A source code obfuscator, by the way, is much more complex than ProGuard. Ciao
Oct 23 2007
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright Wrote:

 Considering that a C compiler exists for a far broader range of devices 
 than VMs do, all the motivation that is needed is the language needs to 
 be a) popular or b) have huge resources from a company like Sun to 
 finance development of all those VMs. Sun could just as easily have 
 provided a generic back end & library.

I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java. This has the added advantage of security and reliability, since there's no way an errant application can break the entire device, and allows RIM to change the instruction set architecture at any time. Of course, that distribute-binaries-as-source thing would work, too, but imagine sticking a whole lexer/parser/semantic/code generator on a mobile device... that processing power is better spent actually executing the application.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Walter Bright Wrote:
 
 Considering that a C compiler exists for a far broader range of
 devices than VMs do, all the motivation that is needed is the
 language needs to be a) popular or b) have huge resources from a
 company like Sun to finance development of all those VMs. Sun could
 just as easily have provided a generic back end & library.

I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java.

Why couldn't RIM provide a back end as easily as a Java VM? Like I said, a simple back end could be as easy as:

    push operand
    push operand
    call ADD
    pop result

Notice how close that looks to Java bytecode! But it'll still execute much faster.
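As a sketch of how little such a back end needs to be, here is that translation written out in D -- toy opcodes, with assembler text standing in for real code emission:

    import std.conv : to;

    enum Op : ubyte { push, add, print, halt }

    // One pass, ahead of time: each bytecode op becomes a canned native
    // sequence. No interpreter loop is left at run time.
    string emit(const(ubyte)[] code)
    {
        string s;
        for (size_t pc = 0; pc < code.length; )
        {
            final switch (cast(Op) code[pc++])
            {
            case Op.push:
                s ~= "    push " ~ code[pc++].to!string ~ "\n";
                break;
            case Op.add:
                s ~= "    call ADD\n    pop result\n";
                break;
            case Op.print:
                s ~= "    call PRINT\n";
                break;
            case Op.halt:
                s ~= "    ret\n";
                break;
            }
        }
        return s;
    }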
 This has the added advantage of security
 and reliability, since there's no way an errant application can break
 the entire device, and allows RIM to change the instruction set
 architecture at any time.

If the language has no pointers, and RIM provides the compiler for it, that is just as secure.
 Of course, that distribute-binaries-as-source thing would work, too,
 but imagine sticking a whole lexer/parser/semantic/code generator on
 a mobile device... that processing power is better spent actually
 executing the application.

It's about 500K of rom needed. And the code will run several times faster, even with a simplistic code generator, which will make up for it.
Oct 22 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:

 Considering that a C compiler exists for a far broader range of
 devices than VMs do, all the motivation that is needed is the
 language needs to be a) popular or b) have huge resources from a
 company like Sun to finance development of all those VMs. Sun could
 just as easily have provided a generic back end & library.

 I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java.

Why couldn't RIM provide a back end as easily as a Java VM? Like I said, a simple back end could be as easy as:

    push operand
    push operand
    call ADD
    pop result

Notice how close that looks to Java bytecode! But it'll still execute much faster.

The only advantage I've been able to think of for a VM is language interoperability -- the advantage of a VM over just an established calling convention and such being that it has better and more "native" support for garbage collected languages. This was the point of .NET so far as I'm aware (i.e. it was a COM replacement).

Sean
Oct 22 2007
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times 
 faster, even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.

It is a native compiler if it directly executes Java bytecodes!
Oct 22 2007
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.

It doesn't, but I think it might be stretching my NDA to explain how it actually works.
Oct 22 2007
parent Robert Fraser <fraserofthenight gmail.com> writes:
David Brown wrote:
 On Tue, Oct 23, 2007 at 12:43:08AM -0400, Robert Fraser wrote:
 David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:

 It's about 500K of rom needed. And the code will run several times faster, even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.

It doesn't, but I think it might be stretching my NDA to explain how it actually works.

Ok, vast oversimplification, but if it is ARM based, it is probably Jazelle based, which, depending on implementation, can either directly execute bytecodes or has a lot of support for JIT. See <http://www.arm.com/products/esd/jazelle_home.html>, which has plenty of non-NDA stuff people can read. David

BlackBerry doesn't use ARM... But this is getting quite off-topic.
Oct 23 2007
prev sibling parent reply Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Roberto Mariottini wrote:
 Walter Bright wrote:
 Javascript is distributed in source code, and executes on a variety 
 of machines. A VM is not necessary to achieve portability to machines 
 unknown. What is necessary is a portable language design.

Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (i.e. royalties, sublicensing and the like).

Distributing Java class files is not secure, as good decompilers exist for them. Might as well distribute source.

This is something lawyers don't know. I have seen a couple of non-source Java library licenses.
 We are still talking only of the implementation, not the language 
 itself. I consider the Javascript environment as a high level VM, and 
 I still think that a compiled Javascript would be unusable.

It's true that Javascript uses a VM, but it doesn't use a standardized VM for which one distributes precompiled binaries too. Javascript is always compiled/interpreted directly from source code, and source code is how its distributed.

That's why I've said "High Level" VM. I see Javascript and Java as two equivalent VMs (+ standard libraries).
 Even if technically the big difference between portable and 
 non-portable resides on the language and the standard libraries, I 
 think that not considering the hundreds of working VMs that exist 
 today is narrow-thinking.

Considering that a C compiler exists for a far broader range of devices than VMs do, all the motivation that is needed is the language needs to be a) popular or b) have huge resources from a company like Sun to finance development of all those VMs. Sun could just as easily have provided a generic back end & library.

I've never said that VMs are better than C. I'm saying that VMs are there today, and they work, today. They work the "Compile-Once-Run-Everywhere" way.
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.

Not the Java world - decompilers are common and effective.

Don't say it to your attorney.
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), 
 today.

The Java VM is insufficiently powerful to use as a back end for D. It can't even do C.

I thought that the Java VM was Turing-complete :-) I'm not an expert on compilers and VMs, so I believe you. It's a pity that I can't use D on those cell phones :-(

Ciao
Oct 23 2007
parent reply Reiner Pope <some address.com> writes:
Roberto Mariottini wrote:
 Walter Bright wrote:
 The Java VM is insufficiently powerful to use as a back end for D. It 
 can't even do C.

I thought that the Java VM was Turing-complete :-) I'm not an expert on compilers and VMs, so I believe you. It's a pity that I can't use D on those cell phones :-( Ciao

D can segfault; Java can't. Thus D is more powerful. :-) -- Reiner
Oct 23 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Reiner Pope wrote:
 
 D can segfault; Java can't. Thus D is more powerful. :-)
 
    -- Reiner

Lol nice! I'm gonna quote you on that one :P -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Oct 24 2007
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 It's still a VM advantage.  It helps the model where there are many
 developers who only distribute binaries.  If they are distributing for a
 VM, they only have to distribute a single binary.  Otherwise, they still
 would have to recompile for every possible target.

With a portable language, it is not necessary to distribute binaries. You can distribute the *source* code! Then, the user can just recompile it on the fly (this can be automated so the user never has to actually invoke the compiler). Just like how Javascript is distributed as source.
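A minimal sketch of that automation, assuming only a dmd binary on the user's machine (D's own rdmd tool works along these lines):

    import std.file : exists, timeLastModified;
    import std.process : execute;

    // Rebuild the cached binary only when the shipped source is newer,
    // then run it; the user never invokes the compiler by hand.
    void runFromSource(string src, string exe)
    {
        if (!exists(exe) || timeLastModified(src) > timeLastModified(exe))
        {
            auto r = execute(["dmd", "-of" ~ exe, src]);
            assert(r.status == 0, r.output); // compilation failed
        }
        execute([exe]);
    }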
Oct 22 2007
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter Bright wrote:

 It's still a VM advantage.  It helps the model where there are many
 developers who only distribute binaries.  If they are distributing for a
 VM, they only have to distribute a single binary.  Otherwise, they still
 would have to recompile for every possible target.

With a portable language, it is not necessary to distribute binaries. You can distribute the *source* code! Then, the user can just recompile it on the fly (this can be automated so the user never has to actually invoke the compiler). Just like how Javascript is distributed as source.

Too bad that D isn't such a language then? One "version" for each platform, and no autoconf or other helpers to cope with differences... As much as I do like D, the C language is *much* more portable - at least between the different GNU platforms (i.e. including MinGW too).

--anders
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Anders F Björklund wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.

Too bad that D isn't such a language then ? One "version" for each platform, and no autoconf or other helpers to cope with differences...

These are not problems with the language, but with the relative lack of resources applied to the dev tools.
 As much as I do like D, the C language is *much* more portable - at
 least between the different GNU platforms (i.e. including MinGW too).

If D had its own VM, the same issue would exist, because you'd have to have staff to port the VM to all those platforms and debug them.
Oct 22 2007
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter Bright wrote:

 Too bad that D isn't such a language then ? One "version" for each 
 platform, and no autoconf or other helpers to cope with differences...

These are not problems with language, but with the relative lack of resources applied to the dev tools.

Agreed, not in the (extended) implementation of the language itself - just in the language specification and standard library. Same result. I just wish there had been a better solution to the linux/Unix/Posix versioning.

--anders
Oct 22 2007
prev sibling parent reply Joel Lucsy <jjlucsy gmail.com> writes:
Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just recompile 
 it on the fly (this can be automated so the user never has to actually 
 invoke the compiler). Just like how Javascript is distributed as source.

.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT.

-- 
Joel Lucsy
"The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
Oct 22 2007
parent reply Radu <radu.racariu void.space> writes:
Joel Lucsy wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.

.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT.

Any system that translates an abstract instruction set to a concrete one is a virtual machine: http://en.wikipedia.org/wiki/Virtual_machine (Process virtual machine). Leaving behind the MS propaganda, the implementation of such a system can be done as an interpreter, a JIT, a combination of both plus a runtime profiler, or as AOT + JIT (+ interpreter). Currently MS's .Net uses a JIT and sometimes an AOT (ngen) implementation (they work together), while the Sun Java implementation uses a combination of an interpreter, a JIT and a runtime profiler. Java has a larger set of implementations, including AOT + JIT (the JET compiler), AOT only or AOT + interpreter (GCJ), interpreter (SableVM), and JIT (Cacao).

*AOT: http://en.wikipedia.org/wiki/AOT_compiler
*JIT: http://en.wikipedia.org/wiki/Just-in-time_compilation
Oct 22 2007
parent reply Christopher Wright <dhasenan gmail.com> writes:
Radu wrote:
 Joel Lucsy wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.

.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT.

Any system that translates an abstract instruction set to a concrete one is a virtual machine: http://en.wikipedia.org/wiki/Virtual_machine (Process virtual machine).

<nitpick> Rather, it is a virtual machine if it executes abstract instructions, or interprets them for immediate execution. Compilers aren't VMs. </nitpick>
Oct 22 2007
parent reply Joel Lucsy <jjlucsy gmail.com> writes:
Christopher Wright wrote:
 <nitpick>
 Rather, it is a virtual machine if it executes abstract instructions, or 
 interprets them for immediate execution. Compilers aren't VMs.
 </nitpick>

<grumbling>
Bah, in that case DMD is an AOT VM, as it compiles abstract instructions (the D language). Maybe it's just me, but I really don't see the distinction between where it gets compiled. Either you do it before distribution, or, like the .Net runtime, it does it on the client side. Without an interpreter, the compiled code is run directly. The .Net runtime from MS talks directly to the Win32 DLLs. The CAS will block certain calls, thereby looking like it's a VM, but really it's not. There is no "virtualization" going on. And if you think there is, I task you to show me where.
</grumbling>

-- 
Joel Lucsy
"The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
Oct 22 2007
parent Radu <radu.racariu void.space> writes:
Joel Lucsy wrote:
 Christopher Wright wrote:
 <nitpick>
 Rather, it is a virtual machine if it executes abstract instructions, 
 or interprets them for immediate execution. Compilers aren't VMs.
 </nitpick>

 <grumbling> Bah, in that case DMD is an AOT VM, as it compiles abstract instructions (the D language). Maybe it's just me, but I really don't see the distinction between where it gets compiled. Either you do it before distribution, or, like the .Net runtime, it does it on the client side. Without an interpreter, the compiled code is run directly. The .Net runtime from MS talks directly to the Win32 DLLs. The CAS will block certain calls, thereby looking like it's a VM, but really it's not. There is no "virtualization" going on. And if you think there is, I task you to show me where. </grumbling>

A compiler translates code from a human readable form into a machine readable one; you really can't blur that line. What a JIT VM does is translate one form of machine readable code into another concrete one in "realtime" at the point of execution. And JIT is an implementation detail of a Process Virtual Machine; the virtualization is placed in that JIT and runtime, as it verifies CIL and parses/compiles/optimizes it into x86 opcodes and applies different policies on how that code runs. Any process VM talks directly with the host OS and permits access to/from the controlled execution environment to the host one (with the required security checks); hell, even the machine VMs (VMware, Parallels) do that now with network shares, drag & drop and unity.

If you really want to pretend that .Net is some kind of a compiler back end, then you must admit that C compilers are also JITs in the case of how Ubuntu does its application distribution. You have packages with abstract code (C, C++ mostly), an AOT VM (GCC) and there you go, and it's one hell of an AOT VM :)
Oct 23 2007
prev sibling next sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.

I'm sure there are a lot of advantages, but here's one I can think of ...

That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.

I'm not sure what the reason is, but programs in languages running on VMs (or otherwise interpreted) seem to be much better at introspection. This isn't just Java. LISP had to work quite hard to become a compiled language, but interpreters were available quickly, and the ability to do introspection easily during interpretation was a large chunk of the reason.

N.B.: This doesn't mean that compiled languages can't introspect. After all, if you analyze everything down to assembler, it's all the same instructions. But it appears to be a lot more difficult.
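For what it's worth, a compiled language can also move much of that introspection to compile time. A small D illustration (field names and types recovered with no runtime machinery at all):

    struct Point { int x; double y; }

    // The compiler itself can enumerate the members...
    static assert([__traits(allMembers, Point)] == ["x", "y"]);

    // ...and their types, for any type handed to this template.
    void describe(T)()
    {
        import std.stdio : writefln;
        foreach (name; __traits(allMembers, T))
            writefln("%s.%s : %s", T.stringof, name,
                     typeof(__traits(getMember, T, name)).stringof);
    }

    void main()
    {
        describe!Point(); // prints: Point.x : int, Point.y : double
    }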
Oct 22 2007
prev sibling next sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 07:46:52PM -0700, Walter Bright wrote:
 David Brown wrote:
 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 It's about 500K of rom needed. And the code will run several times 
 faster, even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.

It is a native compiler if it directly executes Java bytecodes!

But it doesn't have to. Some phones will execute them directly, some will JIT or even simulate them. David
Oct 22 2007
prev sibling parent David Brown <dlang davidb.org> writes:
On Tue, Oct 23, 2007 at 12:43:08AM -0400, Robert Fraser wrote:
David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.

It doesn't, but I think it might be stretching my NDA to explain how it actually works.

Ok, vast oversimplification, but if it is ARM based, it is probably Jazelle based, which, depending on implementation, can either directly execute bytecodes or has a lot of support for JIT. See <http://www.arm.com/products/esd/jazelle_home.html>, which has plenty of non-NDA stuff people can read. David
Oct 22 2007
prev sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:

 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.

Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes. David
Oct 22 2007
prev sibling next sibling parent reply David Brown <dlang davidb.org> writes:
On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.

It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target. Dave
Oct 21 2007
parent reply Michael P <dontspam me.com> writes:
You say it's more portable but you need a VM and often a compiler instead of just having a compiler.

Distributing a single binary could be achieved by encrypting the source code and sending it to your client. Then you could have a compiler that knows the key to it (or that the compiler gets the key from a server) so it takes the code and decrypts it and at the same time compiles it.

The problem with today's VMs is that they are slow (and it's not just a myth, try it for yourself!). The argument that people will not notice the difference is not true. Just starting a VM often takes far too long. That they often bring a huge standard API is very good, but that is because it's necessary for people to start programming in it (people are lazy).

VMs don't solve anything IMHO; it's just easier to use them than not to (greater security and control over what is happening at runtime and so on).

I did program some Pascal on a PDA itself (a Palm Pilot) and it worked perfectly. I didn't have to learn new tricks or anything like that (unlike Symbian's C++ API).

As a side note, I'm taking a C++ course at my university and the lectures go something like this: bla, bla, bla, undefined behaviour, bla, bla, bla, undefined, bla, bla, bla, undefined. So code can work in one compiler but not in another. In fact our final exam will be about avoiding pitfalls, not how to "code" in C++. This is a good example of a language that is painful to port to different platforms.

David Brown Wrote:

 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.

I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.

That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.

It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target. Dave

Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or that the compiler gets the key from a
 server) so it takes the code and decrypts it and at the same time
 compiles it.

I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.
Oct 22 2007
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 22 Oct 2007 11:27:47 -0700, Walter Bright wrote:

 Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or that the compiler gets the key from a
 server), so it takes the code, decrypts it, and at the same time
 compiles it.

I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.

I work daily with a language called Progress. It is a 4GL-style language used primarily with large databases. Anyhow, it 'compiles' to a type of p-Code and we distribute our apps using its encrypted source facility. The run-time application server executes the p-Code in a VM. We have been doing this since 1994. It is very fast and applications are transportable to other architectures without changing the source code. I've moved applications from System V (Olivetti) to VAX-VMS to Redhat without having to even recompile. It is practical to encrypt source code. VMs can be bloody fast. One can distribute portable applications without compromising intellectual property. I regard your point of view as blinkered. It seems to me that your opinion could be paraphrased with "if we had a perfect world we wouldn't have to solve problems". There is a role for VM languages and there is a role for native-code languages. It is not an either/or situation. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Oct 22 2007
next sibling parent Jascha Wetzel <firstname mainia.de> writes:
Derek Parnell wrote:
 I regard your point of view as blinkered. It seems to me that your opinion
 could be paraphrased with "if we had a perfect world we wouldn't have to
 solve problems". There is a role for VM languages and there is a role for
 native-code languages. It is not an either/or situation.

True, of course - it's all a question of choosing the right tool for the task. IMHO, the problem is that interpreted and VM languages have become so popular that they are often deployed in the wrong places. People start writing anything in the language they like or know best, regardless of whether it's the right tool for the job, and half-baked arguments are used to justify that. It has become necessary to point out what native tools can do at any given opportunity. This especially includes rectifying several myths about VMs.
Oct 23 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Mon, 22 Oct 2007 11:27:47 -0700, Walter Bright wrote:
 I once went through the design of encrypting source, and concluded it 
 wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear 
 that VM bytecode does a lousy job of obfuscating source - good Java byte 
 code decompilers exist.

 You might as well distribute source - after running it through a comment 
 stripper, of course.

I work daily with a language called Progress. It is a 4GL-style language used primarily with large databases. Anyhow, it 'compiles' to a type of p-Code and we distribute our apps using its encrypted source facility. The run-time application server executes the p-Code in a VM. We have been doing this since 1994. It is very fast and applications are transportable to other architectures without changing the source code.

That's achievable with a language that is defined with portable semantics in mind. A VM doesn't contribute to it.
 I've moved
 applications from System V (Olivetti) to VAX-VMS to Redhat without having
 to even recompile.

Since you didn't have to change the source code, either, it doesn't make much difference if recompilation was necessary or not.
 It is practical to encrypt source code.

Since the people you are trying to hide it from must have the decryption keys in order to use it, it is inherently insecure. All it takes is one person with motivation to reverse engineer a crack, and then *all* of the source is available to *everyone*. It happens with DRM stuff all the time.
 VMs can be bloody fast.

They can be fast enough, but they'll never be faster than native code.
 One can distribute portable applications without compromising intellectual
 property.

All it takes is one motivated hacker, and *all* of your stuff then is compromised.
 I regard your point of view as blinkered. It seems to me that your opinion
 could be paraphrased with "if we had a perfect world we wouldn't have to
 solve problems". There is a role for VM languages and there is a role for
 native-code languages. It is not an either/or situation.

I've implemented both VMs and native compilers, so I know intimately how they work. I don't believe that the claims made for VMs are justified. BTW, because of the way the Java VM bytecodes are defined, they are particularly easy to decompile.
Oct 27 2007
prev sibling next sibling parent David Brown <dlang davidb.org> writes:
On Sun, Oct 21, 2007 at 11:21:37PM -0500, Dave wrote:

 Runtime reflection aside, I can't think of anything a VM can do that a 
 static compiler couldn't with the possible (but largely unproven) exception 
 of sometimes generating better code because of access to runtime info.

I believe most already do this kind of analysis. I'm not sure it helps, since there is plenty of other overhead to using a VM, so it probably just makes the VM use less costly. David
Oct 21 2007
prev sibling next sibling parent Jussi Jumppanen <jussij zeusedit.com> writes:
Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

When I said everywhere, I guess I was thinking more in terms of software. For example, I've heard stories that the latest Microsoft SQL Server lets you embed C# code directly into the SQL stored procedure code. Also, I suspect it won't be long before C# takes over from VBA and becomes the embedded scripting language of choice for all Microsoft software. While VBA was really nothing more than a toy language, C# adds far more power as it gives the scripts access to the .NET framework.
Oct 21 2007
prev sibling next sibling parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM is.

I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario. 2. As far as the oldest VM I know designed for a specific language to be executed in is concerned: "UCSD p-System began around 1977 as the idea of UCSD's Kenneth Bowles, who believed that the number of new computing platforms coming out at the time would make it difficult for new programming languages to gain acceptance." (or that's what Wikipedia says). 3. From hxxp://en.wikipedia.org/wiki/P-code_machine: "a) For porting purposes. It is much easier to write a small (compared to the size of the compiler) p-code interpreter for a new machine, as opposed to changing a compiler to generate native code for the same machine. b) For quickly getting a compiler up and running. Generating machine code is one of the more complicated parts of writing a compiler. By comparison, generating p-code is much easier. c) Size constraints. Since p-code is based on an ideal virtual machine, many times the resulting p-code is much smaller than the same program translated to machine code. d) For debugging purposes. Since p-code is interpreted, the interpreter can apply many additional runtime checks that are harder to implement with native code."
Oct 22 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
0ffh wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.

I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario.

This has been done other ways - see gcc, where all the language front ends share a common optimizer/backend. David Friedman used that to implement gdc. LLVM is another project aiming to do the same thing.
 2. As far as the oldest VM I know designed for a specific language to be 
 executed in is concerned: "UCSD p-System began around 1977 as the idea 
 of UCSD's Kenneth Bowles, who believed that the number of new computing 
 platforms coming out at the time would make it difficult for new 
 programming languages to gain acceptance." (or that's what Wikipedia says).

I know about the P-system. It was ahead of its time.
 3. From hxxp://en.wikipedia.org/wiki/P-code_machine:
 "a) For porting purposes. It is much easier to write a small (compared 
 to the size of the compiler) p-code interpreter for a new machine, as 
 opposed to changing a compiler to generate native code for the same 
 machine.

Interpreted VMs tend to suck. The good ones include JITs, which are full blown compiler optimizers and back ends. Even a brain-dead simple code generator will run 10x faster than an interpreter.
  b) For quickly getting a compiler up and running. Generating machine 
 code is one of the more complicated parts of writing a compiler. By 
 comparison, generating p-code is much easier.

Generating *good* code is hard. Most CPU instruction sets are actually not much more complex than p-code, if you're not trying to generate optimal code. You can do RPN stack machine code generation for the x86 very simply, for example. Heck, you can generate code that is a stream of *function calls* for each operation (often called 'threaded code').
  c) Size constraints. Since p-code is based on an ideal virtual machine, 
 many times the resulting p-code is much smaller than the same program 
 translated to machine code.

P-code does tend to be smaller, that's true. Except that the VM's bloat tends to way overwhelm any size savings in the executable code.
  d) For debugging purposes. Since p-code is interpreted, the interpreter 
 can apply many additional runtime checks that are harder to implement 
 with native code."

That's a crock.
Oct 22 2007
parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 [...] You can do RPN stack machine code generation for the x86 very
 simply, for example.

Yup, I've done my Forth. An experience I found very instructive. =)
 Heck, you can generate code that is a stream  of *function calls* for
 each operation (often called 'threaded code').

Heck, the metaprogramming capabilities are a dream! It's just that *tiny* bit too low-level for my tastes. Regards, Frank
Oct 22 2007
parent 0ffh <spam frankhirsch.net> writes:
0ffh wrote about Forth (sorry for the self-quote!):
 It's just that *tiny* bit too low-level for my tastes.

Actually, this was my latest try at solving this: http://wiki.dprogramming.com/uploads/Drat/grace2.zip Regards, Frank
Oct 22 2007
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
0ffh wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.

I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario. 2. As far as the oldest VM I know designed for a specific language to be executed in is concerned: "UCSD p-System began around 1977 as the idea of UCSD's Kenneth Bowles, who believed that the number of new computing platforms coming out at the time would make it difficult for new programming languages to gain acceptance." (or that's what Wikipedia says). 3. From hxxp://en.wikipedia.org/wiki/P-code_machine: "a) For porting purposes. It is much easier to write a small (compared to the size of the compiler) p-code interpreter for a new machine, as opposed to changing a compiler to generate native code for the same machine. b) For quickly getting a compiler up and running. Generating machine code is one of the more complicated parts of writing a compiler. By comparison, generating p-code is much easier. c) Size constraints. Since p-code is based on an ideal virtual machine, many times the resulting p-code is much smaller than the same program translated to machine code. d) For debugging purposes. Since p-code is interpreted, the interpreter can apply many additional runtime checks that are harder to implement with native code."

Aside from the benefits of dubious reality, why not just emit LLVM code? It simplifies your backend at the expense of a longer compile, but still generates native code (for Intel-based, PowerPC, ARM, Thumb, SPARC, and Alpha processors, anyway). And if you really want it, there's a JIT compiler for those.
Oct 22 2007
parent 0ffh <spam frankhirsch.net> writes:
Christopher Wright wrote:
 0ffh wrote:
 1. They are a way to separate the compiler back-end from the rest of 
 the compiler. Clearly you wouldn't have to implement the VM in this 
 scenario.
 [...]

It simplifies your backend at the expense of a longer compile, but still generates native code (for Intel-based, PowerPC, ARM, Thumb, SPARC, and Alpha processors, anyway). And if you really want it, there's a JIT compiler for those.

I'd think that's covered by point 1. Regards, Frank
Oct 22 2007
prev sibling next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

I thought the .NET platform was developed with the intent to replace COM? And, by extension, to complement and/or replace the C way of cross-talking between languages for application development.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Lutger wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.

I thought the .NET platform was developed with the intent to replace COM?

I don't know what MS's reasons were, but it seems strange to replace COM with something inaccessible from C++ (the main language used to interface with COM).
 And by extension, complementing and / or replacing the C way of 
 cross-talking between languages for application development.

Except that .net cannot talk to C or C++ code, which are the usual languages for applications. All languages need in order to interoperate is a standard calling convention, not a wholly different environment.
Oct 22 2007
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Lutger wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM 
 is.

I thought the .NET platform was developed with the intent to replace COM?

I don't know what MS's reasons were

I think the reason was pretty obvious... replace Java :) Sun slapped MS's hand when it tried to differentiate Java, so MS wanted to not depend on Java anymore. As far as COM, I don't think they ever wanted to replace it because that would make all the millions of lines of code already written with COM useless. MS's main goal in anything they do is, and always has been, backwards compatibility. Why do you think Windows is so damn bloated compared to Linux? Because it all has to be able to still run Windows 3.1 crap.
but it seems strange to replace COM with something inaccessible from C++ 
(the main language used to interface with COM).

.net is accessible from C++. It's called C++.net :) With .net, you can implement "managed" or garbage-collected C++ classes, and from there you can call either normal C/C++ classes or other .net language-based classes/functions. I usually do this because it's much easier (in my mind) than importing C/C++ stuff into C#. IMO, D does a much better job of importing C functions, and is much more understandable. As far as interfacing with C++, .net has D beat, because it can use the classes directly from a C++.net class. But I think, as you do, that this is more trouble than it's worth.
 And by extension, complementing and / or replacing the C way of 
 cross-talking between languages for application development.

Except that .net cannot talk to C or C++ code, which are the usual languages for applications. All languages need to interoperate are a standard calling convention, not a wholly different environment.

The whole point of .net is to allow ANY language to generate the SAME bytecode. For example, you can have a C++.net class, calling a VB.net class, which calls a COBOL.net class (yes, COBOL.net exists, I can't believe it either). It's actually a really neat idea, but in practice, you generally only use one language anyways, and interfacing with old code usually means you have to write wrappers or reimplement the code, so it's not like you can magically merge legacy stuff with .net without a ton of work. So yes, you can talk to C or C++ using .net, but it's not always pretty. -Steve
Oct 22 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 The whole point of .net is to allow ANY language to generate the SAME 
 bytecode.

That's a means to an end. But there are other means to the same end. A VM is a very expensive and inefficient way to get there, and even then the results run slowly (relative to native code).
Oct 22 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
 <thecybershadow gmail.com> wrote:
 
 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 Except that .net cannot talk to C or C++ code, which are the
 usual languages for applications.

.NET (at least C#) can call native code in DLLs. Unlike Java, you can also define the exact layout of structures, and thus share data structures with native code.

Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows).

Thanks for the reference. It says that the parameters must go through "marshalling", which means they go through a translation layer.
Oct 22 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:27:47 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or that the compiler gets the key from a
 server), so it takes the code, decrypts it, and at the same time
 compiles it.

I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.

Source code obfuscators for most interpreted languages exist, as well as obfuscators for bytecode (VM languages) which make decompilation very hard. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Except that .net cannot talk to C or C++ code, which are the usual languages
for applications.

.NET (at least C#) can call native code in DLLs. Unlike Java, you can also define the exact layout of structures, and thus share data structures with native code. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
<thecybershadow gmail.com> wrote:

 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Except that .net cannot talk to C or C++ code, which are the usual languages
for applications.

.NET (at least C#) can call native code in DLLs. Unlike Java, you can also define the exact layout of structures, and thus share data structures with native code.

Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows). -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 I've never been able to discover what the fundamental advantage of a VM is.

Some of the things which are only possible, or a good deal easier to use/implement with VMs: 1) code generation - used very seldom, it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?) 2) VMs make modularity much easier in that you don't have to recompile all modules ("plugins") on all platforms, which is often not possible with projects whose core supports many platforms, but most developers don't have access to all supported platforms. 3) very flexible reflection - like being able to derive from classes in other modules. Though this can be done in native languages by including enough metadata, most compiled languages don't. 4) compilation is not a simple process for most computer users out there. If you want to provide a simple, cross-platform end-user application, it's much easier to use a VM - the VM vendor has taken care of porting the VM to all those platforms, and you don't need to bother maintaining source code portability, make/autoconf/etc. files, and compilation instructions (dependencies, etc.) (ok, most computer users out there use Windows, and many non-Windows users know how to compile a program, but the point stands :P) 5) it's much easier to provide security/isolation for VM languages. Although native code isolation can be done using hardware, it's complicated and inefficient. This allows VM languages to be safely embedded in places such as web pages (Flash for ActionScript, applets for Java, Silverlight for .NET). -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 I've never been able to discover what the fundamental advantage of
 a VM is.

Some of the things which are only possible, or a good deal easier to use/implement with VMs: 1) code generation - used very seldom, it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?)

Are you referring to a JIT? JITs aren't easier to implement than a compiler back end.
 2) VMs make modularity much easier in that you don't have to
 recompile all modules ("plugins") on all platforms, which is often
 not possible with projects whose core supports many platforms, but
 most developers don't have access to all supported platforms.

Problem is solved by defining your ".class" file to be compressed source code. Dealing with back end bugs on platform X is no different from dealing with VM bugs on platform X. Java is infamous for "compile once, debug everywhere".
 3) very flexible reflection - like being able to derive from classes
 in other modules. Though this can be done in native languages by
 including enough metadata, most compiled languages don't.

I think this is possible with compiled languages, but nobody has done it yet.
 4) compilation is not a simple process for most computer users out
 there.

Since the VM includes a JIT (a compiler) and runs it transparently to the user, there's no reason that compiler couldn't compile source code into native code transparently to the user.
 If you want to provide a simple, cross-platform end-user
 application, it's much easier to use a VM - the VM vendor has taken
 care of porting the VM to all those platforms,

And the language vendor would have taken care of getting a compiler for those platforms!
 and you don't need to
 bother maintaining source code portability, make/autoconf/etc. files,
 and compilation instructions (dependencies, etc.) (ok, most computer
 users out there use Windows, and many non-Windows users know how to
 compile a program, but the point stands :P)

This can be automated as well. BTW, non-programmers run compilers all the time - after all, how else could they run a JavaScript app?
 5) it's much easier to provide security/isolation for VM languages.
 Although native code isolation can be done using hardware, it's
 complicated and inefficient.

The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).

It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.
Oct 22 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 Indeed. In fact, most of the issues I mentioned can be solved by
 distributing source code instead of intermediary bytecode. Actually,
 if you compare the Java/.NET VM with a hypothetical system which
 compiles the source code and runs the binary on the fly, the
 difference is pretty low - it's just that bytecode is one level lower
 than source code (and source code parsing/lexing would slow down
 compilation to native code by some degree).

To some degree, yes. You can address this, though, by pre-tokenizing the source.
 I don't think it would be hard to turn D into a VM just like .NET -
 just split the front-end from the back-end, make the front-end
 serialize the AST and distribute a back-end that reads ASTs, "JITs"
 them, links to Phobos/other libraries and runs them. You could even
 scan the AST for unsafe code (pointers, some types of casts), add
 that with forced bounds checking, and you have a "safe" D
 VM/compiler. So, I'd like to ask - what exactly are we debating
 again? :)
 
 When comparing VMs (systems that compile to bytecode) to just
 distributing the source code (potentially wrapping it in a bundle or
 framework that can automatically compile and run the source for the
 user), the latter inherits all the disadvantages of the VM (slow on
 first start, as the source code has to be compiled; the source or
 some other high-level source structures can be extracted; etc.). The
 only obvious advantage is that the source is readily available in
 case it's necessary to debug the application, but Java already has
 the option to include the source in the .jar file (although this
 causes it to include code in both bytecode and source).
 
 If we assume that all bytecode or source is compiled before it's run
 (nothing is interpreted), as should happen in a "perfect" VM, the
 term "VM" loses much of its original meaning. The only thing left is
 the restrictions imposed on the language (no unsafe constructs like
 pointers) and means to operate on the AST (reflection, code
 generation, etc.) Taking that into consideration, comparing a perfect
 "VM" with distributing native code seems to make slow start-up and
 the bulky VM runtime the only disadvantages of using VMs. (Have I
 abstracted so much that I'm forgetting something important here?)

I don't think you've forgotten anything important.
 
 5) it's much easier to provide security/isolation for VM
 languages. Although native code isolation can be done using
 hardware, it's complicated and inefficient.

The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!

Unfortunately, virtualization extensions are not available on all platforms - and implementing sandboxing on platforms where it's not supported by hardware would be quite complicated (involving disassembly, recompilation or interpretation).

I agree, but not in the case where source code is distributed and the compiler is controlled by the box, not the programmer.
 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.

I find that hard to believe.
 This allows VM languages to be safely embedded in places such as
 web pages (Flash for ActionScript, applets for Java, Silverlight
 for .NET).

It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.

This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level.

Right.
 I also thought of another point (though it only stands against
 distributing native code binaries, not self-compiling source code): 
 6) Bytecode can be compiled to optimized code for the specific
 environment it is run on (processor vendor and family). It's not a
 big plus, just a "nice" advantage.

That's often been touted, but it doesn't seem to produce much in the way of real results.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 Simon Peyton-Jones spoke at the recent HOPL (History of Programming
 Languages) conference.  His group was originally trying to come up with
 better hardware to directly execute the abstract machine used in Haskell.
 The problem they found was that because of the enormous advances in general
 purpose processors, even a simple simulation of the virtual machine on a
 modern PC ran faster than the hardware machine they could build.
 
 I'm not sure if this applies to the x86 that VirtualBox simulates, but it
 could easily be the case for something like the JVM.  A software JVM on a fast
 desktop machine is much faster than a hardware JVM on a small embedded
 system.

It sounds like they discovered that fast hardware (a fast desktop machine) runs faster than slow hardware (small embedded system)! But it doesn't sound like on the same machine they showed that software ran faster than hardware.
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop 
 machine) runs faster than slow hardware (small embedded system)! But it 
 doesn't sound like on the same machine they showed that software ran 
 faster than hardware.

It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively. Regards, Frank
Oct 23 2007
next sibling parent reply 0ffh <spam frankhirsch.net> writes:
David Wilson wrote:
 On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
 It sounds like they discovered that, for any reasonable amount of money to
 spend on hardware, they'd be better off with cheap, fast, but "wrong"
 off-the-shelf hardware and software emulation than with some expensive
 custom-made piece of metal that runs their stuff natively.

This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization, for various reasons - hardware nested page table support being the one most often mentioned: many common ops are still very expensive under current Vanderpool/Pacifica, some worse than cooperative virt. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware tech was pushed out the door to take advantage of the virtualization bubble a year or two ago. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both.

There is just *no* *effing* *way* anything could be faster without hardware support compared to with it. The ultimate hardware support is putting your software into silicon. No way around it, just forget it. Complain to glod. It's just like that. Regards, Frank
Oct 23 2007
parent reply BCS <ao pathlink.com> writes:
Reply to 0ffh,

 David Wilson wrote:
 
 On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
 
 It sounds like they discovered that, for any reasonable amount of
 money to spend on hardware, they'd be better off with cheap, fast,
 but "wrong" off-the-shelf hardware and software emulation than with
 some expensive custom-made piece of metal that runs their stuff
 natively.
 

This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization, for various reasons - hardware nested page table support being the one most often mentioned: many common ops are still very expensive under current Vanderpool/Pacifica, some worse than cooperative virt. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware tech was pushed out the door to take advantage of the virtualization bubble a year or two ago. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both.

There is just *no* *effing* *way* anything could be faster without hardware support compared to with it. The ultimate hardware support is putting your software into silicon. No way around it, just forget it. Complain to glod. It's just like that. Regards, Frank

OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
BCS wrote:
 Reply to 0ffh,
 There is just *no* *effing* *way* anything could be faster without
 hardware support compared to with it. The ultimate hardware support
 is putting your software into silicon. No way around it, just forget it.
 Complain to glod. It's just like that.
 Regards, Frank

OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.

Right now I ain't talking practical anymore. There was a clear challenge regarding basic truths. Basic truth is: Hardware is faster... :-P Regards, Frank
Oct 23 2007
parent reply BCS <ao pathlink.com> writes:
Reply to 0ffh,

 BCS wrote:
 
 Reply to 0ffh,
 
 There is just *no* *effing* *way* anything could be faster without
 hardware support compared to with it. The ultimate hardware support
 is putting your software into silicon. No way around it, just forget it.
 Complain to glod. It's just like that.
 Regards, Frank

OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.

challenge regarding basic truths. Basic truth is: Hardware is faster... :-P

I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.
 
 Regards, Frank
 

Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P

I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.

Well, I still have my FPGA kit.... heh! =) Regards, Frank
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
0ffh Wrote:

 BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P

I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.

Well, I still have my FPGA kit.... heh! =) Regards, Frank

I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment but perhaps in the future. Transmeta had a similar idea but they lost to the might of the Intel / AMD war. (http://en.wikipedia.org/wiki/Transmeta). How are you finding your kit? I was thinking about getting some kit a while back. Have you got a D 'compiler' for it or indeed anything that turns higher level code (other than VHDL) directly into hardware? Regards, Bruce.
Oct 23 2007
next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Bruce Adams wrote:
 0ffh Wrote:
 
 BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P

I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.

Regards, Frank

I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment but perhaps in the future. Transmeta had a similar idea but they lost to the might of the Intel / AMD war. (http://en.wikipedia.org/wiki/Transmeta). How are you finding your kit? I was thinking about getting some kit a while back. Have you got a D 'compiler' for it or indeed anything that turns higher level code (other than VHDL) directly into hardware? Regards, Bruce.

This is going way off-topic, but I remember reading a paper once where a group of researchers had taken some fairly well-optimised open-source speech recognition engine, and re-implemented it using an FPGA. Comparing against the software implementation running on a pretty fast machine, the FPGA blew it out of the water. It was something on the order of 10 times faster, on a fraction of the power, memory and clock speed. But yes, different tools for different jobs. I'm just pointing out that FPGAs have been used to improve performance beyond what could be done with a general purpose system. -- Daniel
Oct 24 2007
prev sibling parent 0ffh <spam frankhirsch.net> writes:
Bruce Adams wrote:
 0ffh Wrote:
 Well, I still have my FPGA kit.... heh! =)

I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment but perhaps in the future.

Actually, I find FPGAs quite affordable already. It's just that the development software used to be incredibly expensive, but that changed a few years ago (at least for some manufacturers).
 How are you finding your kit?

It's fun but not much use without a hardware guy around. Luckily I know a few... =)
 I was thinking about getting some kit a while back. Have you
 got a D 'compiler' for it or indeed anything that turns higher level
 code (other than VHDL) directly into hardware?

No, I'm using Verilog and dreaming I had something /much/ better. VHDL is sure not it. I kinda like ABEL (no kiddin) because it's just so incredibly easy to use, but people keep telling me it scales badly or whatever. Regards, Frank
Oct 24 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
0ffh wrote:
 Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop 
 machine) runs faster than slow hardware (small embedded system)! But 
 it doesn't sound like on the same machine they showed that software 
 ran faster than hardware.

It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively.

That's often true for embedded systems.
Oct 23 2007
prev sibling parent Reiner Pope <some address.com> writes:
Vladimir Panteleev wrote:
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).

It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.

This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level.

However, work has been done on Proof Carrying Code and Typed Assembly Language so that you can distribute code as assembly, and still have the security policy enforceable. I don't know how developed this is, though. -- Reiner
Oct 23 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:39:51 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
 <thecybershadow gmail.com> wrote:

 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 Except that .net cannot talk to C or C++ code, which are the
 usual languages for applications.

can also define the exact layout of structures, and thus share data structures with native code.

Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows).

Thanks for the reference. It says that the parameters must go through "marshalling", which means they go through a translation layer.

Quite so - I hope the JIT compiler generates proper optimal native code, though. (sorry, saw the e-mail reply before the NG reply) -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:54:40 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 I've never been able to discover what the fundamental advantage of
 a VM is.

Some of the things which are only possible, or a good deal easier to use/implement with VMs: 1) code generation - used very seldom, it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?)

Are you referring to a JIT? JITs aren't easier to implement than a compiler back end.

I'm referring to using the standard library to emit code. This makes it possible to generate arbitrary code at runtime, without having to bundle a compiler or compiler components with your program. Integration with existing code is also available, so you could create an on-the-fly class that is derived from a "hard-coded" class in the application. The use case I mentioned is genetic programming - a technique where evolutionary algorithms are applied to bytecode programs, and in this case it is desirable for the generated programs to run at maximum speed without compromising the host's stability.
 2) VMs make modularity much easier in that you don't have to
 recompile all modules ("plugins") on all platforms, which is often
 not possible with projects whose core supports many platforms, but
 most developers don't have access to all supported platforms.

Problem is solved by defining your ".class" file to be compressed source code. Dealing with back end bugs on platform X is no different from dealing with VM bugs on platform X. Java is infamous for "compile once, debug everywhere".

Yes, though I didn't mention debugging. Otherwise, see below.
 3) very flexible reflection - like being able to derive from classes
 in other modules. Though this can be done in native languages by
 including enough metadata, most compiled languages don't.

I think this is possible with compiled languages, but nobody has done it yet.

I believe DDL was going in that direction.
 4) compilation is not a simple process for most computer users out
 there.

Since the VM includes a JIT (a compiler) and runs it transparently to the user, there's no reason that compiler couldn't compile source code into native code transparently to the user.

Indeed. In fact, most of the issues I mentioned can be solved by distributing source code instead of intermediary bytecode. Actually, if you compare the Java/.NET VM with a hypothetical system which compiles the source code and runs the binary on the fly, the difference is pretty low - it's just that bytecode is one level lower than source code (and source code parsing/lexing would slow down compilation to native code by some degree). I don't think it would be hard to turn D into a VM just like .NET - just split the front-end from the back-end, make the front-end serialize the AST and distribute a back-end that reads ASTs, "JITs" them, links to Phobos/other libraries and runs them. You could even scan the AST for unsafe code (pointers, some types of casts), add that with forced bounds checking, and you have a "safe" D VM/compiler. So, I'd like to ask - what exactly are we debating again? :) When comparing VMs (systems that compile to bytecode) to just distributing the source code (potentially wrapping it in a bundle or framework that can automatically compile and run the source for the user), the latter inherits all the disadvantages of the VM (slow on first start, as the source code has to be compiled; the source or some other high-level source structures can be extracted; etc.). The only obvious advantage is that the source is readily available in case it's necessary to debug the application, but Java already has the option to include the source in the .jar file (although this causes it to include code in both bytecode and source). If we assume that all bytecode or source is compiled before it's run (nothing is interpreted), as should happen in a "perfect" VM, the term "VM" loses much of its original meaning. The only thing left is the restrictions imposed on the language (no unsafe constructs like pointers) and means to operate on the AST (reflection, code generation, etc.) Taking that into consideration, comparing a perfect "VM" with distributing native code seems to make slow start-up and the bulky VM runtime the only disadvantages of using VMs. (Have I abstracted so much that I'm forgetting something important here?)
 5) it's much easier to provide security/isolation for VM languages.
 Although native code isolation can be done using hardware, it's
 complicated and inefficient.

The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!

Unfortunately, virtualization extensions are not available on all platforms - and implementing sandboxing on platforms where it's not supported by hardware would be quite complicated (involving disassembly, recompilation or interpretation). VirtualBox is a nice part-open-source virtualization product, and they stated that the software virtualization they implemented is faster than today's hardware virtualization.
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).

It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.

This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level. I also thought of another point (though it only stands against distributing native code binaries, not self-compiling source code): 6) Bytecode can be compiled to optimized code for the specific environment it is run on (processor vendor and family). It's not a big plus, just a "nice" advantage. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling next sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 02:14:29AM -0700, Walter Bright wrote:

 If the source code is portable, i.e. there is no undefined or 
 implementation defined behavior, there's no reason that the VM object code 
 should be more portable than the source. (And remember all the troubles 
 with Java VMs behaving differently?)

In the smartphone market, source is almost never distributed. Having multiple architectures would require every software vendor to support every desired architecture. Having a VM allows them to easily distribute a product that works on all phones instead of one. For this market, at least, native code isn't really even an option. Of course, since nearly all smart phones use a single processor (ARM), this really isn't applicable. The smart phone people do like the sandbox aspect as well, though. David
Oct 22 2007
prev sibling next sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 02:29:04AM -0700, Walter Bright wrote:

  d) For debugging purposes. Since p-code is interpreted, the interpreter 
 can apply many additional runtime checks that are harder to implement with 
 native code."

That's a crock.

It's not that they're harder to implement; it's a question of who you trust to be making the runtime tests. A VM can perform runtime tests, and if you trust the VM, you can run untrusted programs and trust that they won't do forbidden things (at least in theory, if your VM is perfect). You can't do this if you're trusting that the checks were done by whatever compiler compiled the program you're running. Both .NET and Java do this. They allow a kind of sandboxed operation of untrusted code. It's almost impossible to do this with native code, since most of the type information is gone at that point. While still in the VM, pointer types can be tested, and casts and such forbidden. David
Oct 22 2007
prev sibling next sibling parent reply =?ISO-8859-1?Q?Julio_C=E9sar_Carrascal_Urquijo?= writes:
Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM is.

The only advantage a VM has over native code that I see is security. I'm not talking about "this process can't write to the memory of another process"; I'm talking about "this process can't write to the hard disk, only to Isolated Storage, but this other one can, because it's signed with a Thawte certificate that the VM trusts". This is a lot more than disallowing pointer arithmetic. I'm not aware of any compiled language that has managed to do this. On the other hand, most .NET developers ignore CAS (Code Access Security) in their apps, so it doesn't seem like a great advantage anyway. -- Julio César Carrascal Urquijo http://www.artelogico.com/
Oct 22 2007
parent Christopher Wright <dhasenan gmail.com> writes:
Julio César Carrascal Urquijo wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.

The only advantage a VM has over native code that I see is security. I'm not talking about this process can't write memory of another process. I'm talking about this process can't write to the hard disk, only to Isolated Storage; but this one can because it's signed by a Thawte certificate and the VM.

This policy should be carried out at the operating system level for any reasonable assurance of security.
 This is a lot more than disallowing pointer arithmetic. I'm not aware of 
 any compiled language that has managed to do this.

C + SELinux? If your language doesn't have a VM, the VM can't check any certificates, only the OS. The reverse is not true -- your OS can check VM-bound applications' certificates, depending on how VM applications are launched and whether the VM cooperates. Though in SELinux, you don't have certificates; you have a complex set of permissions, essentially, that some really dedicated person has to come up with.
 On the other hand, most .NET developers ignore CAS (Code Access 
 Security) in their apps, so it doesn't seem like a great advantage anyway.

Nobody uses SELinux, either, so that's okay.
Oct 23 2007
prev sibling next sibling parent reply Clay Smith <clayasaurus gmail.com> writes:
Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compile down to the same 'bytecode' so they can all interoperate with each other.
Oct 22 2007
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Clay Smith" wrote
 Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

 The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compile down to the same 'bytecode' so they can all interoperate with each other.

The future is now :) .net does this. C++.net and J# (Java-like .net) and C# and VB and COBOL.net, and oh I don't know, look at this page: http://en.wikipedia.org/wiki/.NET_Languages BTW, I don't think the result is an advantage, as in practice, the language is more important than the object format, so you still end up only using the best language (in my mind, C# is the best .net language). All these languages must use the .net library to be compatible. -Steve
Oct 23 2007
parent Clay Smith <clayasaurus gmail.com> writes:
Steven Schveighoffer wrote:
 "Clay Smith" wrote
 Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.

I've never been able to discover what the fundamental advantage of a VM is.

The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compile down to the same 'bytecode' so they can all interoperate with each other.

The future is now :) .net does this. C++.net and J# (Java-like .net) and C# and VB and COBOL.net, and oh I don't know, look at this page: http://en.wikipedia.org/wiki/.NET_Languages BTW, I don't think the result is an advantage, as in practice, the language is more important than the object format, so you still end up only using the best language (in my mind, C# is the best .net language). All these languages must use the .net library to be compatible. -Steve

The .NET languages do look really promising in this regard; the problem is that Microsoft only supports its own platform. Mono might catch up, but if Microsoft decides to be evil, it could easily pull the rug out from under Mono by changing the specs or unleashing its army of lawyers. What I'm envisioning is something that is not tied to any one platform or corporation and is open source, with no one claiming to own the technology. Of course, it would require all programmers to agree on a common ground, so it is probably unrealistic.
Oct 23 2007
prev sibling next sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 08:54:01PM -0700, Walter Bright wrote:

 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.

I find that hard to believe.

Simon Peyton-Jones spoke at the recent HOPL (History of Programming Languages) conference. His group was originally trying to come up with better hardware to directly execute the abstract machine used in Haskell. The problem they found was that, because of the enormous advances in general-purpose processors, even a simple simulation of the virtual machine on a modern PC ran faster than the hardware machine they could build. I'm not sure whether this applies to the x86 that VirtualBox simulates, but it could easily be the case for something like the JVM. A software JVM on a fast desktop machine is much faster than a hardware JVM on a small embedded system. David
Oct 22 2007
prev sibling next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 06:54:01 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.
 
 I find that hard to believe.

I should have included this link to support that: http://www.virtualbox.org/wiki/Developer_FAQ (see the 2nd Q/A pair). -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 23 2007
prev sibling parent "David Wilson" <dw botanicus.net> writes:
On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
 Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop
 machine) runs faster than slow hardware (small embedded system)! But it
 doesn't sound like on the same machine they showed that software ran
 faster than hardware.

It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware plus software emulation than with some expensive custom-made piece of metal that runs their stuff natively.

This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization, for various reasons; hardware nested page table support (or rather its absence) is the one most often mentioned -- many common operations are still very expensive under current Vanderpool/Pacifica, some worse than under cooperative virtualization. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware was pushed out the door a year or two ago to take advantage of the virtualization bubble. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both. Thanks, David. "A little knowledge is dangerous"
 Regards, Frank

Oct 23 2007