
digitalmars.D - Re: D vs. C#

reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Walter Bright Wrote:

 Roberto Mariottini wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:

 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.

It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.

And not only that: if my product is compiled for Java-CLDC, it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, even those that don't exist today and will be made in the future.

Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.

Imagine it as a compatibility layer or a shared library. If my OS supports POSIX I can develop for POSIX. If I develop for windows as well I have to learn and use other APIs. A VM is just a special kind of API that provides a language backend and interpreter.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
 supports POSIX I can develop for POSIX. If I develop for windows as
 well I have to learn and use other APIs. A VM is just a special kind
 of API that provides a language backend and interpreter.

It can be thought of that way, it's just entirely unnecessary to achieve those goals, and it throws in a bunch of problems:

1) perennial performance issues
2) incompatibility and lack of interoperability with native languages
3) gigantic runtimes needed
Oct 22 2007
next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 3) gigantic runtimes needed

IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked into every EXE, causing lots of repeated code in a product with lots of binaries (such as an operating system). .NET executables are much smaller than most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferable to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size.

-- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 3) gigantic runtimes needed

IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked in every EXE, causing lots of repeating code in a product with lots of binaries (such as an operating system). .NET executables are much smaller compared to most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferred to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size.

This problem is addressed by DLLs (Windows) and shared libraries (Linux).
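
For instance, on the native side a library just exports plain symbols that clients link against. A minimal sketch in D (hypothetical names; assuming Windows-style export, with DllMain and runtime-initialization boilerplate omitted):

    module mylib;

    // Exported from the DLL / shared library; extern(C) keeps the name
    // unmangled so clients in any language can link against it.
    export extern (C) int add(int a, int b)
    {
        return a + b;
    }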
Oct 22 2007
next sibling parent reply Jascha Wetzel <firstname mainia.de> writes:
Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
 
 With .NET, you can derive from a class in a compiled assembly without having
access to the source. You just add the assembly in the project's dependencies
and import the namespace with "using". In C, you must use the included .h files
(and .h files are a pain to maintain anyway since you must maintain the
declaration and implementation separately, but that's not news to you). You
must still use .lib and .di files with D and such - although they can be
automated in the build process, it's still a hassle. 
 
 Besides that, statically linking in the runtime seems to be a too common
practice, as "DLL hell" has been a discouragement for dynamically-linked
libraries in the past (side-by-side assemblies is supposed to remedy that
though). I guess the fault is not in the DLLs themselves, it's how people and
Microsoft used them... 
 

That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized its ABI, this problem would probably not exist today.
Oct 23 2007
parent reply "Chris Miller" <chris dprogramming.com> writes:
On Tue, 23 Oct 2007 07:28:42 -0400, Jascha Wetzel <firstname mainia.de>  
wrote:

 Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
  With .NET, you can derive from a class in a compiled assembly without  
 having access to the source. You just add the assembly in the project's  
 dependencies and import the namespace with "using". In C, you must use  
 the included .h files (and .h files are a pain to maintain anyway since  
 you must maintain the declaration and implementation separately, but  
 that's not news to you). You must still use .lib and .di files with D  
 and such - although they can be automated in the build process, it's  
 still a hassle.  Besides that, statically linking in the runtime seems  
 to be a too common practice, as "DLL hell" has been a discouragement  
 for dynamically-linked libraries in the past (side-by-side assemblies  
 is supposed to remedy that though). I guess the fault is not in the  
 DLLs themselves, it's how people and Microsoft used them...

That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized it's ABI, this problem would probably not exist today.

http://www.codesourcery.com/cxx-abi/ I don't know the whole deal, but I guess some decided not to go by this; I don't even know if DMC does or not.
Oct 23 2007
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Chris Miller wrote:
   http://www.codesourcery.com/cxx-abi/
 I don't know the whole deal, but I guess some decided not to go by this; 
 I don't even know if DMC does or not.

DMC++ follows the Microsoft C++ ABI under Windows.
Oct 23 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Chris Miller wrote:
 On Tue, 23 Oct 2007 07:28:42 -0400, Jascha Wetzel <firstname mainia.de> 
 wrote:
 
 Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
  With .NET, you can derive from a class in a compiled assembly 
 without having access to the source. You just add the assembly in the 
 project's dependencies and import the namespace with "using". In C, 
 you must use the included .h files (and .h files are a pain to 
 maintain anyway since you must maintain the declaration and 
 implementation separately, but that's not news to you). You must 
 still use .lib and .di files with D and such - although they can be 
 automated in the build process, it's still a hassle.  Besides that, 
 statically linking in the runtime seems to be a too common practice, 
 as "DLL hell" has been a discouragement for dynamically-linked 
 libraries in the past (side-by-side assemblies is supposed to remedy 
 that though). I guess the fault is not in the DLLs themselves, it's 
 how people and Microsoft used them...

That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized it's ABI, this problem would probably not exist today.

http://www.codesourcery.com/cxx-abi/ I don't know the whole deal, but I guess some decided not to go by this; I don't even know if DMC does or not.

That was added after the fact. Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. It's ridiculous. I don't know if D will be able to support a common ABI across both platforms.
Oct 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. It's 
 ridiculous. I don't know if D will be able to support a common ABI 
 across both platforms.

dmd already supports two different ABIs - win32 and linux 32. There are numerous subtle differences in calling conventions, alignment, register usage, as well as the major one of name mangling.
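
For example, a minimal sketch using DMD's built-in .mangleof property (the declarations are hypothetical); the strings it prints differ between targets, which is part of what the compiler has to track per ABI:

    extern (C) int cfunc(int x);    // C linkage: essentially the plain name,
                                    // possibly with a leading underscore
    int dfunc(int x) { return x; }  // D linkage: module, name and type encoded

    pragma(msg, cfunc.mangleof);    // printed at compile time
    pragma(msg, dfunc.mangleof);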
Oct 24 2007
parent Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Don Clugston wrote:
 Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. 
 It's ridiculous. I don't know if D will be able to support a common 
 ABI across both platforms.

dmd already supports two different ABIs - win32 and linux 32. There are numerous subtle differences in calling conventions, alignment, register usage, as well as the major one of name mangling.

The Linux64/Win64 difference is worse, though. It's possible to have a pure asm function which will work on both Linux32 and Win32; that's not possible for the 64 bit case. But I'm most worried about the requirements for system exception handling.
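
For instance, under extern(C) the first integer argument arrives in RCX on Win64 but in RDI under the System V x86-64 ABI, so a naked asm body can't be shared between the two the way a 32-bit cdecl one can. A minimal sketch (hypothetical function, assuming DMD-style inline asm and the predefined version identifiers):

    extern (C) int firstArg(int a)
    {
        version (Win64)
        {
            asm { naked; mov EAX, ECX; ret; }      // Win64: first arg in RCX
        }
        else version (X86_64)
        {
            asm { naked; mov EAX, EDI; ret; }      // System V: first arg in RDI
        }
        else
        {
            asm { naked; mov EAX, [ESP+4]; ret; }  // 32-bit cdecl: on the stack,
                                                   // same on Win32 and Linux32
        }
    }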
Oct 26 2007
prev sibling next sibling parent reply serg kovrov <sergk mailinator.com> writes:
Walter Bright wrote:
 This problem is addressed by DLLs (Windows) and shared libraries (Linux).

I wanted to ask a long time ago: will the D runtime be available as a dll/so? Sorry if this was asked/answered before, I didn't manage to find it. -- serg
Oct 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
serg kovrov wrote:
 Walter Bright wrote:
 This problem is addressed by DLLs (Windows) and shared libraries (Linux).

I wanted to ask long time ago, will D-runtime be available as dll/so?

Eventually, yes. It just lacks someone working on it.
Oct 23 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).

Yes, but that's a language bug, not anything inherent to native compilers.
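
For instance, a D module carries its own declarations, so there is no separate header to keep in sync with the implementation. A minimal sketch (module and class names are hypothetical):

    // shapes.d -- the library module; nothing else to maintain
    module shapes;

    class Shape
    {
        double area() { return 0; }
    }

    // app.d -- derives from the library class just by importing the module
    module app;

    import shapes;

    class Circle : Shape
    {
        double r;
        this(double r) { this.r = r; }
        override double area() { return 3.14159 * r * r; }
    }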
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.

D has the potential to do better, it's just that it's a bit mired in the old school.
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...

The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.
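
One way to do that is to bake a generated version stamp into each build and have the loading code check it before using the rest of the interface. A minimal sketch (the names are hypothetical, not an existing runtime facility):

    module mylib.buildinfo;

    // The string would be generated by the build script for every build.
    export extern (C) const(char)* mylib_build_version()
    {
        return "1.2.0-build20071024";
    }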
Oct 23 2007
next sibling parent Kyle Furlong <kylefurlong gmail.com> writes:
Walter Bright wrote:
 Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).

Yes, but that's a language bug, not anything inherent to native compilers.
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.

D has the potential to do better, it's just that its a bit mired in the old school.

What do you envision as better for the future? Or were you just speaking hypothetically? Will link compatibility be kept for 2.0, 3.0 etc?
 
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...

The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.

Oct 23 2007
prev sibling parent davidl <davidl 126.com> writes:
On Wed, 24 Oct 2007 08:54:16 +0800, Walter Bright
<newshound1 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).

Yes, but that's a language bug, not anything inherent to native compilers.
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.

D has the potential to do better, it's just that its a bit mired in the old school.
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...

The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.

The solution is banning those guys from creating ever-changing DLL/shared libraries. They just have no idea of what DLLs are and how DLLs should be. Generating versions is a bad idea. Consider Firefox with its tons of plugins. Almost all the plugins I use actually work well with *any* Firefox version; it just bothers me to have to change the version number in the jar file, because the Firefox APIs and JavaScript are fixed, so the interface the plugins rely on is fixed. That's basically how and what DLLs should be. Interfaces interact with design. I can't imagine a good design yielding a changing interface.

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 23 2007
prev sibling next sibling parent "Janice Caron" <caron800 googlemail.com> writes:
On 10/22/07, Walter Bright <newshound1 digitalmars.com> wrote:
 3) gigantic runtimes needed

This one is the killer for me. Java is huge. .NET is even bigger. I'm just not interested in putting that much bloat onto my machine just to run the odd one or two programs.
Oct 22 2007
prev sibling next sibling parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Walter Bright Wrote:

 Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
 supports POSIX I can develop for POSIX. If I develop for windows as
 well I have to learn and use other APIs. A VM is just a special kind
 of API that provides a language backend and interpreter.

It can be thought of that way, it's just entirely unnecessary to achieve those goals, and throws in a bunch of problems: 1) perennial performance issues

The difference is what you are optimising the performance of. A dynamic language using a VM is optimising the programmer's performance by allowing them to skip the compilation step at the expense of slower code.
 2) incompatibility and inoperability with native languages

This is partly by design. A VM operating as a sandbox should not be able to go down to the hardware level. However I think good interoperability has been demonstrated. Most scripting languages sport a way of writing extensions. These must be executed by the VM somehow. And then there's SWIG for automating the generation of wrappers.
 3) gigantic runtimes needed

An interpreter itself is relatively small. I can only assume that a lot of the bloat is down to bad coding. If you look at games these days, they weigh in at a ridiculous 4GB install. No amount of uncompressed data for performance gain excuses that. I suspect it's the same sloppy coding for VMs on a smaller scale. It would not surprise me to see much smaller (and more elegantly designed) runtimes on devices such as PDAs where the bloat cannot be tolerated. <asbestos suit> I wonder if the compile-time side of D might benefit from running inside a VM when people start to do really evil and complicated things with it. </asbestos suit>
Oct 22 2007
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bruce Adams wrote:
 Walter Bright Wrote:
 
 Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
  supports POSIX I can develop for POSIX. If I develop for windows
 as well I have to learn and use other APIs. A VM is just a
 special kind of API that provides a language backend and
 interpreter.

It can be thought of that way, it's just entirely unnecessary to achieve those goals, and throws in a bunch of problems: 1) perennial performance issues

The difference is what you are optimising the performance of. A dynamic language using a VM is optimising the programmers performance by allowing them to skip the compilation step at the expense of slower code.

I bet D compiles faster to native code <g>. In any case, I was talking about performance of apps, not the edit/compile/debug loop.
 2) incompatibility and inoperability with native languages

This is partly by design. A VM operating as a sandbox should not be able to go down to the hardware level. However I think good interoperability has been demonstrated. Most scripting languages sport a way of writing extensions. These must be executed by the VM somehow. And then there's SWIG for automating the generation of wrappers.

VMs go through some sort of marshalling and compatibility layer to connect to the outside world. Native languages can connect directly.
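
For instance, D can call straight into the C runtime with nothing more than a declaration, with no wrapper generator or marshalling layer in between. A minimal sketch:

    // Declare the C function with C linkage, then just call it.
    extern (C) int puts(const(char)* s);

    void main()
    {
        puts("calling the C runtime directly from D");
    }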
 3) gigantic runtimes needed

An interpreter itself is relatively small. I can only assume that a lot of the bloat is down to bad coding. If you look at games these days they weigh in at a ridiculous 4Gb install. No amount of uncompressed data for performance gain excuses that. I suspect its the same sloppy coding for VMs on a smaller scale. It would not surprise me to see much smaller (and more elegantly designed) run-times on devices such as PDAs where the bloat cannot be tolerated.

The reason the gigantic runtimes are needed is because the VM has to carry around with it essentially an entire small operating system's worth of libraries. They all have to be there, not just the ones the app actually uses. The VM winds up duplicating much of the functionality of the underlying OS APIs.
 <asbestos suit> I wonder if the compile time side of D might benefit
 from running inside a VM when people start to do really evil and
 complicated things with it. </asbestos suit>

I don't think D compile times have been a problem <g>.
Oct 22 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bruce Adams wrote:
 
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 

It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Bill Baxter Wrote:

 Bruce Adams wrote:
 
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 

It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb

That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and models) that are huge and hugely inefficient with it, describing low-level details with little or no abstraction. E.g. a pyramid might be made of points rather than recognising a pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. It's only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.
Oct 23 2007
parent reply Nathan Reed <nathaniel.reed gmail.com> writes:
Bruce Adams wrote:
 Bill Baxter Wrote:
 
 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 

It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb

That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and model) that are huge and hugely inefficient with it, describing low level details with little or no abstraction. E.g. a pyramid might made of points rather than recognising a pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. Its only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.

Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Nathan Reed Wrote:

 Bruce Adams wrote:
 Bill Baxter Wrote:
 
 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 

It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb

That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and model) that are huge and hugely inefficient with it, describing low level details with little or no abstraction. E.g. a pyramid might made of points rather than recognising a pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. Its only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.

Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed

Don't get hung up on the geometry example. My example generator is broken. It is my contention that both the performance and compactness can be improved given the time and effort. I imagine it varies a lot from shop to shop, but typically, from what I hear, they are working to tight deadlines with poor processes. Hopefully they still at least use the rule "get it right, then get it fast", but they miss off the "then get small" at the end. The huge install sizes and huge patches to supposedly "complete" games are one result of this. Battlefield 2 is painfully slow to load each tiny level and yet still has a 4GB install. It's almost a part of the package now. If someone released a game that only needed a CD and not a DVD, a lot of people would (wrongly) assume it was less feature-rich than the DVD version. Take a look at a good shareware game and you see more of the full craft at work, partly because download sizes are restrictive (though less so than they were).
Oct 23 2007
parent Robert Fraser <fraserofthenight gmail.com> writes:
Bruce Adams wrote:
 Nathan Reed Wrote:
 
 Bruce Adams wrote:
 Bill Baxter Wrote:

 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 

It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb


Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed

Don't get hung up on the geometry example. My example generator is broken. It is my contention that both the performance and compactness can be improved given the time and effort. I imagine it varies a lot from shop to shop but typically from what I hear they are working to tight deadlines with poor processes. Hopefully they still at least use the rule "get it right, then get it fast" but they miss off the "then get small" at the end. The huge install sizes and huge patches to supposedly "complete" games are one result of this. Battlefield 2 is painful slow to load each tiny level and yet still has a 4Gb install. Its almost a part of the package now. If someone realised a game that only needed a CD and not a DVD a lot of people would (wrongly) assume it was less feature rich than the DVD version. Take a look at a good shareware game and you see more of the full craft at work parly because download sizes are restrictive (though less so than they were).

I'm guessing it's not cost-efficient to spend development time on minimizing file size, since most PC gamers probably don't care. At $30/hour of developer time, it's hard to justify investing in something that's a non-issue to most of the audience... although, with online distribution systems like Steam so popular, it's becoming a bigger issue now.
Oct 23 2007
prev sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:56:28 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 3) gigantic runtimes needed

IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked in every EXE, causing lots of repeating code in a product with lots of binaries (such as an operating system). .NET executables are much smaller compared to most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferred to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size.

This problem is addressed by DLLs (Windows) and shared libraries (Linux).

Except, they're not really as easy to use. With .NET, you can derive from a class in a compiled assembly without having access to the source. You just add the assembly in the project's dependencies and import the namespace with "using". In C, you must use the included .h files (and .h files are a pain to maintain anyway, since you must maintain the declaration and implementation separately, but that's not news to you). You must still use .lib and .di files with D and such - although they can be automated in the build process, it's still a hassle. Besides that, statically linking in the runtime seems to be too common a practice, as "DLL hell" has been a discouragement for dynamically-linked libraries in the past (side-by-side assemblies are supposed to remedy that, though). I guess the fault is not in the DLLs themselves, it's how people and Microsoft used them...

-- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007