
D - Project feedback - debugger

Frank Wills <fdwills sandarh.com> writes:
Please give me some feedback on this:

Summary: Develop a debugger for D.

Choices:

  1) Jump in with D: Develop using D,
with limited debugger support. I like
coding in D, but I like to use a
debugger to examine code carefully
during runtime. This is where I have
started, but it's slow going without
a fully supportive debugger.

  2) C++ bootstrap: Develop using C++,
using existing C++ code as foundation.
Use this debugger to support rewriting
the debugger in D.

  3) C# bootstrap: Same as 2. C# might
allow for more rapid development, but
may have serious limitations for low
level programming.

I'm interested in developing a debugger
for D, and ultimately I intend for it to
be written in D. However the current state
of using debuggers with D is not optimal,
although I have had good success with MS's
SDK debugger and the debugger that is in
MS's Visual Studio.

I have a lot of code that I have developed
in C++ which could easily serve as the
foundation for writing the debugger, and
the advantage there would be that I would
be working with a lot of code that I am
very familiar with and that I have refined
over some length of time. There would also
be no problem doing low level programming.

I could also use C# for the initial version,
and that would to some extent allow me to
work quickly, as C# is very easy to use
and the MS framework allows for rapid
application development. It is very possible
that C# would get in the way of getting
close to the system when writing a debugger.
(P.S. I am using the OSS SharpDevelop
.NET IDE and MS's free SDK.) I can't say
I am wild about using C# because of the VM
it uses, and the need for others to install
.NET to use this C# version of the debugger.
Aug 06 2003
"Charles Sanders" <sanders-consulting comcast.net> writes:
Great idea I look forward to the debugger!

   2) C++ bootstrap: Develop using C++,
 using existing C++ code as foundation.
 Use this debugger to support rewriting
 the debugger in D.

I vote for this option. If you already have a large code base which you can use, then definitely go this route; give D time to mature a bit and then rewrite if you like.

Charles

"Frank Wills" <fdwills sandarh.com> wrote in message news:bgrteh$1lpf$1 digitaldaemon.com...

Aug 06 2003
Frank Wills <fdwills sandarh.com> writes:
Thanks. After some work getting started
in D I began to think about how much more
I would want good debugging support as the
project became more complex. Visual Studio
has good debugging support for C++.

Charles Sanders wrote:
 Great idea I look forward to the debugger!
 
 
  2) C++ bootstrap: Develop using C++,
using existing C++ code as foundation.
Use this debugger to support rewriting
the debugger in D.

I vote for this option. If you already have a large code base which you can use, then definitely go this route; give D time to mature a bit and then rewrite if you like. Charles

Aug 06 2003
"Matthew Wilson" <matthew stlsoft.org> writes:
My 2 pence-worth:

1. The debugger that is released with D 1.0 must be able to debug D *and*
the C/C++ that some D calls will make into C-interface libraries. This is
essential. To my mind this cannot succeed without some debugging expertise
(such as yourself, maybe) and Walter working in concert. I know there's been
a debate on the validity of debuggers as a software development tool on one
of the other NGs, but I cannot see how D will be taken seriously unless it
has a very competent debugger. Since we will inevitably (and quite rightly
too) get some complex and, one might say, obfuscated code once the templates
and the other libraries get going, a debugger may be an essential learning
tool as well.

2. The debugger should preferentially be written in D, with the odd
C-library call thrown in for good measure. It should *not* be written in any
.NET language or in Java, as that will lead to a big fat f**ker of a thing
that people will be (rightly) prejudiced against: if it's in C#, how is it
going to port to other architectures? If it's written in Java, it'll run
like a dead pig. In either case, people will have to install VMs and have
gargantuan disks, blah blah. Not the D way at all.

3. The debugger infrastructure *must* be modularisable (sic.), such that it
can be plugged into DMC++'s IDDE, DIDE, VS.NET (once we've worked out how to
do that), CodeWarrior, and the various other popular IDDEs. This will have
to include COM (and I'll gladly help here), but also an open non-OS-specific
interface. (The COM would probably be just a simple layer over that for
Win32 versions)



"Frank Wills" <fdwills sandarh.com> wrote in message
news:bgrteh$1lpf$1 digitaldaemon.com...

Aug 06 2003
Frank Wills <fdwills sandarh.com> writes:
Thanks for the feedback and insights, and your offer
of help with COM.

Matthew Wilson wrote:
 My 2 pence-worth:
 
 1. The debugger that is released with D 1.0 must be able to debug D *and*
 the C/C++ that some D calls will make into C-interface libraries. This is
 essential.  To my mind this cannot succeed without some debugging expertise
 (such as yourself, maybe) and Walter working in concert. I know there's been
 a debate on the validity of debuggers as a software development tool on one
 of the other NGs, but I cannot see how D will be taken seriously unless it
 has a very competent debugger. Since we will inevitably (and quite right
 too) get some complex and one might say obfuscated code once the templates
 and the other libraries get going, a debugger may be an essential learning
 tool as well.

I can't imagine who wouldn't want to take a close look at code and data while a program is running. It's just another layer of inspection (on top of careful design and coding). To me, not examining everything carefully with a debugger is reckless.
 1. The debugger that is released with D 1.0 must be able to debug
 D *and* the C/C++ that some D calls will make into C-interface
 libraries.

That's a good point.
 
 2. The debugger should be preferentially written in D, with the odd
 C-library call thrown in for good measure. It should *not* be written in any
 .NET or Java, as that will lead to a bit fat f**ker of a thing, that people
 will be (rightly) prejudiced against: if it's in C# then how is it going to
 port to other architectures, if it's written in Java it'll run like a dead
 pig. In either case, people will have to install VMs, and have gargantuan
 disks, blah blah. Not the D way at all.

I'd _like_ to have a fully supportive debugger if I'm writing a debugger. Otherwise it would be too much work and go too slowly. That's what got me thinking about falling back on C++ as a bootstrap project for a D version. Of course I would enjoy coding in D more than in C++.
 It should *not* be written in any
 .NET or Java, as that will lead to a bit fat f**ker of a thing

I was disappointed that MS went the route of using a VM. C# is really nice to use, but a full-featured app that I had written in C++ has a memory footprint of about 175K. Just the start of a rewrite of that app in C# has a memory footprint more than 100 times larger, at 21,000K.
 3. The debugger infrastructure *must* be modularisable (sic.), such that it
 can be plugged into DMC++'s IDDE, DIDE, VS.NET (once we've worked out how to
 do that), CodeWarrior, and the various other popular IDDEs. This will have
 to include COM (and I'll gladly help here), but also an open non-OS-specific
 interface. (The COM would probably be just a simple layer over that for
 Win32 versions)

I've been thinking that way in general, especially as others are working on their own IDEs.
 
 

Aug 06 2003
John Reimer <jjreimer telus.net> writes:
 I was disappointed that MS went the route of using a VM. C# is
 really nice to use, but a full featured app that I had written in
 C++ has a memory footprint of about 175K. Just the start of a
 rewrite of that app in C# has a memory footprint more than 100
 times larger at 21,000K.
 

Excuse me?! You mean almost 20 MB? Whoa! Who would want to use C# if the app ends up that big? I can hardly imagine people supporting and using such a technology! I had no idea...

Later,
John
Aug 06 2003
"Matthew Wilson" <dmd synesis.com.au> writes:
"John Reimer" <jjreimer telus.net> wrote in message
news:3F31CB6A.4000406 telus.net...
 I was disappointed that MS went the route of using a VM. C# is
 really nice to use, but a full featured app that I had written in
 C++ has a memory footprint of about 175K. Just the start of a
 rewrite of that app in C# has a memory footprint more than 100
 times larger at 21,000K.

Excuse me?! You mean almost 20 MB? Woah! Who would want to use C# if the app ends up that big? I can hardly imagine people supporting and using such a technology!

For all that I find such bloat inexcusable (not to say risible!), it is not the case that the memory sizes are integral multiples of each other; more likely (though not absolutely) it represents a fixed overhead, i.e.

  C++    C#
  175K   20MB
  2MB    23MB
  4MB    26MB

and such. (Of course, I just invented those figures...)
Aug 06 2003
Frank Wills <fdwills sandarh.com> writes:
Matthew Wilson wrote:
 "John Reimer" <jjreimer telus.net> wrote in message
 news:3F31CB6A.4000406 telus.net...
 
I was disappointed that MS went the route of using a VM. C# is
really nice to use, but a full featured app that I had written in
C++ has a memory footprint of about 175K. Just the start of a
rewrite of that app in C# has a memory footprint more that 100
times larger at 21,000K.

Excuse me?! You mean almost 20 MB? Woah! Who would want to use C# if the app ends up that big? I can hardly imagine people supporting and using such a technology!

For all that I find such bloat inexcusable (not to say risible!), it is not the case that the memory sizes are integral multiples of each other; more likely (though not absolutely) it represents a fixed overhead, i.e.

  C++    C#
  175K   20MB
  2MB    23MB
  4MB    26MB

and such. (Of course, I just invented those figures...)

Well, I guess that shows what happens when you invent figures! ;) Actually I would have expected the same thing myself, but alas it is not so. The C++ app is all C++, so there is little overhead, and its memory usage per copy is linear. The 20MB C# app can only run one copy, but a second C# app that I just started uses 8,000K (8MB), and each additional copy uses another 8MB. Its memory usage is also linear. Ten copies of this start of a C# app use 10 x 8MB = 80MB.
Aug 06 2003
"Matthew Wilson" <dmd synesis.com.au> writes:
That's not really what I was suggesting.

I'm saying that if a (single instance of a) C++ app had a footprint of 2MB, it's likely that an equivalent C# one would be 23MB, rather than 400MB.

I would have a *very* hard time swallowing the latter, and frankly .NET would not be viable if that were the case.


Aug 06 2003
Frank Wills <fdwills sandarh.com> writes:
Matthew Wilson wrote:
 That's not really what I was suggesting.
 
 I'm saying that if a (single instance of a) C++ app had a footprint of 2MB,
 it's likely that an
 equivalent C# one would be 23MB, rather than 400MB.

OK, I see what you are saying. Sure, I also don't think a 2MB C++ app would be 200MB if done in C#, but look at the figures I gave before. The start of an app I wrote in C# is 8MB (per running copy). A C# app with a moderate amount of functionality is 21MB. I wouldn't have guessed that a moderately featured C# app would be 21MB, so I'm not going to assume the overhead is fixed; I've already been surprised at the massive memory use so far. What I've come to expect for memory use doesn't seem to apply to .NET.
 I would have a *very* hard time swallowing the latter, and frankly .NET
 would not be viable if that were the case.
 

I bet that people who think that all good programs are written in assembler are already having a hard time swallowing .NET.
 
Aug 06 2003
"Matthew Wilson" <matthew stlsoft.org> writes:
 I would have a *very* hard time swallowing the latter, and frankly .NET
 would not be viable if that were the case.

I bet that people who think that all good programs are written in assembler are already having a hard time swallowing .NET.

Mate, people who think good programs are written in C or C++ are having that same hard time! Makes you appreciate why they're not bothering to support the .NET framework on old OSs (resident on systems with, say, 128MB or less of memory - IIRC, NT 4 is not supported!). :)
Aug 06 2003
Frank Wills <fdwills sandarh.com> writes:
Matthew Wilson wrote:
I would have a *very* hard time swallowing the latter, and frankly .NET
would not be viable if that were the case.

I bet that people who think that all good programs are written in assembler are already having a hard time swallowing .NET.

Mate, people who think good programs are written in C or C++ are having that same hard time! Makes you appreciate why they're not bothering supporting the .NET framework on old OSs (that are resident on systems with, say, 128MB or less memory - IIRC, NT 4 is not supported!). :)

NT 4.0 was supported in .NET 1.0, but they dropped it in .NET 1.1. MS claims that they didn't have the resources to support .NET on NT 4.0. Yeah, right. Like I believe that. I'm still using NT 4.0. I've purchased Win2K as a safety backup, but I'm doing all I can to move to *BSD and Linux. No way do I ever want to use anything from MS beyond Win2K.
Aug 06 2003
Benji Smith <dlanguage xxagg.com> writes:
The .NET framework runtime library is roughly 20MB, so I imagine that's why your
application is taking up so much memory during runtime. Your own app's code
probably occupies less than 1MB of memory.

The .NET runtime is a very, very fancy bit of engineering. It can do lots of
interesting things, like creating a set of permissions for dependent code. For
example, you could write an app with plugin support and limit the abilities of
any plugins (plugins may not read or write from disks, etc.) even if those
plugins are written by 3rd-party developers.

But, if you're not using very many of these cool capabilities of the .NET
framework, you're getting short-changed. Personally, I think the .NET framework
should be broken into lots of little pieces that can each be loaded when needed.
And, I suspect that that's exactly what will happen in the next five years or
so.

--Benji Smith

Aug 07 2003
Frank Wills <fdwills sandarh.com> writes:
Benji Smith wrote:
 The .NET framework runtime library is roughly 20MB, so I imagine that's why your
 application is taking up so much memory during runtime. Your own app's code
 probably occupies less than 1MB of memory.

One C# app that I wrote, which uses 21 MB of memory, has only 0.035 MB of code. Compiled, the exe is 0.057 MB. Another C# app that I had just started, which uses 8 MB per copy in memory, has 0.0007 MB of code (that's not a typo.) Compiled, the exe is 0.005 MB.
 The .NET runtime is a very, very fancy bit of engineering. It can do lots of
 interesting things, like creating a set of permissions for dependent code. For
 example, you could write an app with plugin support and limit the abilities of
 any plugins (plugins may not read or write from disks, etc.) even if those
 plugins are written by 3rd-party developers.

Yes, except for the VM (I dislike VMs), the huge resource cost, the distribution difficulties, and the MS product tie-in (it forces people to keep upgrading their OS), it is pretty slick (think honey pot for programmers).
 But, if you're not using very many of these cool capabilities of the .NET
 framework, you're getting short-changed. Personally, I think the .NET framework
 should be broken into lots of little pieces that can each be loaded when needed.
 And, I suspect that that's exactly what will happen in the next five years or so.

Some of that is done already. Apps tend to have a memory footprint relative to the Framework components used. A small app can be under 10MB. I'm sure they will improve .NET, but I doubt that, without some kind of serious market pressure, .NET will grow smaller. I've never seen MS make anything grow smaller. When NT 3.0 came out it could hardly run on the 486s that were available at the time. NT has only grown bigger (and changed names). I wonder what WinXP would be like on a 486 with 32 MB of RAM.
 
Aug 07 2003
John Reimer <jjreimer telus.net> writes:
Benji Smith wrote:
 The .NET framework runtime library is roughly 20MB, so I imagine that's why your
 application is taking up so much memory during runtime. Your own app's code
 probably occupies less than 1MB of memory.
 

That makes sense.
 But, if you're not using very many of these cool capabilities of the .NET
 framework, you're getting short-changed. Personally, I think the .NET framework
 should be broken into lots of little pieces that can each be loaded when needed.
 And, I suspect that that's exactly what will happen in the next five years or so.

Broken into lots of tiny pieces, I agree with. As to where it should be fed... well, I could offer some suggestions ;-). I just can't stand huge bloated technologies, no matter how masterful the engineering is. One technology that has been doing this sort of thing marvelously well for several years is Taos Intent. Mind you, it doesn't achieve quite the same goals as .NET. It's a very nifty cross-platform technology with a very small footprint. It operates on the principle of tiny "tools," code units that are loaded/translated as needed. Several language compilers have been ported to its OS layer. Tao seems to appeal much more to the old-school assembler/embedded programmer types. Still, it's making some headway. Anyone familiar with Tao?

Later,
John
Aug 07 2003
Frank Wills <fdwills sandarh.com> writes:
Something, isn't it? I'm glad Walter
is developing D.

John Reimer wrote:
 
 I was disappointed that MS went the route of using a VM. C# is
 really nice to use, but a full featured app that I had written in
 C++ has a memory footprint of about 175K. Just the start of a
 rewrite of that app in C# has a memory footprint more than 100
 times larger at 21,000K.

Excuse me?! You mean almost 20 MB? Woah! Who would want to use C# if the app ends up that big? I can hardly imagine people supporting and using such a technology! I had no idea... Later, John

Aug 06 2003
Ilya Minkov <midiclub 8ung.at> writes:
Frank Wills wrote:

 Excuse me?! You mean almost 20 MB? Woah! Who would want to use C# if 
 the app ends up that big?  I can hardly imagine people supporting and 
 using such a technology!

 I had no idea...


What's so wrong with it? And who says it has to stay like that? Personally, I would avoid MS libraries when using .NET, so that one can take advantage of alternative VMs, which will probably get better with time... and are not Windows-bound.

It may have something to do with loading strategy. The SUN Java VM - a real bloater - loads many parts of the library at once, while the IBM Java VM waits until they are actually needed. They both share the same library.

The large memory footprint can also be illusory in some cases - as when you load files memory-mapped but never read some parts of them.

-i.
Aug 07 2003
Frank Wills <fdwills sandarh.com> writes:
Ilya Minkov wrote:
 
 Excuse me?! You mean almost 20 MB? Woah! Who would want to use C# if 
 the app ends up that big?  I can hardly imagine people supporting and 
 using such a technology!

 I had no idea...


What's so wrong with it? And who says it has to stay like that?

It may not, or it may get worse. I don't have much confidence in MS going in the direction of requiring less hardware resources. I don't run WinXP, nor do I have access to the alpha/beta Longhorn, but I've read that with the new database file system a machine that runs Win2K or WinXP fairly well gets bogged down pretty badly running Longhorn.
 Personally, i would avoid MS libraries using .NET, so that one can take 
 advantage of alternative VMs which will probably be better with time... 
 and are not Windows-bound.

Do you like VMs? Just curious.
 It may have something to do with loading strategy. Like, SUN Java VM - a 
 real bloater - loads many parts of the library at once, while IBM Java 
 VM rather waits until they are actually needed. They both share the same 
 library.

Yeah. Someone could develop a .NET VM that might be a lot leaner and faster.
 The large memory footprint can also be a fake in some cases - like when 
 you load files as memory-mapped, but never read some parts of them.
 
 -i.
 

I think it is. I've tried to load the system down with lots of C# apps running at once, and even though the system Task Manager shows that they are each consuming the same large amount of memory, the machine still continues to run, whereas it would choke on the memory load if non-.NET apps were consuming that much memory.
Aug 07 2003
Ilya Minkov <midiclub 8ung.at> writes:
Frank Wills wrote:

 Do you like VMs? Just curious.

I *HATE* it when I direct my browser to some webpage... and then it locks up for half a minute. What's up? SUN Java loading! Thanks to the IBM VM, it has gotten much better now. :)

I think there is much future in VMs. I believe I posted some material about Tick C, a compiler which generates executables carrying a tiny VM with them, maybe 100k or so... What for? It instantiates templates at run-time. The good thing about this is that there are many values in programs which get settled in the beginning and stay the same during the whole program runtime. You cannot hardcode them as constants, although conceptually they are. That's what Tick C does: depending on what data you have, code may be compiled, inlined, and optimised at run-time. The optimiser in Tick C is really crappy, and would usually generate code almost half as fast as an optimising compiler. Nonetheless, when used to generate code to multiply a vector by a constant matrix, it would outscore GCC by a few times, and it was even faster than GCC when used as a GIMP plug-in for simple image processing. You see, that's where the future might go, at least as far as sound, video, and image processing are concerned, where you apply the same operation very often.

I myself have been interested in Structural Audio, and have been intending to write an efficient sound-processing VM. After evaluating all kinds of compilers, I had more or less an idea of what it should be like, technically. The Java VM in its current form was not even considered. But with time, I stumbled over .NET and the MONO project. .NET alone seems to get much more right than Java does, and IMO was an obvious step which someone *had* to do. The JIT compiler from MONO will be available not only internally, but is also developed to be easy to embed in your own projects. I think I even contributed some very minor JIT idea. The current one is about the level of Tick C; the next one should be significantly better. New VM-based languages pop right out of the floor! A decent-performance and very tiny demoscene-related (OpenGL+libSDL) VM appeared just 2 months ago, targeted at high-performance visual effects!
 Yeah. Someone could develop a .NET VM that might be a lot leaner, faster.

Hope so. And if development is as open and well-modularized as it is with MONO, research institutions would also be eager to use it to test their optimisation ideas... What do we get? Accumulated power!
 The large memory footprint can also be a fake in some cases - like 
 when you load files as memory-mapped, but never read some parts of them.


 I think it is. I've tried to load the system down with lots of C# apps
 running at once, and even though the system Task Manager shows that
 they are each consuming the same large amount of memory, the machine
 still continues to run, whereas it would choke on the memory load if
 non .NET apps were consuming that much memory.

This is interesting. -i.
Aug 07 2003
parent reply Frank Wills <fdwills sandarh.com> writes:
Ilya Minkov wrote:
 Frank Wills wrote:
 
 Do you like VMs? Just curious.

I *HATE* it when i direct my browser to some webpage... and then it locks up for half a minute. What's up? SUN Java loading! Thanks to the IBM VM, it got much better now. :) I think there is much future in VMs.

I believe i posted some material about Tick C - which is a compiler that generates executables carrying a tiny VM with them, maybe 100k or so... What for? It instantiates templates at run-time. The good thing about this is, there are many values in programs which get settled in the beginning and stay the same during the whole program runtime. You cannot hardcode them as constants - although conceptually they are. That's what Tick C does: depending on what data you have, it may be compiled, inlined, and optimised into the code. The optimiser in Tick C is really crappy, and would usually generate code almost half as fast as an optimising compiler. Nonetheless, when used to generate code to multiply a vector by a constant matrix, it would outscore GCC by a few times, and was even faster than GCC when used as a GIMP plug-in for simple image processing. You see, that's where the future might go - at least where sound, video, and image processing are concerned, where you apply the same operation very often.

Couldn't that kind of optimization be done without any kind of VM? Just do the optimization when the code loads, but don't add any kind of VM layer?
 I myself have been interested in Structural Audio, and have been 
 intending to write an efficient sound processing VM. After evaluating 
 all kinds of compilers, i had more or less an idea of what it should be 
 like, technically. Java VM in its current form was not even considered. 
 But with time, i stumbled over .NET, and the MONO project. .NET alone 
 seems to get much more right than Java does - and IMO was an obvious 
 step which someone *had* to do. The JIT-compiler from MONO will be 
 available not only internally, but is also developed to be easy to 
 embed in own projects. I think i even contributed some very minor JIT 
 idea. The current one is about the level of Tick C, the next one should 
 be significantly better.

That's pretty interesting. I found this article on Structural Audio related to the twin towers in NY. http://www.cyberclass.net/palmquist.htm
 
 New VM-based languages pop right out of the floor! A decent-performance 
 and very tiny Demoscene-related (OpenGL+libSDL) VM appeared just 2 
 months ago -- targeted at high-performance visual effects!
 
 Yeah. Someone could develop a .NET VM that might be a lot leaner, faster.

Hope so. And if development is so open and well-modularized as it is with MONO, research institutions would also be eager to use it to test their optimisation ideas... What do we get? Accumulated power!

Yes, .NET's VM is in my opinion a much better thing than what Java VMs tend to be. I wonder if .NET isn't more in the direction of what you are talking about.
 The large memory footprint can also be a fake in some cases - like 
 when you load files as memory-mapped, but never read some parts of them.


 I think it is. I've tried to load the system down with lots of C# apps
 running at once, and even though the system Task Manager shows that
 they are each consuming the same large amount of memory, the machine
 still continues to run, whereas it would choke on the memory load if
 non .NET apps were consuming that much memory.

This is interesting. -i.

Aug 07 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Frank Wills wrote:

 Couldn't that kind of optimization be done without any kind of
 VM? Just do the optimization when the code loads, but don't
 add any kind of VM layer?

It could, if you would exactly know what data is constant and what not, by patching. Even better, since you can profit from a powerful optimiser. However, if you take generic matrix multi
 That's pretty interesting. I found this article on Structural Audio
 related to the twin towers in NY.
 http://www.cyberclass.net/palmquist.htm

This is different. It's about de*struct*ion, as well as *struct*ure engineering vs. *audio* engineering. Besides, for such non-monolithic things (unlike planes and bridges), the frequencies are probably too low. And yet another thing: they were built to withstand wind and impact. And wind can be *really* hard through turbulence, and thus cause very powerful vibrations, which for some distinct wind speed may fall together with a resonance frequency. And yet, they couldn't collapse if the metal didn't melt. Not that i am an expert, but there are too many factors, be they economic, political, jurisdictional, and so on, that i could think they collapsed by accident. It can be proven that the buildings were not hit by the Boeing-sized planes. The hole was simply too small: it was as large as that from the 1-man plane which hit the Pirelli center in Italy a few weeks before. See more here:

http://www.serendipity.li/wot/psyopnews1.htm
http://www.serendipity.li/wot/wtc_ch2.htm
http://www.serendipity.li/wtc.htm

and ascending. This is not at all original; i found a similar analysis a few months after the fall on a German website dedicated to Mahatma Gandhi.

---

Structural Audio is a composition of all the information needed to create a sound from its source pieces. In the real world, you could describe a sound by packing a musical score, a few instruments, and musicians who play them with their special style into a huge box, and making them play on demand. :) In the computer world things are much easier. Since music is recorded using computers anyway, it is initially composed of a MIDI score, live wave recordings like vocals, and synthesis algorithms and data. By utilising this information, available at creation time, one can achieve high compression ratios compared to streaming formats like MP3 and OGG. The simplest examples of structural audio include MOD-like data formats, which contain a score and instrument wave data, but a simple pack of MIDI and a soundbank would also qualify. However, there's more to it. 
"Real" structural audio formats, like CSound and MPEG4-SA, allow for
 Yes, .NET's VM is in my opinion a much better thing than what
 Java VMs tend to be. I wonder if .NET isn't more in the direction
 of what you are talking about.

Not really. But since any VM supports reflection, any would theoretically do. Besides, most .NET VMs always compile and never interpret - it doesn't take much more time to compile without any optimisations. The MONO compiler shall be 2-stage: as soon as a function is called often enough, it is compiled with full optimisation. After a short while, the source/bytecode need not be held in memory any longer - either because a function is used rarely and need not be optimised, or because it's optimised to 100%. This compile-only behaviour is fortunate for any use with repetitious code, especially if full optimisation can be assumed at once. -i.
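That 2-stage scheme can be sketched with a function pointer standing in for the JIT-compiled code. The names and the threshold here are hypothetical, and the real MONO tiers recompile bytecode rather than swap between prebuilt C functions, but the tier-up mechanics are the same:

```c
#include <stdint.h>

/* Two implementations standing in for the two compiler tiers;
   a real VM would emit both at run time from the same bytecode. */
static int64_t sum_baseline(int64_t n)      /* cheap, unoptimised tier */
{
    int64_t s = 0;
    for (int64_t i = 1; i <= n; i++)
        s += i;
    return s;
}

static int64_t sum_optimised(int64_t n)     /* fully optimised tier */
{
    return n * (n + 1) / 2;                 /* closed form */
}

static int64_t (*sum_impl)(int64_t) = sum_baseline;
static int sum_calls = 0;

enum { HOT_THRESHOLD = 3 };  /* hypothetical tier-up point */

/* Call site: count invocations; once the function is "hot", swap in
   the optimised code. After that the baseline version (and, in a real
   VM, the bytecode) can be discarded. */
int64_t sum(int64_t n)
{
    if (sum_impl == sum_baseline && ++sum_calls >= HOT_THRESHOLD)
        sum_impl = sum_optimised;
    return sum_impl(n);
}
```

Every call returns the same value either way; only the cost changes, which is why the bytecode can be thrown away once a function is either fully optimised or clearly cold.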
Aug 07 2003
parent reply Frank Wills <fdwills sandarh.com> writes:
This is some pretty interesting reading, including
the links.

Ilya Minkov wrote:
 Frank Wills wrote:
 
 Couldn't that kind of optimization be done without any kind of
 VM? Just do the optimization when the code loads, but don't
 add any kind of VM layer?

It could, if you would exactly know what data is constant and what not, by patching. Even better, since you can profit from a powerful optimiser. However, if you take generic matrix multi

 
 That's pretty interesting. I found this article on Structural Audio
 related to the twin towers in NY.
 http://www.cyberclass.net/palmquist.htm

This is different. It's about de*struct*ion, as well as *struct*ure engineering vs. *audio* engineering. Besides, for such non-monolithic things (unlike planes and bridges), the frequencies are probably too low. And yet another thing: they were built to withstand wind and impact. And wind can be *really* hard through turbulence, and thus cause very powerful vibrations, which for some distinct wind speed may fall together with a resonance frequency. And yet, they couldn't collapse if the metal didn't melt. Not that i am an expert, but there are too many factors, be they economic, political, jurisdictional, and so on, that i could think they collapsed by accident. It can be proven that the buildings were not hit by the Boeing-sized planes. The hole was simply too small: it was as large as that from the 1-man plane which hit the Pirelli center in Italy a few weeks before. See more here:

http://www.serendipity.li/wot/psyopnews1.htm
http://www.serendipity.li/wot/wtc_ch2.htm
http://www.serendipity.li/wtc.htm

and ascending. This is not at all original; i found a similar analysis a few months after the fall on a German website dedicated to Mahatma Gandhi.

---

Structural Audio is a composition of all the information needed to create a sound from its source pieces. In the real world, you could describe a sound by packing a musical score, a few instruments, and musicians who play them with their special style into a huge box, and making them play on demand. :) In the computer world things are much easier. Since music is recorded using computers anyway, it is initially composed of a MIDI score, live wave recordings like vocals, and synthesis algorithms and data. By utilising this information, available at creation time, one can achieve high compression ratios compared to streaming formats like MP3 and OGG. The simplest examples of structural audio include MOD-like data formats, which contain a score and instrument wave data, but a simple pack of MIDI and a soundbank would also qualify. However, there's more to it. 
"Real" structural audio formats, like CSound and MPEG4-SA, allow for

 
 Yes, .NET's VM is in my opinion a much better thing than what
 Java VMs tend to be. I wonder if .NET isn't more in the direction
 of what you are talking about.

Not really. But since any VM supports reflection, any would theoretically do. Besides, most .NET VMs always compile and never interpret - it doesn't take much more time to compile without any optimisations. The MONO compiler shall be 2-stage: as soon as a function is called often enough, it is compiled with full optimisation. After a short while, the source/bytecode need not be held in memory any longer - either because a function is used rarely and need not be optimised, or because it's optimised to 100%. This compile-only behaviour is fortunate for any use with repetitious code, especially if full optimisation can be assumed at once. -i.

Aug 07 2003
parent Ilya Minkov <midiclub 8ung.at> writes:
Frank Wills wrote:

 This sentence got cut off.

No, it didn't get cut off -- you must know i am a broken robot. I write all parts of a mail at once, and i simply sent it off before writing it to the end. So here come the damaged paragraphs.

---

It could, if you would know exactly what data is constant and what not, by patching. Even better, since you can profit from a powerful optimiser. However, if you take a generic matrix multiplication routine, you can have undeterminable variable parts, or you can have simple coefficients, esp. 0, which simplify the expression by orders of magnitude. That's why it makes sense to compile and optimise at run-time.

---

However, there's more to it. "Real" structural audio formats, like CSound and MPEG4-SA, allow you to store code for instrument synthesis, mixdown and post-processing. This is a lot more flexible. A good thing about sound is that you generally have the whole score available at once, so you can predict when exactly you need what code, and whether there are distinct constant parameters or plug-in chains for which a specialised and inlined version can be compiled. This is an area for infinite tuning, and i argue that it may give much higher performance than current systems using precompiled plug-ins, like Logic Audio, Cubase VST, and the like. -i.
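The score-plus-synthesis-code idea is easy to see in a toy C sketch. Everything here is hypothetical and deliberately tiny: a score of notes plus a trivial square-wave "instrument", where real CSound or MPEG4-SA instruments are of course far richer.

```c
#include <stddef.h>
#include <stdint.h>

/* A "structural" representation: the score plus synthesis code,
   instead of the rendered samples themselves. */
typedef struct {
    double freq_hz;   /* pitch    */
    double dur_sec;   /* duration */
} Note;

enum { RATE = 8000 }; /* samples per second, kept small for the sketch */

/* Toy square-wave "instrument": render one note into 16-bit PCM.
   Returns the number of samples written (caller sizes the buffer). */
size_t render_note(Note n, int16_t *out)
{
    size_t count = (size_t)(n.dur_sec * RATE);
    for (size_t i = 0; i < count; i++) {
        /* which half-period of the square wave sample i falls in */
        size_t half = (size_t)(2.0 * n.freq_hz * (double)i / RATE);
        out[i] = (half % 2) ? (int16_t)3000 : (int16_t)-3000;
    }
    return count;
}

/* How many PCM samples a whole score expands into: the point is that
   sizeof(score) is tiny next to the wave data it generates. */
size_t score_samples(const Note *score, size_t nnotes)
{
    size_t total = 0;
    for (size_t i = 0; i < nnotes; i++)
        total += (size_t)(score[i].dur_sec * RATE);
    return total;
}
```

Three one-second notes are 48 bytes of score but 48,000 bytes of 16-bit PCM at this sample rate - a thousandfold difference before the instrument gets any more complicated than a square wave.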
Aug 08 2003
prev sibling parent reply Farmer <itsFarmer. freenet.de> writes:
Frank Wills <fdwills sandarh.com> wrote in 
news:bgsahk$21bb$1 digitaldaemon.com:

 I'd _like_ to have a fully supportive debugger if I'm writing a
 debugger. Otherwise it would be too much work and go too slow. That's
 what got me thinking about falling back on using C++ as a bootstrap
 project for a D version. Of course I would enjoy coding D more than
 C++.

Why not ask Walter to further extend the amount of generated debug data? AFAIK, DMD generates line numbers and type info for local and global variables. But it doesn't generate debug info for struct or class members, and it uses the inappropriate type '__int64' for D arrays. I think if DMD generated debug info for structs and classes, as DMC++ already does, debugging of D code would be *much* easier. Furthermore, D arrays should not be tagged as type __int64, but as a C struct that represents the implementation of D arrays, like

typedef struct { unsigned int length; void* ptr; } DArray;

secondary topic:
 I was disappointed that MS went the route of using a VM. [...]

But it seems that some clever people at MS left the door open to native compilation, as the "kernel profile" for C# does not require any advanced reflection support. But I haven't heard of a native compiler with a native runtime for C#, yet.
Aug 07 2003
parent reply Frank Wills <fdwills sandarh.com> writes:
Farmer wrote:
 Frank Wills <fdwills sandarh.com> wrote in 
 news:bgsahk$21bb$1 digitaldaemon.com:
 
 
I'd _like_ to have a fully supportive debugger if I'm writing a
debugger. Otherwise it would be too much work and go too slow. That's
what got me thinking about falling back on using C++ as a bootstrap
project for a D version. Of course I would enjoy coding D more than
C++.

Why not ask Walter to further extend the amount of generated debug data? AFAIK, DMD generates line numbers and type info for local and global variables. But it doesn't generate debug info for struct or class members, and it uses the inappropriate type '__int64' for D arrays. I think if DMD generated debug info for structs and classes, as DMC++ already does, debugging of D code would be *much* easier. Furthermore, D arrays should not be tagged as type __int64, but as a C struct that represents the implementation of D arrays, like

typedef struct { unsigned int length; void* ptr; } DArray;

That explains what I've been seeing when I debug an app in Visual Studio. Char arrays show up as a large int value. Walter, is this something you could do? What is the current state of debug info?
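To make the proposal concrete: the length/pointer pair such debug info would describe can be sketched in C, using the DArray layout from Farmer's post. darray_copy_chars is a hypothetical helper showing what a debugger could display for a char[] instead of one big integer.

```c
#include <stddef.h>
#include <string.h>

/* The debug-info shape Farmer proposes for a D dynamic array:
   a length/pointer pair rather than one opaque __int64. */
typedef struct {
    unsigned int length;  /* number of elements */
    void        *ptr;     /* -> first element   */
} DArray;

/* What a debugger could then do with a char[]: recover the characters
   instead of printing the raw pair as a large int. (Hypothetical
   helper; cap is the size of the caller's buffer, must be >= 1.) */
size_t darray_copy_chars(DArray a, char *out, size_t cap)
{
    size_t n = a.length < cap - 1 ? a.length : cap - 1;
    memcpy(out, a.ptr, n);
    out[n] = '\0';
    return n;
}
```

Note that D arrays, unlike C strings, carry no terminating NUL - the length field is authoritative, which is exactly why the debugger needs to know about both members of the pair.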
 
 
 secondary topic:
 
I was disappointed that MS went the route of using a VM. [...]

But it seems that some clever people at MS left the door open to native compilation, as the "kernel profile" for C# does not require any advanced reflection support. But I haven't heard of a native compiler with a native runtime for C#, yet.

There is a compiler that comes with the SDK. I've used it but not looked at it more than to see if there was any difference in load and execution speed.
Aug 07 2003
parent "Walter" <walter digitalmars.com> writes:
"Frank Wills" <fdwills sandarh.com> wrote in message
news:bgum1t$18tq$1 digitaldaemon.com...
 That explains what I've been seeing when I debug an app in
 Visual Studio. Char arrays show up as a large int value.
 Walter, is this something you could do? What is the current
 state of debug info?

Yes, I can fix that. I just haven't spent much of any time on the debug info.
Sep 13 2003
prev sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
I found a funny thing: a platform-independent debugger, based upon LCC. 
Maybe hack up an LCC-based backend? :)

http://www.cs.princeton.edu/software/lcc/cdb/

-i.
Aug 08 2003
parent Frank Wills <fdwills sandarh.com> writes:
Hey, thanks. I'll take a look at it.

Ilya Minkov wrote:
 I found a funny thing: a platform-independent debugger, based upon LCC. 
 Maybe hack up an LCC-based backend? :)
 
 http://www.cs.princeton.edu/software/lcc/cdb/
 
 -i.
 

Aug 08 2003