
D - [Helmets on]: "Language independence ..."?

reply "Matthew Wilson" <matthew stlsoft.org> writes:
Am ready to be fired at, so take your best shots.

My question is, has anyone seriously considered the notion of DMD producing
C-code? A kind of D-front, if you like.

Now I realise straight up that DMD can probably produce code straight to
binary that will likely be faster than a compiled intermediate C-form in a
non-trivial number of cases. So I'm not proposing that compile to machine
code be abandoned.

However, since there is currently a single compiler/linker that is usable
with DMD, the bottleneck could be alleviated by allowing any C compiler to
provide the back-end, and this could also help with porting to other
platforms (not just Linux, but Mac, VAX, Solaris, etc. etc.). It would also
allow debugging within one's accustomed compiler/environment, though of
course this would be C debugging not D. (However, since a lot of the bugs
being reported are in the generated code, this may not be a bad thing.)

Now I know the answer's going to be "LOTS", but how much work would it be to
give the compiler this extra mode, perhaps with a -ic (intermediate C) flag?
Does anyone think the benefits significant and, if so, worth the effort of
Walter doing so?

Matthew
Jul 23 2003
next sibling parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
There is currently ongoing work to get a D frontend for gcc for exactly 
this reason.  I don't know the status, but things seem very quiet over 
there - perhaps the work is stalled?

Matthew Wilson wrote:
 Am ready to be fired at, so take your best shots.
 
 My question is, has anyone seriously considered the notion of DMD producing
 C-code. A kind of D-front if you like.
 
 Now I realise straight up that DMD can probably produce code straight to
 binary that will likely be faster than a compiled intermediate C-form in a
 non-trivial number of cases. So I'm not proposing that compile to machine
 code be abandoned.
 
 However, since there is currently a single compiler/linker that is usable
 with DMD, the bottleneck could be alleviated by allowing any C compiler to
 provide the back-end, and this could also help with porting to other
 platforms (not just Linux, but Mac, VAX, Solaris, etc. etc.). It would also
 allow debugging within one's accustomed compiler/environment, though of
 course this would be C debugging not D. (However, since a lot of the bugs
 being reported are in the generated code, this may not be a bad thing.)
 
 Now I know the answer's going to be "LOTS", but how much work would it be to
 give the compiler this extra mode, perhaps with a -ic (intermediate C)? Does
 anyone think the benefits significant and, if so, worth the effort in Walter
 doing so.
 
 Matthew
 
 
Jul 23 2003
prev sibling next sibling parent reply "Walter" <walter digitalmars.com> writes:
I've thought many times about doing just that. But I think it's a lot more
work to generate C than it would be to integrate with gcc. There are also
some things that won't work - such as converting
the inline assembler to C! -Walter

"Matthew Wilson" <matthew stlsoft.org> wrote in message
news:bfnnvo$305l$1 digitaldaemon.com...
 Am ready to be fired at, so take your best shots.

 My question is, has anyone seriously considered the notion of DMD
 producing C-code. A kind of D-front if you like.

 Now I realise straight up that DMD can probably produce code straight to
 binary that will likely be faster than a compiled intermediate C-form in a
 non-trivial number of cases. So I'm not proposing that compile to machine
 code be abandoned.

 However, since there is currently a single compiler/linker that is usable
 with DMD, the bottleneck could be alleviated by allowing any C compiler to
 provide the back-end, and this could also help with porting to other
 platforms (not just Linux, but Mac, VAX, Solaris, etc. etc.). It would
 also allow debugging within one's accustomed compiler/environment, though
 of course this would be C debugging not D. (However, since a lot of the
 bugs being reported are in the generated code, this may not be a bad
 thing.)

 Now I know the answer's going to be "LOTS", but how much work would it be
 to give the compiler this extra mode, perhaps with a -ic (intermediate C)?
 Does anyone think the benefits significant and, if so, worth the effort in
 Walter doing so.

 Matthew
Jul 23 2003
next sibling parent reply "Matthew Wilson" <matthew stlsoft.org> writes:
I'd be more than happy to accept a version that didn't work with inline
assembler.

"Walter" <walter digitalmars.com> wrote in message
news:bfnuu7$4cp$1 digitaldaemon.com...
 I've thought many times about doing just that. But I think it's a lot more
 work to generate C than it would be to integrate with gcc. There are also
 some things that won't work - such as converting
 the inline assembler to C! -Walter

 "Matthew Wilson" <matthew stlsoft.org> wrote in message
 news:bfnnvo$305l$1 digitaldaemon.com...
 Am ready to be fired at, so take your best shots.

 My question is, has anyone seriously considered the notion of DMD
 producing C-code. A kind of D-front if you like.

 Now I realise straight up that DMD can probably produce code straight to
 binary that will likely be faster than a compiled intermediate C-form in a
 non-trivial number of cases. So I'm not proposing that compile to machine
 code be abandoned.

 However, since there is currently a single compiler/linker that is usable
 with DMD, the bottleneck could be alleviated by allowing any C compiler to
 provide the back-end, and this could also help with porting to other
 platforms (not just Linux, but Mac, VAX, Solaris, etc. etc.). It would
 also allow debugging within one's accustomed compiler/environment, though
 of course this would be C debugging not D. (However, since a lot of the
 bugs being reported are in the generated code, this may not be a bad
 thing.)

 Now I know the answer's going to be "LOTS", but how much work would it be
 to give the compiler this extra mode, perhaps with a -ic (intermediate C)?
 Does anyone think the benefits significant and, if so, worth the effort in
 Walter doing so.

 Matthew
Jul 24 2003
parent "Sean L. Palmer" <palmer.sean verizon.net> writes:
All the C compilers I know of have an inline assembler.  ;)

Just pass asm blocks straight thru to the C code and let the programmer deal
with them.

Sean

"Matthew Wilson" <matthew stlsoft.org> wrote in message
news:bfo0bm$69k$1 digitaldaemon.com...
 I'd be more than happy to accept a version that didn't work with inline
 assembler

 "Walter" <walter digitalmars.com> wrote in message
 news:bfnuu7$4cp$1 digitaldaemon.com...
 I've thought many times about doing just that. But I think it's a lot
 more work to generate C than it would be to integrate with gcc. There are
 also some things that won't work - such as converting the inline
 assembler to C! -Walter
Jul 24 2003
prev sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Walter wrote:
 I've thought many times about doing just that. But I think it's a lot more
 work to generate C than it would be to integrate with gcc. There are also
 some things that won't work - such as converting
 the inline assembler to C! -Walter
You can't say "it won't work"! It may, just not portably. There's no sense
sticking to ANSI C where it doesn't yield what you need. There would be a
need to:

* accommodate compiler-dependent extended syntaxes, e.g. SEH with most
Win32 compilers and Java-like exception handling with GCC. A preferable
solution would be an INI file, or even simply an #include of a file
#defining a number of macros which would sort these things away. The
problems of the #include approach are multi-line macros, in case one has
to #define something inside them, and the lack of flexibility.

* accommodate different inline assembly notations, e.g. a converter to
AT&T-style assembly and such. Maybe through plug-in scripts? It may also
be in the main code.

Besides, I believe there are quite a few people who know C and may want to
help with that :), while GCC internals are not something one wants to mess
with.

And another point: GCC is so slow that going through C would be faster on
most systems where other compilers exist.

-i.
Jul 24 2003
parent reply "Martin M. Pedersen" <martin moeller-pedersen.dk> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:bfon1k$q2o$1 digitaldaemon.com...
 Besides, i believe there are quite a few people who know C and may want
 to help with that :) , while GCC internals are not something one wants
 to mess with.
I'm such a person. I have always believed that it was the way to go, for
several reasons:

- It does not require expertise with GCC.
- Dependence on GCC could make the compiler break if GCC changes. dfront
would probably be a relatively small free-standing project.
- dfront could be ported to platforms where GCC is not available.
Cross-compilation (generation of C files, that is) could easily be done.
- dfront could be implemented in D, using DMD for its initial development.

It would be fun to do, and would also be a great proof of concept for D.

Regards,
Martin M. Pedersen
Jul 24 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Martin M. Pedersen" <martin moeller-pedersen.dk> wrote in message
news:bforpc$upb$1 digitaldaemon.com...
 "Ilya Minkov" <midiclub 8ung.at> wrote in message
 news:bfon1k$q2o$1 digitaldaemon.com...
 Besides, i believe there are quite a few people who know C and may want
 to help with that :) , while GCC internals are not something one wants
 to mess with.
 I'm such a person. I have always believed that it was the way to go, for
 several reasons:

 - It does not require expertise with GCC.
 - Dependence on GCC could make the compiler break if GCC changes. dfront
 would probably be a relatively small free-standing project.
 - dfront could be ported to platforms where GCC is not available.
 Cross-compilation (generation of C files, that is) could easily be done.
 - dfront could be implemented in D, using DMD for its initial development.

 It would be fun to do, and would also be a great proof of concept for D.
If someone wants to start up a dfront project, I'll help with advice and ideas.
Jul 24 2003
parent reply Roberto Mariottini <Roberto_member pathlink.com> writes:
In article <bfpfga$1hi2$1 digitaldaemon.com>, Walter says...
[...]
If someone wants to start up a dfront project, I'll help with advice and
ideas.
I have some questions, before asking myself whether I'll have enough time
to do it or not:

- Does the D frontend build a syntax tree?
- Are the currently distributed sources complete enough to do a dfront
project?
- What are, in your opinion, the main obstacles one can find in trying to
generate C source from D source? I guess there's not only the inline
assembler.

Thanks in advance.

Ciao.
Jul 25 2003
parent "Walter" <walter digitalmars.com> writes:
"Roberto Mariottini" <Roberto_member pathlink.com> wrote in message
news:bfr5ic$1kr$1 digitaldaemon.com...
 In article <bfpfga$1hi2$1 digitaldaemon.com>, Walter says...
 [...]
If someone wants to start up a dfront project, I'll help with advice and
ideas.
 I have some questions, before asking myself whether I'll have enough time
 to do it or not:

 - Does the D frontend build a syntax tree?

Yes.

 - Are the currently distributed sources complete enough to do a dfront
 project?

Yes.

 - What are, in your opinion, the main obstacles one can find in trying to
 generate C source from D source? I guess there's not only the inline
 assembler.

o Exception handling - probably have to make this work with setjmp/longjmp.
o Nested functions - can be done by making a struct of all the locals, and
  then passing a pointer to that.
o Inline assembly - probably not worth it.
o Arbitrary initialized data - a nuisance to make this work with C.
 Thanks in advance.

 Ciao.
Jul 26 2003
prev sibling next sibling parent reply "J. Daniel Smith" <J_Daniel_Smith HoTMaiL.com> writes:
Since the objective is not raw performance, C++ might be a better target
than C. While probably not quite as widespread on a multitude of
platforms, D->C++ is probably going to result in nicer generated code than
D->C, and hence easier debugging. Targeting C++ would also make it easier
to do D.NET, since you would "just" generate Managed C++.

   Dan

"Matthew Wilson" <matthew stlsoft.org> wrote in message
news:bfnnvo$305l$1 digitaldaemon.com...
 Am ready to be fired at, so take your best shots.

 My question is, has anyone seriously considered the notion of DMD
 producing C-code. A kind of D-front if you like.

 Now I realise straight up that DMD can probably produce code straight to
 binary that will likely be faster than a compiled intermediate C-form in a
 non-trivial number of cases. So I'm not proposing that compile to machine
 code be abandoned.

 However, since there is currently a single compiler/linker that is usable
 with DMD, the bottleneck could be alleviated by allowing any C compiler to
 provide the back-end, and this could also help with porting to other
 platforms (not just Linux, but Mac, VAX, Solaris, etc. etc.). It would
 also allow debugging within one's accustomed compiler/environment, though
 of course this would be C debugging not D. (However, since a lot of the
 bugs being reported are in the generated code, this may not be a bad
 thing.)

 Now I know the answer's going to be "LOTS", but how much work would it be
 to give the compiler this extra mode, perhaps with a -ic (intermediate C)?
 Does anyone think the benefits significant and, if so, worth the effort in
 Walter doing so.

 Matthew
Jul 25 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
There's still one problem that I see: C++ doesn't have the "finally"
construct. This, inline assembly, and other things would force the use of
configuration files and compiler extensions.

The code generated through C++ wouldn't be slower than the generated C
code. But you are definitely right: namespaces and classes in C++ are
flexible enough to represent any of D's data types, and at the same time
the respective debuggers are able to demangle names.

One could go through multiple stages:
  - Create D AST
  - Transform into C++ AST, then either
    - output C++ as text
    - output to a G++-based backend
    - possibly other back-ends, like Java bytecode or .NET
    - Transform into C AST, then either
      - output C as text
      - possibly output as bytecode for our own interpreter?

The transformation stage into C would not need to take care of all C++
constructs, so it probably won't become too complex. :)

What that leads me to think about again: GCJ reads in Java files or JVM 
bytecode, while generating code equivalent to C++ and linkable against 
C++. It is probably a hacked version of G++. If we adhere to their 
conventions, we could possibly achieve C++ - Java - D triple 
cross-language compatibility! Maybe the GCJ people can give us some clue 
how they did it?

-i.

J. Daniel Smith wrote:
 Since the objective is not raw performance, C++ might be a better target
 than C.  While probably not quite as wide-spread on a multitude of
 platforms, D->C++ is probably going to result in nicer generated code than
 D->C; hence easier debugging.  Targeting C++ would also make it easier to do
 D.NET since you would "just" generate Managed C++.
Jul 25 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:bfrefo$a1m$1 digitaldaemon.com...
 There's still one problem that i see: C++ doesn't have the "finally"
 construct. This and inline assembly and other things would force to use
 configuration files and compiler extensions.
The "finally" can be faked by making it a "destructor" for a dummy object.
But then you've got the problem of accessing the local variables in the
stack frame - the way to do that is to make all the locals members of a
struct, and the finally becomes the destructor for that struct.

And that leads to the problem of what to do with multiple finally's, each
with different code and each needing to access the stack frame.
Jul 26 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Walter wrote:
 But then you've got the problems of accessing the local variables in the
 stack frame - the way to do that is make all the locals a member of a
 struct, and the finally will be the destructor for that struct.
 
 And that leads to the problem of what to do with multiple finally's, each
 with different code and each needing to access the stack frame.
Since the struct trick is the same as for nested functions, I believe one
can factor out "try" blocks which have a "finally" clause into nested
functions.

BTW, does this struct trick work with functions nested in nested functions?

-i.
Jul 26 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:bfuq25$i5e$1 digitaldaemon.com...
 Walter wrote:
 But then you've got the problems of accessing the local variables in the
 stack frame - the way to do that is make all the locals a member of a
 struct, and the finally will be the destructor for that struct.

 And that leads to the problem of what to do with multiple finally's,
each
 with different code and each needing to access the stack frame.
 Since the struct trick is the same as for nested functions, I believe one
 can factor out "try" blocks which have a "finally" clause into nested
 functions.
It doesn't work because there can be only one destructor, and it cannot take arguments.
 BTW, does this struct trick work with functions nested in nested
 functions?

Yes (!). You'll also have to do some ugly things like copy the parameters
into the struct:

=========================
int foo(int a)
{
    int b;

    int bar(int c)
    {
        return a + b + c;
    }

    return b + bar(6);
}
=======================
struct foo_tmp
{
    int a;
    int b;
};

int foo(int a)
{
    struct foo_tmp tmp;

    tmp.a = a;
    tmp.b = 0;
    return tmp.b + foo_bar(&tmp, 6);
}

int foo_bar(struct foo_tmp *this, int c)
{
    return this->a + this->b + c;
}
==========================

Painfully ugly, but workable.
Jul 26 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Walter wrote:

 "Ilya Minkov" <midiclub 8ung.at> wrote in message
Since the struct trick is the same for the nested functions, i believe,
one can factor out "try" blocks which have "finally" clause into nested
functions.
 It doesn't work because there can be only one destructor, and it cannot take
 arguments.
But the constructor can. ;)

BTW, I'm not sure I follow what you would need multiple destructors and/or
arguments for, after you'd turned the whole block in question into a nested
function.
BTW, does this struct trick work with functions nested in nested functions?
Yes (!).
Ah, I get it: just make an extra struct for each new level, so that the
usual nested functions get one stack struct, the ones nested in them two,
and so on... Or use one struct, but have a field pointing to the
previous-level struct in all but the top-level one.
 You'll also have to do some ugly things like copy the parameters
 into the struct:
This is not much different from locals, which also have to be initialised
first.

-i.
Jul 26 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:bfv1q2$pee$1 digitaldaemon.com...
 Walter wrote:

 "Ilya Minkov" <midiclub 8ung.at> wrote in message
Since the struct trick is the same for the nested functions, i believe,
one can factor out "try" blocks which have "finally" clause into nested
functions.
 It doesn't work because there can be only one destructor, and it cannot
take
 arguments.
 But the constructor can. ;)

 BTW, I'm not sure I follow what you would need multiple destructors and/or
 arguments for, after you'd turned the whole block in question into a nested
 function.
If you have multiple finally blocks, with different code in each, how do you
translate that into *one* destructor?

I think that translating into C++ rather than C doesn't really get us where
we want to go.
Jul 26 2003
parent Ilya Minkov <midiclub 8ung.at> writes:
Walter wrote:
 If you have multiple finally blocks, with different code in each, how do you
 translate that into *one* destructor?
If I understand correctly, the try...finally blocks have to be nested in
each other. So why not translate them into nested functions? I know:
overhead. Not that the overhead would matter much, though - it is rather
rare to nest many of these.
 I think that translating into C++ rather than C doesn't really get us where
 we want to go.
Maybe. Just evaluating. OTOH, C++ compatibility may not be that bad.

-i.
Jul 26 2003
prev sibling parent reply Mark T <Mark_member pathlink.com> writes:
In article <bfnnvo$305l$1 digitaldaemon.com>, Matthew Wilson says...
Am ready to be fired at, so take your best shots.

My question is, has anyone seriously considered the notion of DMD producing
C-code. A kind of D-front if you like.
See http://smarteiffel.loria.fr/ for an example of an OO language (Eiffel)
to C "translator". It is GPL-licensed, and maybe the backend could be
reused or adapted.

Personally, I much prefer C to C++ as a target.
Jul 26 2003
parent Andy Friesen <andy ikagames.com> writes:
Mark T wrote:
 In article <bfnnvo$305l$1 digitaldaemon.com>, Matthew Wilson says...
 
Am ready to be fired at, so take your best shots.

My question is, has anyone seriously considered the notion of DMD producing
C-code. A kind of D-front if you like.
 See http://smarteiffel.loria.fr/ for an example of an OO language (Eiffel)
 to C "translator". It is GPL-licensed, and maybe the backend could be
 reused or adapted.

 Personally, I much prefer C to C++ as a target.
It seems to me that C would be better in the end, since it compiles faster
and on more platforms. C++ would be a lot easier to implement, though,
since most of what's in D is directly available in C++ (most notably
exceptions - doing them in C might be a bit of a pain).
Jul 26 2003