
D - What's the Current & Future Status of Function Pointers?

reply Russell Lewis <spamhole-2001-07-16 deming-os.org> writes:
I know that delegates are in.  C-style function pointers still work as 
well (in DLI, at least).  Is there any plan for delegate-style syntax of 
extern(C) function pointers?

The problem I'm facing is that I'm writing a parser for D.  It's pretty 
trivial to write a grammar rule for a function declaration:

function_declaration:
	type IDENT ( type IDENT , ... ) { statement ... }

It's clean, sensible, and easy to read.  But if I have to support 
old-syntax C function pointers, then things get REALLY REALLY ugly! 
Now, the type is spread out, partially in front of the IDENT and 
partially after it.  So the grammar gets really hard to read:

function_declaration:
	type IDENT ( func_decl_arg , ... ) { statement ... }

func_decl_arg:
	type IDENT
	type ( * IDENT ) ( func_decl_arg , ... )

However, if we could use something like the delegate syntax for 
EVERYTHING, then the complexity could be hidden inside my 'type' grammar.
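For concreteness, a minimal sketch of the spellings under discussion (exact syntax varies by compiler version; the last, commented-out line is the hypothetical delegate-style form being asked for, not something the compiler accepts):

    // C-style function pointer: the type wraps around the identifier
    extern (C) int (*cmp)(void* a, void* b);

    // delegate: the whole type reads left to right, identifier last
    int delegate(void* a, void* b) dg;

    // hypothetical: the same left-to-right style for an extern(C) function pointer
    // extern (C) int function(void* a, void* b) cmp2;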
Jan 22 2003
next sibling parent reply "Walter" <walter digitalmars.com> writes:
"Russell Lewis" <spamhole-2001-07-16 deming-os.org> wrote in message
news:3E2ED62E.9010304 deming-os.org...
 I know that delegates are in.  C-style function pointers still work as
 well (in DLI, at least).  Is there any plan for delegate-style syntax of
 extern(C) function pointers?
Daniel and I have talked about a unified syntax for delegates and function pointers, but so far it's just talk.
 The problem I'm facing is that I'm writing a parser for D.
Why not just use the free one I supply? That way you're assured it will work just like the D compiler.
Jan 25 2003
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Walter wrote:

 "Russell Lewis" <spamhole-2001-07-16 deming-os.org> wrote in message
 news:3E2ED62E.9010304 deming-os.org...
 I know that delegates are in.  C-style function pointers still work as
 well (in DLI, at least).  Is there any plan for delegate-style syntax of
 extern(C) function pointers?
Daniel and I have talked about a unified syntax for delegates and function pointers, but so far it's just talk.
 The problem I'm facing is that I'm writing a parser for D.
Why not just use the free one I supply? That way you're assured it will work just like the D compiler.
The parser project started a long time ago. As it turned out, I've written from the ground up a whole new automatic parser generator - like Bison, but substantially more powerful. Developing a D parser has been a way to test the new utility.

I have kind of fallen in love with the output of this new type of parser, so if I ever actually do any D development, I think my preference would be to use my parser rather than anybody else's.

Anyhow, whether I ever use my parser or not, I think the issue still stands for *anybody* who will be writing a parser for D.

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Jan 27 2003
next sibling parent reply Bill Cox <bill viasic.com> writes:
Hi, Russ.

How is your parser generator more powerful?  If it's significantly 
better than Bison, are you putting it into the open source community? 
I'm just interested in case it's something I should be using.  I 
currently do tons of Bison.

Thanks,
Bill Cox

Russ Lewis wrote:
 Walter wrote:
 
 
"Russell Lewis" <spamhole-2001-07-16 deming-os.org> wrote in message
news:3E2ED62E.9010304 deming-os.org...

I know that delegates are in.  C-style function pointers still work as
well (in DLI, at least).  Is there any plan for delegate-style syntax of
extern(C) function pointers?
Daniel and I have talked about a unified syntax for delegates and function pointers, but so far it's just talk.
The problem I'm facing is that I'm writing a parser for D.
Why not just use the free one I supply? That way you're assured it will work just like the D compiler.
The parser project started a long time ago. As it turned out, I've written from the ground up a whole new automatic parser generator - like Bison, but substantially more powerful. Developing a D parser has been a way to test the new utility.

I have kind of fallen in love with the output of this new type of parser, so if I ever actually do any D development, I think my preference would be to use my parser rather than anybody else's.

Anyhow, whether I ever use my parser or not, I think the issue still stands for *anybody* who will be writing a parser for D.

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Jan 27 2003
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Bill Cox wrote:

 Hi, Russ.

 How is your parser generator more powerful?  If it's significantly
 better than Bison, are you putting it into the open source community?
 I'm just interested in case it's something I should be using.  I
 currently do tons of Bison.
Unfortunately, the company I work for is really strict about intellectual property issues. I hope to convince them to release it to the open source community, but for the moment I can't talk too much. In a nutshell (without revealing company secrets), it:

1) is a GLR parser (though Bison just recently added this feature)
2) uses a syntax much like Bison, but FAR more expressive. Complex expressions that would have taken multiple rules in Bison can be expressed in a single line in my parser
3) parses the entire tree, returning a root object, rather than making you hand-code each and every rule
4) can handle (and return to you) multiple parsings, in case the language is ambiguous
5) outputs a parser written in D, so the parser's output is a tree of D objects

I call it "cebu" - the C Enabled Bison Upgrade. It currently can generate parsers that parse D, and I believe there is no reason it cannot parse C as well. As anybody who has used Bison knows, Bison cannot parse C without some massive hacks.

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
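To illustrate points 3 and 5 - a parser that hands back a root object, with the whole result being a tree of D objects - something along these lines (a generic sketch, not cebu's actual classes):

    // generic parse-tree node; the generated parser returns the root
    class ParseNode
    {
        char[]      name;       // rule or token name
        char[]      text;       // matched source text (leaves only)
        ParseNode[] children;   // sub-parses, in source order
    }

    // usage sketch: walk the returned tree instead of writing per-rule actions
    int countLeaves(ParseNode n)
    {
        if (n.children.length == 0)
            return 1;
        int total = 0;
        for (int i = 0; i < n.children.length; i++)
            total += countLeaves(n.children[i]);
        return total;
    }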
Jan 27 2003
parent Ilya Minkov <midiclub 8ung.at> writes:
Russ Lewis wrote:
     1) a GLR parser (though Bison just recently added this feature)
Wonderful. This solves a myriad of problems.
     2) uses a syntax much like Bison, but FAR more expressive.  Complex expressions that would have taken multiple rules in Bison can be expressed in a single line in my parser
Good.
     3) parses the entire tree, returning a root object, rather than making you
 hand-code each and every rule
Like the CocoM Earley parser (used in Dino).
     4) can handle (and return to you) multiple parsings, in case the language is ambiguous
CocoM Earley does this as well.
     5) outputted parser is in D, and thus the parser output is a tree of D
objects
Another similarity. Just that it's gonna be much faster than CocoM.
 I call it "cebu" - the C Enabled Bison Upgrade.  It currently can generate
parsers
 that parse D, and I believe that there is no reason it cannot parse C as well.
 As
 anybody who has used Bison knows, Bison cannot parse C without some massive
hacks.
Kewl :> One remaining difference from CocoM is that CocoM can read a grammar at run time: it's a parser, not a parser generator. That opens up another domain of use, at run time, which also requires that the parser's internal structures be built very quickly. -i.
Jan 27 2003
prev sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Hello.

My opinion corresponds to:
http://www.acm.org/crossroads/xrds7-5/bison.html

So I hope yours *is* better than Bison. What algorithm(s) is it based on?

I am going to write an easy-to-use parsing library for D, which would 
provide run-time extension. So it would not be a parser generator, but 
rather something like Dino's runtime Earley parser. In fact, it might 
become an Earley parser. I might simply adapt it from the Dino source, since 
it's GPL and written in good C.

The disadvantage is lower speed - that's what compiler compilers 
address. The distinct advantages of a run-time Earley parser are that no 
deep understanding of the algorithm is required, that run-time extension is 
possible, and that natural-language parsing becomes possible as well.
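Roughly the kind of run-time extensible interface meant here (names are made up for illustration; this is not Dino's actual API):

    // rules are plain data added at run time; the Earley parser consults
    // this table directly, so there is no separate generation step
    class Rule
    {
        char[]   lhs;    // nonterminal being defined
        char[][] rhs;    // sequence of symbols it expands to
    }

    class Grammar
    {
        Rule[] rules;

        void addRule(char[] lhs, char[][] rhs)
        {
            Rule r = new Rule;
            r.lhs = lhs;
            r.rhs = rhs;
            rules ~= r;
        }
    }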

The run-time performance of Dino's parser is about 30_000 lines of C code 
per second on a 500 MHz P6, which I consider usually enough. And it 
requires very little time to read in the grammar.  It seems to me that 
parsing speed is not that important, since GCC uses a very fast parser 
and yet is slow as hell - in fact the absolutely slowest compiler I've 
ever experienced. Overall design is of major importance.

-i.

Russ Lewis wrote:
 The parser project started a long time ago.  As it turned out, I've written from the ground up a whole new automatic parser generator - like Bison, but substantially more powerful.  Developing a D parser has been a way to test the new utility.
 
 I have kind of fallen in love with the output of this new type of parser, so if
 I ever actually do any D development, I think my preference would be to use my
 parser rather than anybody else's.
 
 Anyhow, whether I ever use my parser or not, I think the issue still stands for
 *anybody* who will be writing a parser for D.
Jan 27 2003
parent reply Garen Parham <garen_nospam_ wsu.edu> writes:
Ilya Minkov wrote:
...
 The run-time performance of a Dino's parser is about 30_000 C code lines 
 per second on a 500Mhz P6, which i consider usually enough. And it 
 requieres very little time to read in the grammar.  It seems to me that 
 parsing speed is not that important, since GCC uses a very fast parser, 
 and is yet slow as hell. In fact the absolutely slowest compiler I've 
 ever experienced. General design is of major importance.
GCC isn't poorly designed as far as I can tell; it is slow as hell though. Assuming my cursory profiling of gcc is right, the single most expensive thing gcc does is garbage collection. The next most expensive is parsing. Between them they make up so much of the compilation time that the backend seems irrelevant. I've been playing with 3.3 and 3.4 via CVS lately and follow some of the lists, and it looks like with a few gc tunables you can instantly squeeze out 25-40% more performance from it. Why they seem to have been neglected I have no idea. Maybe the gcc hackers are using super beefy hardware and hadn't started seriously looking at the problem until lately, when gcc 3.2.x became widely available and provided them with lots of complaints.

General "design", as I read here and usually see it used, tends to be restricted to considering long-term dominant characteristics. Have you ever seen how fast tinycc compiles C code? Lots of other compilers use the same kind of "algorithm", but so far as I can tell, the reason why it's so fast is because it parses everything in one pass.
Jan 30 2003
next sibling parent reply "Mike Wynn" <mike.wynn l8night.co.uk> writes:
Have I missed something here? Who cares how fast or slow the compiler is;
the important fact is whether it generates fast code.
On x86 the order is (from some table I saw online last year):
lcc (very poor)
bc++, dmc (sorry Walter, just going from figures I've seen), some gcc's - all about the same
egcs and newer gcc - a bit better
VC++ - generated code about twice the speed of lcc's and 10 to 25% faster than bcc's
Intel's plugin for VS - the top

The next point of concern to me is: can I write code that's readable and
know the compiler will optimise it fully, or do I have to write optimised C
to get the performance?

gcc suffers slightly from having such a range of backends. Unlike tinycc,
gcc generates an intermediate form of the code and passes that to the
backend, and I would expect its optimiser to use up a few cycles; I
believe it performs at least two optimiser phases, first on the intermediate code
(looking for loop invariants etc.) and then a simple peephole optimiser on the
generated code.
It also makes temporary files for transferring info between front end and backend,
afaik, and it is this write/read file access that will kill performance,
especially with files bigger than the file cache. If you are playing with
gcc, you might want to try using named pipes for the connection between
front end and backend on win32.

I just timed some gcc compiles. I have a 6Mb, 75-file GBA project;
compiling it 4 times (4 different configs)
takes 1 min 30 sec at -O3, 1 min 25 at -O1 and 1 min 20 with no optimisation.
The longest file is a 4Mb lookup table (int lut[16][0x8000]), which takes 12
seconds to compile and is only built into one version; that version takes about 35 to 40 seconds to
build.
This is on a Duron 800 with 512Mb RAM, UDMA66 disks and Win2K (and all manner of junk
running in the background).

I remember having a 2Mb Turbo Vision project that used to take over 30
minutes to compile on a P90.
That, I call slow; but gcc on current hardware (an 800 is not exactly fast these
days) I don't consider slow.
I'll have to try on a really slow machine (a Celeron 400).

gc can be a killer to performance if you have a HUGE number of objects to
walk. I made the mistake once of setting the Java heap size bigger than my
physical memory before running javadoc over some source (this was at 6pm); I
went out, stayed at a friend's overnight, got home and it was still parsing
the files - every gc cycle was causing swapping. I think it took about 46
hours in the end; later I set the heap size just below the available memory, and it
took 4 hours instead :)

Like OO and templating, gc is just another double-edged sword the programmer
has to learn to work with.

Mike.

"Garen Parham" <garen_nospam_ wsu.edu> wrote in message
news:b1b7na$65g$1 digitaldaemon.com...
 Ilya Minkov wrote:
 ...
 The run-time performance of a Dino's parser is about 30_000 C code lines
 per second on a 500Mhz P6, which i consider usually enough. And it
 requieres very little time to read in the grammar.  It seems to me that
 parsing speed is not that important, since GCC uses a very fast parser,
 and is yet slow as hell. In fact the absolutely slowest compiler I've
 ever experienced. General design is of major importance.
GCC isn't poorly designed as far as I can tell; it is slow as hell though. Assuming my cursory profiling of gcc is right, the single most expensive thing gcc does is garbage collection. The next most is parsing. Both of them make up the so much of compilation time that the backend seems irrelevent. I've been playing with 3.3 and 3.4 via CVS lately and follow some of the lists, and it looks like with a few gc tunables you can instantly squeeze out 25-40% more performance from it. Why they seem to have been neglected I have no idea. Maybe the gcc
hackers
 are using super beefy hardware and haven't started seriously looking at
the
 problem until lately when gcc 3.2.x was widely available to provide them
 with lots of complaints.

 General "design" as I read here and usually see tends to be restricted to
 considering long-term dominant characteristics.  Have you ever seen how
 fast tinycc compiles C code?  Lots of other compilers use the same kind of
 "algorithm" but so far as I can tell, the reason why its so fast is
because
 it parses everything in one pass.
Jan 30 2003
next sibling parent reply Garen Parham <garen_nospam_ wsu.edu> writes:
Mike Wynn wrote:

 Have I missed something here, but who cares how fast/slow the compiler is
 the important fact is does it generate fast code
 in an x86 the order is, (from some table I saw online last year)
 lcc (very poor)
 bc++, dmc (sorry walter, just going from figures I've seen), some gcc's are
 all about the same
 egcs and newer gcc a bit better
 VC++ generated code also twice the speed of lcc and 10 to 25% faster than
 bcc
 Intel's plugin for VS the top
Code generation is more important, but compile-time performance is very important too. When testing huge source trees it can mean a difference of days of time lost, and all the waiting during development adds up real fast as well. I use icc 7.0 regularly; it has -O2 on by default and is still 100-200% faster than gcc/g++ with no optimization, so I don't think it's slow at all. It also uses the best C++ front end IMO and generates superior error messages.
 the next point of consern to me is; can I write code that's readable and
 know the compiler will optimise it fully, or do I have to write optimised C
 to get the performance.
 
 gcc suffers slightly from having such a range of backends, unlike tinycc,
 gcc generate an intermediate form of the code and passes that to the
 backend, and I would expect that its optimiser uses up a few cycles, I
 believe it performs at least two optimiser phases, first on the code
 (looking for loop invariants etc) then a simple peep hole optimiser on the
 generated code.
There are lots of optimization passes that can be enabled, but the total time they take is minuscule IME.
 and it makes tempary files for transfering info between front and backend
 afaik and it is this write read file access that will kill performance
 especially with files bigger than the file cache. if you are playing with
 gcc, you might want to try using named pipes for the connection between
 front end and backend on win32.
Using the -pipe flag won't generate temporaries.
 
 I remember having a 2Mb Turbo Vision project that used to take over 30
 minutes to compile on a P90
 that, I call slow, but gcc, on current hardware (800 not exactly fast these
 days) I don't consider slow.
 I'll have to try on a realy slow machine (celeron 400)
 
When I first set up my environment to use tcc instead, I hit F8 to compile. It was so fast I just sat there wondering why nothing had happened. I didn't realize it had compiled already!
 
 like oo and templating, gc is just another double edged sword the programmer
 has to learn to work with.
 
I don't follow that one.
Jan 30 2003
parent "Mike Wynn" <mike.wynn l8night.co.uk> writes:
 like oo and templating, gc is just another double edged sword the
programmer
 has to learn to work with.
I don't follow that one.
People were complaining that gcc is slow because it has gc (garbage collection). If used properly, gc can actually make things faster: you avoid all those copy constructors, and the code to keep track of live objects, and you can build complex meshes of objects without worrying about who 'owns' what. On the down side, the memory footprint can be bigger (you have to wait for the gc to run before you get your memory back). It all depends on the type of app you are writing. What can I say - it can be great if used in the right place, and a pain if used where it should not be. Just like void*, OO, templates, inner classes, nested classes, closures, inline asm etc. etc., they all have their uses, and the more you use them the more you know when it's right to do X and when X is going to bite back when you're not looking.
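For example, with a collector in place you can wire objects into an arbitrarily tangled mesh and never write any ownership bookkeeping - a small D sketch:

    // nodes reference each other freely; nobody "owns" anybody
    class Node
    {
        Node[] neighbours;
    }

    void link(Node a, Node b)
    {
        a.neighbours ~= b;
        b.neighbours ~= a;   // cycles are fine; the collector reclaims the
    }                        // whole mesh once nothing reaches it any more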
Jan 30 2003
prev sibling parent Ilya Minkov <midiclub 8ung.at> writes:
Mike Wynn wrote:
 I remember having a 2Mb Turbo Vision project that used to take over 30
 minutes to compile on a P90
 that, I call slow, but gcc, on current hardware (800 not exactly fast these
 days) I don't consider slow.
 I'll have to try on a realy slow machine (celeron 400)
My main development computer is a notebook, Pentium MMX 233 MHz, 64 MB, Win98, which is lightweight enough that I often carry it with me to the university. Surprisingly enough, my math professor has a similar notebook, although he's not on a limited budget like I am.
 gc can be a killer to performance if you have HUGE amounts of object to
 walk, I made the mistake once of setting the java heap size bigger than my
 physical memory before running javadoc over some source, (this was at 6pm) I
 went out, stayed at a friends over night, got home and it was still parsing
 the files, every gc cycle was causing swapping, I think it took about 46
 hour in the end, later I set the heap size just below the available mem, it
 took 4 hours instead :)
Probably because it did a re-scan every time it hit the memory limit, and/or went into swapping. As long as it doesn't scan often, there should be no significant performance loss.
 like oo and templating, gc is just another double edged sword the programmer
 has to learn to work with.
Sure. But just as OO and templating are very useful (even if not for all tasks), GC is as well. -i.
Jan 30 2003
prev sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Garen Parham wrote:
 GCC isn't poorly designed as far as I can tell; it is slow as hell though. 
 Assuming my cursory profiling of gcc is right, the single most expensive
 thing gcc does is garbage collection.  The next most is parsing.  Both of
 them make up the so much of compilation time that the backend seems
 irrelevent.  I've been playing with 3.3 and 3.4 via CVS lately and follow
 some of the lists, and it looks like with a few gc tunables you can
 instantly squeeze out 25-40% more performance from it. 
Does it do GC? Then why does it swap like hell on my 64 MB notebook running lightweight Win98? I've seen its allocated virtual memory constantly grow. It looks like the GC is plugged in where it doesn't do much - perhaps the data organisation isn't GC-friendly?

Besides, if it's the Boehm GC, it shouldn't be a significant performance loss, at least if it doesn't run the whole time. But yes, it would run the whole time if the system is forced to swap. :/
 Why they seem to have been neglected I have no idea.  Maybe the gcc hackers
 are using super beefy hardware and haven't started seriously looking at the
 problem until lately when gcc 3.2.x was widely available to provide them
 with lots of complaints.
:/
 General "design" as I read here and usually see tends to be restricted to
 considering long-term dominant characteristics.  Have you ever seen how
 fast tinycc compiles C code?  Lots of other compilers use the same kind of
 "algorithm" but so far as I can tell, the reason why its so fast is because
 it parses everything in one pass.
"It uses multiple simple short passes", it says in tinycc docs. And it also says that the only optimisations made are constant wrapping and replacements within single instructions (MUL to shift, ADD to INC, and so on), i.e. at generation. Unlike GCC or even LCC, it doesn't have means to edit generated code in any manner. It doesn't store an IR, i guess. And of course, it uses no intermediate assembly language file. I have a huge number of documents on my HD, describing different back-end generators for LCC. The major topic is rewriting the IR tree, selecting optimal choices guided by system-specific instruction-costs. I haven't had time to read them though and i won't for a short while. And yet, LCC is reasinably fast. LCC-Win32 gains additional performance because it doesn't save ASM files like original LCC and GCC do, but feeds it to internal assembler. But the assembly is still text, which is IMO simply stupid. It could be some uniform-sized binary data, which is easy to analyse and can be converted to real machine code with only a few shifts. LCC-Win32 adds a peephole optimizer to LCC, which tagges each text assembly instruction with a simple binary "description", and then does a simple pattern-search with replace between labels, using the tags as a primary guidance and also parsing single instructions when needed. Due to tags, the optimisation phase is very fast, and seems to add about 1/5 to compilation time. Simply imagine what if it had to parse ASM over and over. And GCC performance is very low as well with optimizations turned off. The LCC-Win32 author claims that a small number of simple optimisations leads to about 90% of GCC 2.95 code performance on P6 class machines, so it doesn't seem much like a speed-quality tradeoff, rather some deficiency in GCC. Avoiding the assembly phase is actually very simple, VCODE solves it in a following ANSI C -compliant manner: a number of preprocessor macros are written, one for each opcode, which generates the corresponding binary instruction using a couple of ANDs, ORs and shifts, and pushes it onto a kind of software-stack. Then, these only need to be placed instead of generating assembly text. Well, they obviously cannot be stored in an IR like text can, but some intermediate solution is imaginable. VCODE's IR storage is ICODE, just that these are made for runtime code generation, and represent a generalized RISC command set. But "back-ends" generating all kinds of machine code out of them exist. -i.
Jan 30 2003
parent reply Garen Parham <garen_nospam_ wsu.edu> writes:
Ilya Minkov wrote:

 
 Does it do GC? Then why does it swap like hell on my 64mb notebook 
 running lightweight win98? I've seen my allocated virtual memory 
 constantly grow. It looks like it's plugged in where it doesn't do much? 
 Perhaps non-GC-friendly data organisation?
 
 Besides, if it's boehm GC, it shouldn't be a significant performance 
 loss. At least if it doesn't run the whole time. But yes, it would run 
 the whole time if a system is forced to swap. :/
 
Yeah, GCC uses the Boehm GC. C and C++ supposedly aren't very amenable to being GC'd, but I could hardly think it would amount to as much slowness as GCC shows.
 "It uses multiple simple short passes", it says in tinycc docs. And it 
 also says that the only optimisations made are constant wrapping and 
 replacements within single instructions (MUL to shift, ADD to INC, and 
 so on), i.e. at generation. Unlike GCC or even LCC, it doesn't have 
 means to edit generated code in any manner. It doesn't store an IR, i 
 guess. And of course, it uses no intermediate assembly language file.
Yeah, it hardly does anything at all for optimization. But given that other compilers don't come even close to it without any optimization settings turned on, it seems they could do way better.
 I have a huge number of documents on my HD, describing different 
 back-end generators for LCC. The major topic is rewriting the IR tree, 
 selecting optimal choices guided by system-specific instruction-costs. I 
 haven't had time to read them though and i won't for a short while.
 
... I've heard LCC is a good compiler to study, but I haven't read or used it. I have done some cursory browsing, and it and the Zephyr/NCI projects seem pretty cool but look like they're dead.
Feb 01 2003
parent Ilya Minkov <midiclub 8ung.at> writes:
Garen Parham wrote:
 Yeah it doesn't do hardly anything at all for optimization.  But with other
 compilers not getting even close without any optimization settings turned
 on it seems they can do way better.
Eliminating intermediate structures speeds up compilation immensely, but also eliminates the possibility of optimisation. Optimising compilers still need to build all their complicated intermediates, even if they're not optimising at the moment.
 I've heard LCC was a good compiler to study but haven't read/used it.
 Have done some cursory browsing and it and the Zephyr/NCI projects
 seem pretty cool but look like they're dead.
And they're loose frameworks, not directly usable on x86. I bet you're not aware of this one: http://cocom.sourceforge.net/ (a Russian compiler/interpreter infrastructure). LCC works on x86, but it comes from Microsoft Research; the license is friendly to scientific/non-profit uses of it, though. I bet the .NET internals are similar to it. -i.
Feb 01 2003
prev sibling parent reply Burton Radons <loth users.sourceforge.net> writes:
Russell Lewis wrote:
 I know that delegates are in.  C-style function pointers still work as 
 well (in DLI, at least).  Is there any plan for delegate-style syntax of 
 extern(C) function pointers?
I agree, the current function pointer syntax is weird; nontrivial usages of it are nearly incomprehensible. How about:

    type function ( declaration , ... ) IDENT;

I don't like using "function", but it's the only name I can think of.

Maybe we should get rid of function pointer types altogether. They're in the same fix as wchar: a language interface type that is badly supported and atrophying. You'll still be able to get the address of a function, but it'll be as a void pointer. The delegate of a function would be a minifunction that wraps the call properly:

    popl %eax         // Put the return EIP in EAX
    movl %eax, (%esp) // Cover the null "this" pointer
    call function     // Execute the real function
    jmp (%esp)        // Jump to the caller

Because the caller cleans up the stack in extern (D), we can't just substitute a "jmp function" in there and skip the last instruction; if we could, this could just be a couple of bytes right before the real function and not have a jmp at all. Ironically, it would make calling class delegates faster than function delegates, but it wouldn't affect normal execution.
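Instantiating the proposed form with a concrete signature (hypothetical syntax, shown next to the spellings that exist today):

    // proposed: whole type first, name last, mirroring the delegate form
    int function(char[] name, int count) callback;

    // today's C-style equivalent
    int (*callback2)(char[] name, int count);

    // the delegate declaration it is modelled on
    int delegate(char[] name, int count) dg;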
Jan 27 2003
next sibling parent reply Burton Radons <loth users.sourceforge.net> writes:
Burton Radons wrote:
 The delegate of a function would be a minifunction that wraps the call 
 properly:
 
     popl %eax // Put the return EIP in EAX
     movl %eax, (%esp) // Cover the null "this" pointer
     call function // Execute the real function
     jmp (%esp) // Jump to the caller
Wait, that's not right; the real function will still have four bytes too many on the stack, thanks to the call. Uh... I can't see how it can be done cheaply. We need the stack space, and we need to correct the stack before returning. That requires eight bytes, and we only have four to play in.

If we put the this pointer at the end of the arguments this wouldn't be a problem. But that makes COM interfacing impossible.

The only solution is to move the arguments down four bytes, stick the real return EIP after the arguments, and then correct later. This won't allow delegates for variadic functions, and of course it'll spike the speed hit considerably. Something like:

    movl (%esp), %eax             // Put the return EIP in EAX
    movl %esp, %esi               // Source part of the data move
    addl $4, %esi                 // Move from one cell up
    movl %esp, %edi               // Destination part of the data move
    movl $argumentSize, %ecx      // Number of bytes to move
    rep movsb
    movl %eax, argumentSize(%esp) // Stuff the return EIP
    call function                 // Execute the real function
    jmp argumentSize(%esp)        // Jump to the caller

Better than compiling two versions of every function.
Jan 27 2003
parent Scott Wood <scott buserror.net> writes:
Burton Radons <loth users.sourceforge.net> wrote:
 If we put the this pointer at the end of the arguments this wouldn't be 
 a problem.
Oops, I missed this... :-(
  But that makes COM interfacing impossible.
Though, the spec says that COM functions already need to use extern (Windows), so a change specific to the D ABI shouldn't affect them.

Of course, that raises the issue of what to do when extern (Windows) functions (or those of any other ABI that may be supported) are placed in a delegate, but it's better than having to apply the workaround to all functions.

-Scott
Jan 27 2003
prev sibling next sibling parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Burton Radons wrote:

 Maybe we should get rid of function pointer types altogether.  They're
 in the same fix as wchar; a language interface type that is badly
 supported and atrophying.  You'll still be able to get the address of a
 function, but it'll be as a void pointer.
You could do that, except that we would need some syntax for interfacing with C code that requires function pointers.

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
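A classic case of such C code is the C library's qsort, whose comparison argument is a function pointer; any D binding has to spell that parameter's type somehow. A sketch, using the delegate-style spelling discussed in this thread for the pointer type:

    // C prototype being bound:
    //   void qsort(void *base, size_t nmemb, size_t size,
    //              int (*compar)(const void *, const void *));
    extern (C) void qsort(void* base, size_t nmemb, size_t size,
                          int function(void*, void*) compar);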
Jan 27 2003
parent Burton Radons <loth users.sourceforge.net> writes:
Russ Lewis wrote:
 Burton Radons wrote:
 
 
Maybe we should get rid of function pointer types altogether.  They're
in the same fix as wchar; a language interface type that is badly
supported and atrophying.  You'll still be able to get the address of a
function, but it'll be as a void pointer.
You could do that, except that we would need some syntax for interfacing with C code that requires function pointers.
Yeah, I think removing them wouldn't work; any method I can think of for calling them would be too asstastic, and unlike bitfields, there are going to be more function pointers in the future.

Once delegates can be taken from a function, the pressure to have parallel APIs for both delegates and function pointers will be removed. You could even cast a function pointer to a delegate and vice versa with a little dynamic machine code generation. Hm.
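Short of generating machine code at run time, a library-level bridge is also conceivable: wrap the pointer in a small object and take a delegate to its call method (a sketch only, spelling the pointer type with the "function" form proposed above; not the cast-based approach described here):

    // wrap a bare function pointer so it can be passed wherever a
    // delegate is expected
    class FnWrapper
    {
        int function(int) fp;

        this(int function(int) f) { fp = f; }

        int call(int x) { return fp(x); }
    }

    // usage: &w.call yields a delegate bound to w
    //   FnWrapper w = new FnWrapper(&someFn);
    //   int delegate(int) dg = &w.call;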
Jan 27 2003
prev sibling parent Scott Wood <scott buserror.net> writes:
Burton Radons <loth users.sourceforge.net> wrote:
 The delegate of a function would be a minifunction that wraps the call 
 properly:
 
      popl %eax // Put the return EIP in EAX
      movl %eax, (%esp) // Cover the null "this" pointer
      call function // Execute the real function
      jmp (%esp) // Jump to the caller
 
 Because the caller cleans up the stack in extern (D), we can't just 
 substitute a "jmp function" in there and skip the last instruction; if 
 we could, this could just be a couple bytes right before the real 
 function and not have a jmp at all.
What about modifying the ABI so that "this" is the last argument, rather than the first? Then, if it's not needed, it sits harmlessly on the stack like any other local variable of the caller.

For ISAs with more registers, it may be better to dedicate an argument register to holding "this", so that the change doesn't incur a performance penalty by moving "this" from a register to the stack when the supply of argument registers is exhausted. Of course, this assumes that non-static methods are more frequent than static methods and plain functions, which would have one less argument register...

-Scott
Jan 27 2003