
digitalmars.D - Compiler as dll

reply bearophile <bearophileHUGS lycos.com> writes:



While writing a genetic programming program (and in some other situations) I
find it useful to have an eval() function, like in Lisp/Scheme and most
scripting languages.

I think such functionality could be added to DMD and LDC too:
- 99% of the compiler can be moved into a DLL (or a shared dynamic lib of some
kind); the DMD/LDC executable then becomes very small: it loads the dynamic
lib and just has to manage the I/O from/to disk and the command line
arguments, and little else.
- By default such dynamic lib isn't loaded by D programs, so the size of D
programs is unchanged (and they don't need such dll to run).
- A module can be added to the std lib (Tango and Phobos) that loads this dll
and offers a function like eval() that accepts a string of code as input (or an
AST, if you want) and compiles/runs it.

Bye,
bearophile
Jan 27 2009
next sibling parent Christopher Wright <dhasenan gmail.com> writes:
bearophile wrote:


 
 While writing a genetic programming program (and in some other situations) I
find it useful to have an eval() function, like in Lisp/Scheme and most
scripting languages.
 
 I think such functionality could be added to DMD and LDC too:
 - 99% of the compiler can be moved into a DLL (or a shared dynamic lib of some
kind); the DMD/LDC executable then becomes very small: it loads the dynamic
lib and just has to manage the I/O from/to disk and the command line
arguments, and little else.
 - By default such dynamic lib isn't loaded by D programs, so the size of D
programs is unchanged (and they don't need such dll to run).
 - A module can be added to the std lib (Tango and Phobos) that loads this dll
and offers a function like eval() that accepts a string of code as input (or an
AST, if you want) and compiles/runs it.
I think Walter would take Oren Eini's approach to this: send me a patch for it.
 Bye,
 bearophile
Jan 27 2009
prev sibling next sibling parent "Saaa" <empty needmail.com> writes:
Where and how do you use genetic programming ?? (just interested : )





 While writing a genetic programming program (and in some other situations) 
 I find it useful to have an eval() function, like in Lisp/Scheme and most 
 scripting languages.

 I think such functionality could be added to DMD and LDC too:
 - 99% of the compiler can be moved into a DLL (or a shared dynamic lib of
 some kind); the DMD/LDC executable then becomes very small: it loads the
 dynamic lib and just has to manage the I/O from/to disk and the command
 line arguments, and little else.
 - By default such dynamic lib isn't loaded by D programs, so the size of D 
 programs is unchanged (and they don't need such dll to run).
 - A module can be added to the std lib (Tango and Phobos) that loads this
 dll and offers a function like eval() that accepts a string of code as
 input (or an AST, if you want) and compiles/runs it.

 Bye,
 bearophile 
Jan 27 2009
prev sibling next sibling parent reply Sandeep Kakarlapudi <sandeep.iitkgpspammenot gmail.com> writes:
I have been wishing for something similar though for different reasons. Often I
have code (in C++) that is executed for a large amount of data like in stream
programming and becomes a major hotspot for the application. Think software
graphics pipelines. Further, this code has lots of branches, and since it's
executed over many elements they are essentially branches in inner loops. The
values over which they branch are unknown until runtime, and the number of
combinations of the values can be very high. Here are the basic attempts at
solving this that I'm aware of, and the problems with each:

1) Programmer codes for each combination of conditions - Takes a lot of time
and is hard to maintain. 

2) Use metaprogramming to generate all combinations at compile time. I usually
go for this using C++ templates. This can bloat the binary size very quickly
when the number of values is large.

3) Generate the code at runtime. Often the values change only rarely, so
generating the code at runtime would help. But here the programmer typically
has to work at the level of machine code or bytecode.

4) With the philosophy of Life Long Program Optimization, the code is
monitored/traced and automatically optimized. Even if such a language targets
some intermediate bytecode, there can be cases where it might significantly
outperform native code by using this approach. This might not be suitable for a
language that targets native code, though, and it is not suited for systems
programming.

5) Perhaps with some support in the language/library, the application can
invoke the optimizer and give it the known values, and the optimizer can create
an optimized version of a function for those values. Is this feasible?
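For concreteness, approach 2) can be sketched in C++. The kernel, its two flags, and the dispatch table below are all invented for illustration: each bool template parameter becomes a compile-time constant, so the per-element tests disappear, at the cost of 2^N instantiations.

```cpp
#include <cassert>
#include <vector>

// Hypothetical inner-loop kernel: the two branch conditions are lifted
// into compile-time template parameters, so the compiler emits one
// specialized, branch-free body per combination of flags.
template <bool Scale, bool Clamp>
int kernel(const std::vector<int>& data) {
    int sum = 0;
    for (int v : data) {
        if (Scale) v *= 2;           // folded away when Scale is false
        if (Clamp && v > 10) v = 10; // folded away when Clamp is false
        sum += v;
    }
    return sum;
}

// Runtime dispatch: pick the pre-instantiated specialization once,
// outside the hot loop. With N boolean flags this instantiates 2^N
// kernels, which is exactly the code-bloat cost mentioned above.
int run(const std::vector<int>& data, bool scale, bool clamp) {
    using Fn = int (*)(const std::vector<int>&);
    static const Fn table[2][2] = {
        {kernel<false, false>, kernel<false, true>},
        {kernel<true, false>, kernel<true, true>},
    };
    return table[scale][clamp](data);
}
```

The branch values are tested exactly once per call to run(), not once per element.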

In summary, what I'm asking for is provision for the programmer to direct code
specialization at runtime. This becomes a subset of bearophile's request so I
would also like to see a more generic and standard(!) way of runtime compiling
/ optimization of high level code. 

I also wonder if any of the std library functionality can be better implemented
given such runtime code compilation and optimization features.

Sandeep 

bearophile Wrote:



 
 While writing a genetic programming program (and in some other situations) I
find it useful to have an eval() function, like in Lisp/Scheme and most
scripting languages.
 
 I think such functionality could be added to DMD and LDC too:
 - 99% of the compiler can be moved into a DLL (or a shared dynamic lib of some
kind); the DMD/LDC executable then becomes very small: it loads the dynamic
lib and just has to manage the I/O from/to disk and the command line
arguments, and little else.
 - By default such dynamic lib isn't loaded by D programs, so the size of D
programs is unchanged (and they don't need such dll to run).
 - A module can be added to the std lib (Tango and Phobos) that loads this dll
and offers a function like eval() that accepts a string of code as input (or an
AST, if you want) and compiles/runs it.
 
 Bye,
 bearophile
Jan 27 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Sandeep,

 I have been wishing for something similar though for different
 reasons. Often I have code (in C++) that is executed for a large
 amount of data like in stream programming and becomes a major hotspot
 for the application. Think software graphics pipelines. Further, this
 code has lots of branches and since it's executed over many elements
 they are essentially branches in inner-loops. The values over which
 they branch are unknown till runtime and the number of combinations of
 the values can be very very high. Here are the basic attempts to solve
 that I'm aware of and the problems with each:
 
One option you could use under GCC would be to build all the inner control structures out of gotos and use goto variables that are set outside the loop. I'm not sure how dynamic gotos stack up with conditional jumps but you would still avoid the tests.
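A minimal sketch of this idea using GCC's "labels as values" extension (also supported by clang; the function and data names here are made up): the conditional is evaluated once before the loop and stored as a label address, and the loop body dispatches through an indirect goto instead of re-testing it per element.

```cpp
#include <cassert>

static const int sample[] = {1, 2, 3};

// The branch over `doubled` is resolved once, outside the loop, by
// storing a label address (GNU extension: &&label has type void*).
// The loop then jumps indirectly, with no per-element comparison.
int process(const int* data, int n, bool doubled) {
    void* body = doubled ? &&twice : &&once;  // decided once, up front
    int sum = 0;
    int i = 0;
loop:
    if (i >= n) return sum;
    goto *body;  // indirect jump replaces the conditional test
twice:
    sum += 2 * data[i++];
    goto loop;
once:
    sum += data[i++];
    goto loop;
}
```

As BCS notes, whether an indirect jump beats a well-predicted conditional branch depends on the hardware; the point is only that the test itself leaves the loop.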
 
 Sandeep
 
Jan 27 2009
parent Sandeep Kakarlapudi <sandeep.iitkgpspammenot gmail.com> writes:
BCS Wrote:

 Reply to Sandeep,
 
 I have been wishing for something similar though for different
 reasons. Often I have code (in C++) that is executed for a large
 amount of data like in stream programming and becomes a major hotspot
 for the application. Think software graphics pipelines. Further, this
 code has lots of branches and since it's executed over many elements
 they are essentially branches in inner-loops. The values over which
 they branch are unknown till runtime and the number of combinations of
 the values can be very very high. Here are the basic attempts to solve
 that I'm aware of and the problems with each:
 
One option you could use under GCC would be to build all the inner control structures out of gotos and use goto variables that are set outside the loop. I'm not sure how dynamic gotos stack up with conditional jumps but you would still avoid the tests.
Nice idea. While it still won't have optimal performance, it definitely is another option. It gets rid of the code bloat, but at the cost of readability and hence maintainability. But I think none of these would come close to having a runtime optimizer perform code specialization. Often the small latency of optimizing the code is well worth the result.
Jan 28 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
I've done the compiler-as-dll thing for the Digital Mars IDE. It has 
some problems, though. The biggest is it's another executable to test, 
doubling the testing process. It could be done as one executable by 
making a shell that calls the dll, but those are just inconvenient, 
harder to debug, and there's the old "dll hell" with versions.

Instead, what you can do is simply dude up command line arguments, spawn 
the command line compiler, and collect the result.
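On POSIX systems that flow can be sketched with popen(): build the command line, spawn the process, and collect its output and exit status. The helper name is invented, and the dmd invocation in the note below is only illustrative.

```cpp
#define _POSIX_C_SOURCE 200809L  // for popen()/pclose()
#include <cstdio>
#include <stdexcept>
#include <string>

// Spawn a command, capture its stdout, and fail loudly on a non-zero
// exit status (which is how you'd detect a compile error).
std::string run_command(const std::string& cmd) {
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) throw std::runtime_error("popen failed");
    std::string output;
    char buf[256];
    while (fgets(buf, sizeof buf, pipe))
        output += buf;
    int status = pclose(pipe);
    if (status != 0) throw std::runtime_error("command failed: " + cmd);
    return output;
}
```

An eval()-style helper could then be built on top, e.g. run_command("dmd -run snippet.d"), with the exact flags depending on the installed compiler.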
Jan 27 2009
next sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Jan 27, 2009 at 6:44 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Instead, what you can do is simply dude up command line arguments, spawn the
 command line compiler, and collect the result.
Process startup incurs a very large performance hit. Yes, you wouldn't want to be calling even a library compiler in a tight loop, but still, it precludes doing at least semi-realtime things. (Though the D compiler is so fast that it doesn't matter much anyway - see h3r3tic's nucled.)

Another thing having it as a library enables is much simpler IDE integration, as you probably know already.

Furthermore, if the compiler's interface is made flexible enough, you can also do some really interesting things - have callbacks for imports which will automatically fetch and install libraries that you don't already have, for instance. Things like DSSS would just become different frontends for the compiler.
Jan 27 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 
 Instead, what you can do is simply dude up command line arguments,
 spawn the command line compiler, and collect the result.
 
The one main thing I see not working there is memory-to-memory compiles. I'd love to be able to build a function as a string, call the compiler and get back a function pointer.
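Absent a compiler library, this can be approximated today by round-tripping through the file system: write the source string to a file, shell out to a compiler to build a shared object, dlopen() it, and hand back the function pointer. A hedged C++ sketch follows; the file names, the `f` entry point, and the use of `cc` instead of a D compiler are all stand-ins.

```cpp
#include <dlfcn.h>

#include <cstdlib>
#include <fstream>
#include <stdexcept>
#include <string>

typedef int (*IntFn)(int);

// "String in, function pointer out" via the disk: the snippet must
// define a C function `int f(int)` for this illustrative helper.
IntFn compile_function(const std::string& body) {
    std::ofstream("snippet.c") << body;  // closed/flushed at end of statement
    if (std::system("cc -shared -fPIC -o snippet.so snippet.c") != 0)
        throw std::runtime_error("compile failed");
    void* lib = dlopen("./snippet.so", RTLD_NOW);
    if (!lib) throw std::runtime_error(dlerror());
    void* sym = dlsym(lib, "f");
    if (!sym) throw std::runtime_error(dlerror());
    return (IntFn)sym;
}
```

Usage: compile_function("int f(int x) { return x * x; }") hands back a callable square function. A compiler-as-dll would do the same memory-to-memory, without the temp files or the process spawn.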
Jan 27 2009
parent reply Benji Smith <dlanguage benjismith.net> writes:
BCS wrote:
 Hello Walter,
 
 Instead, what you can do is simply dude up command line arguments,
 spawn the command line compiler, and collect the result.
The one main thing I see not working there is memory-to-memory compiles. I'd love to be able to build a function as a string, call the compiler and get back a function pointer.
I think also, with a compiler-as-dll, it'd have separate modules for lexing, parsing, optimizing, code-generation, and linking.

As a user of that compiler DLL, I might like to write my own AST visitor, wrapping all function calls (or scope blocks) with tracing statements before sending them into the rest of the pipeline.

Those are the kinds of things that I think would be especially cool with a CompilerServices module in the standard library.

Also, consider this: someone could implement AST macros as a library!

--benji
Jan 27 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Benji Smith wrote:
 BCS wrote:
 Hello Walter,

 Instead, what you can do is simply dude up command line arguments,
 spawn the command line compiler, and collect the result.
The one main thing I see not working there is memory-to-memory compiles. I'd love to be able to build a function as a string, call the compiler and get back a function pointer.
I think also, with a compiler-as-dll, it'd have separate modules for lexing, parsing, optimizing, code-generation, and linking. As a user of that compiler DLL, I might like to write my own AST visitor, wrapping all function calls (or scope blocks) with tracing statements before sending them into the rest of the pipeline. Those are the kinds of things that I think would be especially cool with a CompilerServices module in the standard library. Also, consider this: someone could implement AST macros as a library! --benji
You've just described the design of clang - the new frontend for llvm for compiling c/c++.

There are many tools used during development that need different subsets of the compiler's functionality and would benefit from such a design:

- IDE: needs the lexing/parsing/semantic phases for showing you errors on the fly; its integrated build system needs to know about dependencies, and it'll use the optimizer/code-gen phases to build your projects.

- lint tools: need the lexing/parsing/semantic phases.

- stand-alone build tools (like rebuild, bud, etc.): need the lexer/parser for automatically resolving dependencies.

- doc system (built into the IDE or stand-alone): needs the semantic phases.

- the compiler itself could (and should) use it: currently there's a built-in interpreter for CTFE which is limited to a subset of D. Instead of implementing a limited interpreter in addition to the compiler, the compiler libs could be used to build a JIT compiler that compiles CTFE functions and runs them during compilation, using the same code that compiles regular code.
Jan 27 2009
parent Sandeep Kakarlapudi <sandeep.iitkgpspammenot gmail.com> writes:
Yigal Chripun Wrote:

 the compiler itself could (and should) use it - currently there's a 
 built in interpreter for CTFE which is limited to a subset of D.
 
 instead of implementing a limited interpreter in addition to the 
 compiler, the compiler libs can easily be utilized for a JIT compiler 
 which will compile CTFE and run them during compilation using the same 
 code that is used to compile regular code.
I had the exact same thoughts a couple of days ago - but I remembered that floating point code *might* cause problems either because of precision modes or cross compiling. However going in this direction seems to open up more possibilities.
Jan 28 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Walter Bright wrote:
 I've done the compiler-as-dll thing for the Digital Mars IDE. It has
 some problems, though. The biggest is it's another executable to test,
 doubling the testing process. It could be done as one executable by
 making a shell that calls the dll, but those are just inconvenient,
 harder to debug, and there's the old "dll hell" with versions.

 Instead, what you can do is simply dude up command line arguments, spawn
 the command line compiler, and collect the result.
here's a thought: use DDL instead. no more "dll hell" :)
Jan 27 2009
parent reply John Reimer <terminal.node gmail.com> writes:
Hello Yigal,

 Walter Bright wrote:
 
 I've done the compiler-as-dll thing for the Digital Mars IDE. It has
 some problems, though. The biggest is it's another executable to
 test, doubling the testing process. It could be done as one
 executable by making a shell that calls the dll, but those are just
 inconvenient, harder to debug, and there's the old "dll hell" with
 versions.
 
 Instead, what you can do is simply dude up command line arguments,
 spawn the command line compiler, and collect the result.
 
here's a thought: use DDL instead. no more "dll hell" :)
There's one thing ddl doesn't do that dll's/so's do. People seem to misunderstand this aspect of it, as I did for awhile (until recently).

ddl does not work for memory sharing like normal dll's, where multiple applications have access to a single dll at runtime. It appears that such support would be quite difficult to implement and moves in the direction of operating system features.

It does do runtime linking, however, which is extremely useful for certain situations... specifically any sort of application that needs a plugin architecture for D (i.e. it can link with libraries and object files at runtime) that is gc and exception friendly. Thanks to Tom S. for clarifying this to me.

Whether or not this detail is significant to the idea of a compiler dll, I don't know.

-JJR
Jan 28 2009
parent reply grauzone <none example.net> writes:
John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple 
 applications have access to a single dll at runtime.  It appears that 
 such support would be quite difficult to implement and moves in the 
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shares the memory pages with other processes.

Of course, this wouldn't work if the code both isn't position independent and needs to be relocated to a different base address. But that's also the case with operating system supported dynamic shared objects.
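A small sketch of the mmap() part (POSIX-only; the file name and helper functions are invented): mapping a file MAP_SHARED means every process that maps it sees the same physical pages, which is essentially how OS loaders share the code segments of dynamic libraries, as long as no relocation forces a private copy.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <fstream>
#include <stdexcept>
#include <string>

// Map `len` bytes of a file as shared, read-only pages. Processes that
// map the same file this way share the underlying physical memory.
std::string map_file(const char* path, size_t len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) throw std::runtime_error("open failed");
    void* p = mmap(nullptr, len, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);  // the mapping stays valid after the fd is closed
    if (p == MAP_FAILED) throw std::runtime_error("mmap failed");
    std::string contents((const char*)p, len);
    munmap(p, len);
    return contents;
}

// Tiny round trip: write a file, then read it back through a mapping.
std::string demo() {
    std::ofstream("mapdemo.txt") << "hello";
    return map_file("mapdemo.txt", 5);
}
```

Code that needs relocation would have to be mapped copy-on-write (MAP_PRIVATE) instead, at which point the sharing is lost for the patched pages, which is grauzone's caveat.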
 It does do runtime linking, however, which is extremely useful for 
 certain situations... specifically any sort of application that needs a 
 plugin architecture for D (ie.. it can link with libraries and object 
 files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?

When it's a commercial program, the DLL plugin approach probably wouldn't work anyway: in order to enable others to compile plugins, you would need to expose your internal "headers" (D modules). Note that unlike in languages like C/C++, this would cause internal modules to be exposed too, even if they are not strictly needed. What would you do to avoid this? Maintain a separate set of import modules?

I think a purely extern(C) based interface would be better in these cases.

In fact, if you rely on the D ABI for dynamic linking, you'll probably have the same trouble as with C++ dynamic linking. For example, BeOS had to go through this to make sure their C++ based API maintains ABI compatibility:

http://homepage.corbina.net/~maloff/holy-wars/fbc.html

I'm not sure if the D ABI improves the situation. At any rate, it doesn't sound like a good idea.
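A sketch of what such an extern(C) plugin boundary could look like, written here in C++ for illustration (all names are invented; a D plugin would export the same flat table via extern(C)): only a version number and plain function pointers cross the library boundary, so no D/C++ name mangling, classes, or templates leak into the ABI.

```cpp
extern "C" {
    // The entire plugin contract: a flat, mangling-free table.
    struct PluginApi {
        int abi_version;           // host rejects mismatched plugins
        const char* (*name)();
        int (*transform)(int value);
    };
}

// Implementation details stay ordinary (unexported) functions.
static const char* demo_name() { return "demo"; }
static int demo_transform(int v) { return v + 1; }

// A plugin exports exactly one C-linkage entry point returning its
// table; the host would dlsym() it by name and check abi_version first.
extern "C" const PluginApi* plugin_entry() {
    static const PluginApi api = {1, demo_name, demo_transform};
    return &api;
}
```

Because the contract is just this struct, the host and plugin can evolve internally without breaking each other, unlike linking against mangled D or C++ symbols.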
Jan 28 2009
next sibling parent reply =?UTF-8?B?QWxleGFuZGVyIFDDoW5law==?= writes:
grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple 
 applications have access to a single dll at runtime.  It appears that 
 such support would be quite difficult to implement and moves in the 
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes. Of course, this wouldn't work if the code both isn't position independent, and needs to be relocated to a different base address. But that's also the case with operating system supported dynamic shared objects.
 It does do runtime linking, however, which is extremely useful for 
 certain situations... specifically any sort of application that needs 
 a plugin architecture for D (ie.. it can link with libraries and 
 object files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?
Well, compiling them directly into the main program kinda defeats the purpose of runtime-pluggable plugins, doesn't it?
 When it's a commercial program, the DLL plugin approach probably 
 wouldn't work anyway: in order to enable others to compile plugins, you 
 would need to expose your internal "headers" (D modules). Note that 
 unlike in languages like C/C++, this would cause internal modules to be 
 exposed too, even if they are not strictly needed. What would you do to 
 avoid this? Maintain a separate set of import modules?
Make use of .di files. You don’t have to distribute code.
Jan 28 2009
parent reply grauzone <none example.net> writes:
Alexander Pánek wrote:
 grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where 
 multiple applications have access to a single dll at runtime.  It 
 appears that such support would be quite difficult to implement and 
 moves in the direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes. Of course, this wouldn't work if the code both isn't position independent, and needs to be relocated to a different base address. But that's also the case with operating system supported dynamic shared objects.
 It does do runtime linking, however, which is extremely useful for 
 certain situations... specifically any sort of application that needs 
 a plugin architecture for D (ie.. it can link with libraries and 
 object files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?
Well, compiling them directly into the main program kinda defeats the purpose of runtime-pluggable plugins, wouldn’t it?
But why do you want them in the first place? For Open Source projects, this seems to be completely pointless to me.
 When it's a commercial program, the DLL plugin approach probably 
 wouldn't work anyway: in order to enable others to compile plugins, 
 you would need to expose your internal "headers" (D modules). Note 
 that unlike in languages like C/C++, this would cause internal modules 
 to be exposed too, even if they are not strictly needed. What would 
 you do to avoid this? Maintain a separate set of import modules?
Make use of .di files. You don’t have to distribute code.
D interface (.di) files are what I meant by "import module", sorry about that. They are compiler specific, and their only intended purpose is speeding up compilation. Quoting from the D site:
 D interface files bear some analogous similarities to C++ header
 files. But they are not required in the way that C++ header files are,
 and they are not part of the D language. They are a feature of the
 compiler, and serve only as an optimization of the build process.
http://www.digitalmars.com/d/2.0/dmd-linux.html#interface_files

I looked at the Tango .di files, which are (I think) automatically generated by the D compiler. I noticed several things:

- Private symbols are included, even private imports or private class members => they are exposed to the public, and changing them might break ABI compatibility under some circumstances.

- All transitive imports seem to be included => you either expose your internal modules as interface files, or your public modules must not (transitively) import private modules. Note that this forbids direct use of any private type or function. You'll probably have to program around it using indirections like interfaces or abstract base classes. Also note that this is much harder than in C: unlike D, C has incomplete types, and the implementation (.c files) is completely separate and can import any private headers.

- Sometimes, the full code for methods or functions is included, although they are not templated in any way. I guess it's even possible that plugins will inline those functions. This means changing these functions could silently break already compiled plugins. Of course, this can be fixed, but nobody has bothered yet. It probably shows that .di files weren't designed with ABI compatibility in mind.

It seems that dynamic linking with D code is extremely fragile, and that it requires serious extra effort to maintain ABI compatibility. Please correct me if I'm wrong.
Jan 28 2009
next sibling parent Don <nospam nospam.com> writes:
grauzone wrote:
 Alexander Pánek wrote:
 grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where 
 multiple applications have access to a single dll at runtime.  It 
 appears that such support would be quite difficult to implement and 
 moves in the direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes. Of course, this wouldn't work if the code both isn't position independent, and needs to be relocated to a different base address. But that's also the case with operating system supported dynamic shared objects.
 It does do runtime linking, however, which is extremely useful for 
 certain situations... specifically any sort of application that 
 needs a plugin architecture for D (ie.. it can link with libraries 
 and object files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?
Well, compiling them directly into the main program kinda defeats the purpose of runtime-pluggable plugins, wouldn’t it?
But why do you want them in the first place? For Open Source projects, this seems to be completely pointless to me.
 When it's a commercial program, the DLL plugin approach probably 
 wouldn't work anyway: in order to enable others to compile plugins, 
 you would need to expose your internal "headers" (D modules). Note 
 that unlike in languages like C/C++, this would cause internal 
 modules to be exposed too, even if they are not strictly needed. What 
 would you do to avoid this? Maintain a separate set of import modules?
Make use of .di files. You don’t have to distribute code.
D interface files (.di) files are what I meant by "import module", sorry about this. They are compiler specific, and the only intended purpose is speeding up the compilation. Quoting from the D site:

> D interface files bear some analogous similarities to C++ header
> files. But they are not required in the way that C++ header files are,
> and they are not part of the D language. They are a feature of the
> compiler, and serve only as an optimization of the build process.

http://www.digitalmars.com/d/2.0/dmd-linux.html#interface_files
They were also intended to act as header files, to conceal implementation details.
 I looked at the Tango .di files, which are (I think) automatically 
 generated by the D compiler. I noticed several things:
 
 - Private symbols are included, even private imports or private class 
 members => they are exposed to the public, and changing them might break 
 ABI compatibility under circumstances.
 
 - All transitive imports seem to be included => you either expose your 
 internal modules as interface file, or your public modules must not 
 (transitively) import private modules. Note that this forbids direct use
 of any private type or function. You'll probably have to program around 
 using indirections like interfaces or abstract base classes. Also note 
 that this is way harder as in C: unlike in D, there are incomplete 
 types, and the implementation (.c files) is completely separate and can 
 import any private headers.
 
 - Sometimes, the full code for methods or functions is included, 
 although they are not templated in any way. I guess it's even possible 
 that plugins will inline those functions.
Yes, I think that's why they're included.
 This means changing these
 functions could randomly break already compiled plugins. Of course, this 
 can be fixed, but nobody has bothered yet. It probably shows that .di 
 files weren't designed with ABI compatibility in mind.
 
 It seems that dynamic linking with D code is extremely fragile, and it 
 requires serious extra effort to maintain ABI compatibility. Please 
 correct me if I'm wrong.
Jan 28 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"grauzone" <none example.net> wrote in message 
news:glpha0$dg3$1 digitalmars.com...
 Alexander Pánek wrote:
 grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple 
 applications have access to a single dll at runtime.  It appears that 
 such support would be quite difficult to implement and moves in the 
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes. Of course, this wouldn't work if the code both isn't position independent, and needs to be relocated to a different base address. But that's also the case with operating system supported dynamic shared objects.
 It does do runtime linking, however, which is extremely useful for 
 certain situations... specifically any sort of application that needs a 
 plugin architecture for D (ie.. it can link with libraries and object 
 files at runtime) that is gc and exception friendly.
I never understood why this is needed. Can't they simply compile the plugins into the main program?
Well, compiling them directly into the main program kinda defeats the purpose of runtime-pluggable plugins, wouldn't it?
But why do you want them in the first place? For Open Source projects, this seems to be completely pointless to me.
With runtime-pluggables:

1. A program can download, install and use a plugin without the program needing to be restarted.

2. Out of all existing plugins, any arbitrary subset can be enabled without requiring an exponential number of pre-compiled executables, without requiring the user (or the program itself) to recompile anything, and without needing to install any build tools on the user's system.
Jan 28 2009
prev sibling next sibling parent reply Mike Parker <aldacron gmail.com> writes:
grauzone wrote:

 
 When it's a commercial program, the DLL plugin approach probably 
 wouldn't work anyway: in order to enable others to compile plugins, you 
 would need to expose your internal "headers" (D modules). Note that 
This is exactly what id software did with their Quake games. They provided an SDK, which included the headers and source files required to make mods. It was possible to use plain C to make DLLs, or the QuakeC language they developed. This was before scripting languages became big in the game industry.

Personally, I would prefer a scripting language like Lua or Python over DLLs/object files for a plugin framework, no matter the application domain. The biggest reason is that it's easier to sandbox a script engine. But both approaches have their place.
 unlike in languages like C/C++, this would cause internal modules to be 
 exposed too, even if they are not strictly needed. What would you do to 
 avoid this? Maintain a separate set of import modules?
 
 I think a purely extern(C) based interface would be better in these cases.
 
 In fact, if you rely on the D ABI for dynamic linking, you'll probably 
 have the same trouble as with C++ dynamic linking. For example, BeOS had 
 to go through this to make sure their C++ based API maintains ABI 
 compatibility:
 
 http://homepage.corbina.net/~maloff/holy-wars/fbc.html
 
 I'm not sure if the D ABI improves the situation. At any rate, it 
 doesn't sound like a good idea.
Jan 28 2009
next sibling parent Mike Parker <aldacron gmail.com> writes:
Mike Parker wrote:
 grauzone wrote:
 
 When it's a commercial program, the DLL plugin approach probably 
 wouldn't work anyway: in order to enable others to compile plugins, 
 you would need to expose your internal "headers" (D modules). Note that 
This is exactly what id software did with their Quake games. They provided an SDK, which included the headers and source files required to make mods. It was possible to use plain C to make DLLs or the QuakeC language they developed. This was before scripting languages became big in the game industry. Personally, I would prefer a scripting language like Lua or Python over DLLs/object files for a plugin framework, no matter the application domain. The biggest reason is that it's easier to sandbox a script engine. But both approaches have their place.
And I should qualify that I think the DLL/object file approach is the way to go for compiler plugins.
 
 
 unlike in languages like C/C++, this would cause internal modules to 
 be exposed too, even if they are not strictly needed. What would you 
 do to avoid this? Maintain a separate set of import modules?

 I think a purely extern(C) based interface would be better in these 
 cases.

 In fact, if you rely on the D ABI for dynamic linking, you'll probably 
 have the same trouble as with C++ dynamic linking. For example, BeOS 
 had to go through this to make sure their C++ based API maintains ABI 
 compatibility:

 http://homepage.corbina.net/~maloff/holy-wars/fbc.html

 I'm not sure if the D ABI improves the situation. At any rate, it 
 doesn't sound like a good idea.
Jan 28 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Mike Parker" <aldacron gmail.com> wrote in message 
news:glpfr2$b5o$1 digitalmars.com...
 grauzone wrote:

 When it's a commercial program, the DLL plugin approach probably wouldn't 
 work anyway: in order to enable others to compile plugins, you would need 
 to expose your internal "headers" (D modules). Note that
This is exactly what id software did with their Quake games. They provided an SDK, which included the headers and source files required to make mods. It was possible to use plain C to make DLLs or the QuakeC language they developed. This was before scripting languages became big in the game industry. Personally, I would prefer a scripting language like Lua or Python over DLLs/object files for a plugin framework, no matter the application domain. The biggest reason is that it's easier to sandbox a script engine. But both approaches have their place.
I disagree. Making all add-ons be interpreted scripts is one of the biggest reasons why Firefox (especially v2) is so absurdly slow (not that I'm a fan of IE, Opera or Safari). Also, the fact that the vast majority of scripting languages lack decent compile-time checking (such as static type checking or mandatory explicit declarations), or at least push it off as a secondary concern (modern ECMAScript), creates a situation where plugins have a tendency to be unreliable. But you're right that sandboxing is a potential issue. (Although even a scripting engine can still potentially contain exploits.)
Jan 28 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Nick Sabalausky wrote:
 [snip]
 
 I disagree. Making all add-ons be interpreted scripts is one of the biggest 
 reasons why Firefox (especially v2) is so absurdly slow (not that I'm a fan 
 of IE, Opera or Safari). Also, the fact that the vast majority of scripting 
 languages lack decent compile-time checking (such as static type checking 
 or mandatory explicit declarations), or at least push it off as a secondary 
 concern (modern ECMAScript), creates a situation where plugins have a 
 tendency to be unreliable.
There's an interesting talk Steve Yegge gave a while back: http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html There's also a video of it: http://au.youtube.com/watch?v=tz-Bb-D6teE -- Daniel
Jan 28 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Daniel Keep" <daniel.keep.lists gmail.com> wrote in message 
news:glqsa7$13e2$1 digitalmars.com...
 Nick Sabalausky wrote:
 [snip]

 I disagree. Making all add-ons be interpreted scripts is one of the 
 biggest
 reasons why Firefox (especially v2) is so absurdly slow (not that I'm a 
 fan
 of IE, Opera or Safari). Also, the fact that the vast majority of 
 scripting
 languages lack decent compile-time checking (such as static type 
 checking
 or mandatory explicit declarations), or at least push it off as a 
 secondary
 concern (modern ECMAScript), creates a situation where plugins have a
 tendency to be unreliable.
There's an interesting talk Steve Yegge gave a while back: http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html There's also a video of it: http://au.youtube.com/watch?v=tz-Bb-D6teE -- Daniel
Hmm, yea, interesting, although his performance arguments are all about "potential" that, as he points out, isn't going to be realized any time soon for most scripting languages, and his compile-time-checking arguments seem...broken.
Jan 28 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Nick Sabalausky wrote:
 "Daniel Keep" <daniel.keep.lists gmail.com> wrote in message 
 news:glqsa7$13e2$1 digitalmars.com...
 Nick Sabalausky wrote:
 [snip]

 I disagree. Making all add-ons be interpreted scripts is one of the 
 biggest
 reasons why Firefox (especially v2) is so absurdly slow (not that I'm a 
 fan
 of IE, Opera or Safari). Also, the fact that the vast majority of 
 scripting
 languages lack decent compile-time checking (such as static type 
 checking
 or mandatory explicit declarations), or at least push it off as a 
 secondary
 concern (modern ECMAScript), creates a situation where plugins have a
 tendency to be unreliable.
There's an interesting talk Steve Yegge gave a while back: http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html There's also a video of it: http://au.youtube.com/watch?v=tz-Bb-D6teE -- Daniel
Hmm, yea, interesting, although his performance arguments are all about "potential" that, as he points out, isn't going to be realized any time soon for most scripting languages, and his compile-time-checking arguments seem...broken.
(Warning: personal rambling ahead.) Well, the performance is an interesting one. There's a guy who wrote LuaJIT (a version of Lua that uses JIT compilation.) He's currently working on LuaJIT 2.0, which uses trace trees (like Yegge mentioned.) Last time I checked, he'd had to change his benchmarking scheme to eschew scripting languages and benchmark directly against C. I think scripting languages have a lot of headroom left that's just waiting for better interpreters and saner language design (I doubt you'll see JS or Python ever reach Lua's speeds, at least not any time soon.) As for Firefox, it's worth pointing out that the current release of LuaJIT is between 0 and ~12 times faster than TraceMonkey. Just because JavaScript is a crippled old dog doesn't mean you should write off all dynamic languages. :D Personally, I think the best approach is to combine the two; write the hot spots in C, D or some other fast language, and all the glue in a dynamic language. For all its expressiveness, there are just some things that are easier to do with a dynamic language like Python or Lua than with D. And as for the compile-time checking thing... I think it's a bit of a wash. There are times that static checking is helpful... and times when it's too restrictive. All I know is that it's so much easier to debug something when you can get an interactive interpreter inside the debugger... -- Daniel
Jan 29 2009
next sibling parent reply grauzone <none example.net> writes:
Daniel Keep wrote:
 Personally, I think the best approach is to combine the two; write the
 hot spots in C, D or some other fast language, and all the glue in a
 dynamic language.  For all its expressiveness, there are just some
 things that are easier to do with a dynamic language like Python or Lua
 than with D.
I agree, but it's hard to link all these languages together. They use different types, require bindings to import functions and classes, and all that. I heard writing Python modules in C is really hard.
 And as for the compile-time checking thing... I think it's a bit of a
 wash.  There are times that static checking is helpful... and times when
 it's too restrictive.  All I know is that it's so much easier to debug
 something when you can get an interactive interpreter inside the debugger...
Most debuggers seem to support a subset of C for evaluating expressions or even calling methods. For example, in gdb, you can write the command "print x->y->z". I don't think there's any substantial difference between dynamic and static languages here. It's all a question of effort, and static languages are just a bit behind. For example, if you had a compiler as library (wow, that's actually the subject of this thread), it wouldn't be so hard anymore to implement a read-eval-print-loop for a static language. Actually, Scala is a statically typed language that provides such an interactive interpreter.
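As a sketch of this "compiler as library" idea, Python already exposes its own compiler through compile() and eval(), which is enough for a minimal read-eval-print loop (a toy illustration, not the Scala or D machinery discussed here):

```python
# Minimal read-eval-print core built on the interpreter's own
# compiler, which is exposed as a library via compile()/eval()/exec().
def repl_eval(source, env):
    """Compile one input and run it, returning its value (or None)."""
    try:
        # "eval" mode handles expressions and returns their value...
        code = compile(source, "<repl>", "eval")
    except SyntaxError:
        # ...while "exec" mode handles statements like assignments.
        code = compile(source, "<repl>", "exec")
        exec(code, env)
        return None
    return eval(code, env)

env = {}
repl_eval("x = 6 * 7", env)     # a statement: defines x in env
print(repl_eval("x + 1", env))  # an expression: prints 43
```

Wrapping repl_eval in a loop that reads from stdin gives a full REPL; the point is only that once the compiler is callable as a function, the rest is glue.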
Jan 29 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
grauzone wrote:
 Daniel Keep wrote:
 Personally, I think the best approach is to combine the two; write the
 hot spots in C, D or some other fast language, and all the glue in a
 dynamic language.  For all its expressiveness, there are just some
 things that are easier to do with a dynamic language like Python or Lua
 than with D.
I agree, but it's hard to link all these languages together. They use different types, require bindings to import functions and classes, and all that. I heard writing Python modules in C is really hard.
Not in my experience. Pyrex makes it embarrassingly easy. I had a GUI app written in Python; at one point, I decided to improve performance and so I did a bit of profiling. The big hot spot was in the rendering code, so I re-wrote that part in C and used Pyrex to make the bridge. Compiled to a native Python extension module, and replaced the old pure Python version with it. Restart app and boom, it runs faster without having to touch a line of code anywhere else. Also, don't forget that Python has ctypes, which lets it dynamically bind to C libraries without having to actually write a wrapper. Now, going the OTHER way is a nightmare. [1]
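For reference, the ctypes route looks like this; a minimal sketch that assumes a POSIX system where the C math library can be located by its conventional name:

```python
import ctypes
import ctypes.util

# Locate and load the C math library at run time -- no compiled
# wrapper module is needed (assumes a POSIX system with libm).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals arguments correctly;
# without this, ctypes would assume int arguments and return values.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # calls the C sqrt directly
```

The same pattern binds to any shared library, which is why no hand-written wrapper is needed for the dynamic-language side of the bridge.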
 Most debuggers seem to support a subset of C for evaluating expressions
 or even calling methods. For example, in gdb, you can write the command
 "print x->y->z".
 
 I don't think there's any substantial difference between dynamic and
 static languages here. It's all a question of effort, and static
 languages are just a bit behind. For example, if you had a compiler as
 library (wow, that's actually the subject of this thread), it wouldn't
 be so hard anymore to implement a read-eval-print-loop for a static
 language. Actually, Scala is a statically typed language, that provides
 such an interactive interpreter.
Perhaps, but I've yet to see any statically typed language that lets me replace objects or even whole functions while the program's running. I'm not saying it can't be done, just that it's a lot harder for a statically typed and compiled language than one with dynamic lookup. -- Daniel [1] Actually, back when I was still playing around with Python bindings for D, one thing I started hacking on was a module for runtime codegen. The idea was to fire up the Python interpreter, poke around inside a Python module to find out what it might look like as a native function, then generate a stub that went from native code to Python code and back again. ... and then make it a DDL module so that you could treat Python libraries as if they were native code.
Jan 29 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Daniel Keep:
 Not in my experience.  Pyrex makes it embarrassingly easy.
Today you use Cython, which is an improved variant of Pyrex. You can also use Pyd with D, which I have seen is quite easy to use. IMHO it deserves to be more widely known and used. Bye, bearophile
Jan 29 2009
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Daniel Keep wrote:
 Personally, I think the best approach is to combine the two; write the
 hot spots in C, D or some other fast language, and all the glue in a
 dynamic language.  For all its expressiveness, there are just some
 things that are easier to do with a dynamic language like Python or Lua
 than with D.
I keep hearing this stated, but I don't see very many use cases put forth where it is clearly better to use a dynamic language. For me, I'm not familiar with any dynamic languages to a sufficient degree to ever get an advantage from using them, but if I see a sufficient use case, I'll change that.
Jan 29 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, Jan 30, 2009 at 8:40 AM, Christopher Wright <dhasenan gmail.com> wrote:
 Daniel Keep wrote:
 Personally, I think the best approach is to combine the two; write the
 hot spots in C, D or some other fast language, and all the glue in a
 dynamic language.  For all its expressiveness, there are just some
 things that are easier to do with a dynamic language like Python or Lua
 than with D.
I keep hearing this stated, but I don't see very many use cases put forth where it is clearly better to use a dynamic language. For me, I'm not familiar with any dynamic languages to a sufficient degree to ever get an advantage from using them, but if I see a sufficient use case, I'll change that.
Since Variants are kind of the basic variable type in dynamic languages, basically anywhere you can benefit from variants is a win for dynamic langs. More generally there are times when you can't really know what types things are going to be till run time. For instance dealing with a database. In a dynamic lang you can create new aggregate types on the fly that are just like built-in types. Or say you have an algorithm that can work on lots of different kinds of data types. But you need to load the data from a file at runtime, including the type. To do that with C++/D templates you have to pre-instantiate every possible type you want to support, even though most runs of your program will use only one or two instantiations of it. With a dynamic language you don't have to pre-instantiate anything, so such things are a lot easier. In GUIs when you need to loosely couple different components, dynamic langs can make life a lot easier. With std.signals in D, for instance, you have to be so anal about the call signatures. And you may have trouble calling a delegate that takes a bool when the signal sends out an int, etc. Such things are rarely any problem for a dynamic language. Variable numbers of args are also generally very easy to use. Basically imagine any annoyance you have with D or C++. With a dynamic language it pretty much disappears. :-) (But of course you will encounter new issues, like performance, or higher testing burden because no types are statically checked). One thing, though, that I noticed using Python, is that a lot of what is considered so easy-to-use about dynamic languages is just that people are more willing to use expensive (but elegant) abstractions. If you're willing to do things in a slightly less efficient way, you can make the code look pretty darn elegant. I think this is the basis of much of Bearophile's libs. Most of the time you really don't need that extra performance. 
With a dynamic language there's no way to really get the performance, anyway, so you might as well use something elegant. People seem to have a harder time throwing away their desire for efficiency when they know their language *is* capable of being efficient. I'm certainly included in that group. I would have a hard time convincing myself to code in D similar to how I code in Python. It just feels so sloppy to me when it's in D, because I know D can do better. But in truth, most of the code I write would probably perform fine even with large helpings of extra slop. --bb
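Bill's "type is only known at run time" case can be sketched in Python; the record format and parser table below are hypothetical, made up purely for illustration:

```python
# Each input record names its own type, and values are built
# dynamically at run time -- nothing is pre-instantiated per type,
# as a C++/D template-based design would require.
PARSERS = {"int": int, "float": float, "str": str}

def load_records(lines):
    out = []
    for line in lines:
        type_name, _, raw = line.partition(":")
        out.append(PARSERS[type_name](raw))  # dispatch on run-time type
    return out

data = load_records(["int:42", "float:3.5", "str:hello"])
print(data)  # a heterogeneous list: [42, 3.5, 'hello']
```

In D, the closest equivalent would be a container of Variants with a runtime dispatch table, which is exactly the point made later in the thread.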
Jan 29 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Bill Baxter (wbaxter gmail.com)'s article
 One thing, though, that I noticed using Python, is that a lot of what
 is considered so easy-to-use about dynamic languages is just that
 people are more willing to use expensive (but elegant) abstractions.
 If you're willing to do things in a slightly less efficient way, you
 can make the code look pretty darn elegant.  I think this is the basis
 of much of Bearophile's libs.  Most of the time you really don't need
 that extra performance.  With a dynamic language there's no way to
 really get the performance, anyway, so you might as well use something
 elegant.   People seem to have a harder time throwing away their
 desire for efficiency when they know their language *is* capable of
 being efficient.  I'm certainly included in that group.   I would have
 a hard time convincing myself to code in D similar to how I code in
 Python.  It just feels so sloppy to me when it's in D, because I know
 D can do better.   But in truth, most of the code I write would
 probably perform fine even with large helpings of extra slop.
 --bb
I feel the exact same way, but I thought I was just crazy. I don't use Python on a regular basis because most of the code I write has at least enough parts that have to be fast that it makes more sense to just write the whole thing in D. When I did use it a few times out of curiosity, I was like wow, some of this stuff is so much more elegant than the way I would do this in D. Then I thought for a little while longer and was like, "wait a minute, I _could_ do it that way in D, it's just that I never would because when I'm in D coding mode I'm so used to thinking about efficiency." Now, when I'm coding a part of a D program where performance isn't going to matter, like a function that gets run once at startup, I actively try to force myself _not_ to think about efficiency and to just keep it simple and elegant.
Jan 29 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Bill Baxter:

One thing, though, that I noticed using Python, is that a lot of what is
considered so easy-to-use about dynamic languages is just that people are more
willing to use expensive (but elegant) abstractions. If you're willing to do
things in a slightly less efficient way, you can make the code look pretty darn
elegant.<
It's also a matter of handy syntax, and of having some type inference (or dynamic typing. Type inference may require a more complex type system, and a more complex and often slower compiler). I want to add two notes:
1) My work on ShedSkin (a partial Python => C++ compiler) has shown me that you can have a very handy syntax in a statically typed C++-like language too. For example in Python you have list comps and generator expressions that are handy syntax; you need more code to write them in D:

foo = [[x*y for x in someiterable] for y in somestring.split()]
foo = ([x*y for x in someiterable] for y in somestring.split())

Other handy syntax comes from iterators:

def bar():
    for x in xrange(10):
        yield x*x

I miss those three things in D still. In my libs there are ways to do similar things, but they become quite a bit more hairy and full of {}(), so in the end you may not want to use them at all in a real program. I hope D developers will eventually add list comps, iterators, and some other syntactic sugar to D.
2) Haskell compilers (and the Stalin Scheme compiler) show that if you build a complex enough compiler, it can often compile such abstractions to fast enough code. Not as fast as C code, but good enough for every noncritical spot in your program.
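With concrete placeholder values filled in (someiterable and somestring are left undefined in the post, and Python 2's xrange becomes range), the three constructs from point 1 behave like this:

```python
someiterable = [1, 2, 3]
somestring = "ab cd"

# List comprehension: eagerly builds a nested list. x*y repeats the
# string y x times, since each y comes from splitting a string.
foo = [[x * y for x in someiterable] for y in somestring.split()]
print(foo)  # [['ab', 'abab', 'ababab'], ['cd', 'cdcd', 'cdcdcd']]

# Generator expression: same shape, but the outer sequence is lazy;
# each inner list is only built when the consumer asks for it.
gen = ([x * y for x in someiterable] for y in somestring.split())
print(next(gen))  # ['ab', 'abab', 'ababab']

# Generator function, as in the post's bar(): lazily yields
# the squares 0, 1, 4, ..., 81 one at a time.
def bar():
    for x in range(10):
        yield x * x

print(list(bar()))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```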
People seem to have a harder time throwing away their desire for efficiency
when they know their language *is* capable of being efficient.<
This requires you to know other languages, and to exercise some self-control. Self-control is often one of the main things that tell apart an adult professional from a young newbie, outside the field of programming too. I think it can be useful to perform some "katas", practice exercises where you write the same code in various ways (and maybe benchmark them too). ---------------------- dsimcha:
"wait a minute, I _could_ do it that way in D, it's just that I never would
because when I'm in D coding mode I'm so used to thinking about efficiency."<
That's why it's good to learn to program in different languages (Prolog, Lisp, Mozart, a data flow language, and little else). You may need a life to learn them all :-) And when you know them, you may be too old to do something good; that's why life is evil :-)
Now, when I'm coding a part of a D program where performance isn't going to
matter, like a function that gets run once at startup, I actively try to force
myself _not_ to think about efficiency and to just keep it simple and elegant.<
Because:
We should forget about small efficiencies, say about 97% of the time: premature
optimization is the root of all evil.<
:-) Bye, bearophile
Jan 30 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Bill Baxter wrote:
 On Fri, Jan 30, 2009 at 8:40 AM, Christopher Wright<dhasenan gmail.com>  wrote:
 Daniel Keep wrote:
 Personally, I think the best approach is to combine the two; write the
 hot spots in C, D or some other fast language, and all the glue in a
 dynamic language.  For all its expressiveness, there are just some
 things that are easier to do with a dynamic language like Python or Lua
 than with D.
I keep hearing this stated, but I don't see very many use cases put forth where it is clearly better to use a dynamic language. For me, I'm not familiar with any dynamic languages to a sufficient degree to ever get an advantage from using them, but if I see a sufficient use case, I'll change that.
Since Variants are kind of the basic variable type in dynamic languages, basically anywhere you can benefit from variants is a win for dynamic langs. More generally there are times when you can't really know what types things are going to be till run time. For instance dealing with a database. In a dynamic lang you can create new aggregate types on the fly that are just like built-in types. Or say you have an algorithm that can work on lots of different kinds of data types. But you need to load the data from a file at runtime, including the type. To do that with C++/D templates you have to pre-instantiate every possible type you want to support, even though most runs of your program will use only one or two instantiations of it. With a dynamic language you don't have to pre-instantiate anything, so such things are a lot easier. In GUIs when you need to loosely couple different components, dynamic langs can make life a lot easier. With std.signals in D, for instance, you have to be so anal about the call signatures. And you may have trouble calling a delegate that takes a bool when the signal sends out an int, etc Such things are rarely any problem for a dynamic language. Variable numbers of args are also generally very easy to use. Basically imagine any annoyance you have with D or C++. With a dynamic language it pretty much disappears. :-) (But of course you will encounter new issues, like performance, or higher testing burden because no types are statically checked). One thing, though, that I noticed using Python, is that a lot of what is considered so easy-to-use about dynamic languages is just that people are more willing to use expensive (but elegant) abstractions. If you're willing to do things in a slightly less efficient way, you can make the code look pretty darn elegant. I think this is the basis of much of Bearophile's libs. Most of the time you really don't need that extra performance. 
With a dynamic language there's no way to really get the performance, anyway, so you might as well use something elegant. People seem to have a harder time throwing away their desire for efficiency when they know their language *is* capable of being efficient. I'm certainly included in that group. I would have a hard time convincing myself to code in D similar to how I code in Python. It just feels so sloppy to me when it's in D, because I know D can do better. But in truth, most of the code I write would probably perform fine even with large helpings of extra slop. --bb
Static languages can have Variant/box types that'll give most of the same functionality of dynamic languages. So, instead of instantiating list!(int), list!(string), etc., you can get one list!(Variant). The real difference is that static languages have mostly read-only RTTI. (Java provides a very limited capability to reload classes, IIRC.) A scripting language allows you to manipulate types and instances at run-time; for example, you can add/remove/alter methods in a class and affect all instances of that class, or alter a specific instance of a class. This cannot be done in a static language.
Jan 29 2009
next sibling parent reply grauzone <none example.net> writes:
Yigal Chripun wrote:
 Static languages can have Variant/box types that'll give most of the 
 same functionality of dynamic languages. so, instead of instantiating 
 list!(int), list!(string), etc, you can get one list!(Variant)..

It would be nice if D were extended to provide all these things. I guess the only reason this hasn't been done is the space overhead of full RTTI information. (And dynamic method invocation might require a lot of hackery to manually copy the method arguments on the stack.)
 The real difference is that static languages have mostly read-only RTTI. 
 (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at 
 run-time, for example you can add/remove/alter methods in a class and 
 affect all instances of that class, or alter a specific instance of a 
 class. This cannot be done in a static language.
Some static languages allow you to statically add your own methods or variables to a foreign class. An example would be AspectJ, which, among other things, adds this feature to Java. This is still not as dynamic as in dynamic type systems, but I wonder if you really need more? There's something similar in current static languages: you can extend the global data segment with global variables. Global variables don't have to be all in a single source file. Instead, the linker takes care of collecting global variables from object files and allocating space for them in the data segment. A dynamic linker can do this even while a program is running! Thread local storage (with global __thread variables) does the same thing per thread. Threads can be created and destroyed at runtime. Dynamic linking is still supported, I think.
Jan 30 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
grauzone wrote:
 Yigal Chripun wrote:
 Static languages can have Variant/box types that'll give most of the
 same functionality of dynamic languages. so, instead of instantiating
 list!(int), list!(string), etc, you can get one list!(Variant)..

 templates.
It would be nice if D would be extended to provide all these things. I guess the only reason this hasn't been done is the space overhead of full RTTI information. (And dynamic method invocation might require a lot of hackery to manually copy the method arguments on the stack.)
 The real difference is that static languages have mostly read-only
 RTTI. (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at
 run-time, for example you can add/remove/alter methods in a class and
 affect all instances of that class, or alter a specific instance of a
 class. This cannot be done in a static language.
Some static languages allow it to statically add your own methods or variables to a foreign class. An example would be AspectJ, which, among other things, adds this feature to Java.
from what little I know about AOP - it's done at link time, not runtime. Java does however provide very limited support for this - for example, while debugging your code, you can change an implementation of a method and reload the containing class so instances of that class *that were created after the change* will call the new implementation. This allows you to make small changes while debugging without the need to re-run the program each time. But you can't do things like: Class a = new Class; Class b = new Class; a.getClass.addMethod(foo); b.foo(); note that this is done at run-time, not link time.
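That pseudocode maps directly onto a dynamic language like Python, where it works exactly as described (the class and method names below are hypothetical stand-ins):

```python
class Klass:
    pass

a = Klass()
b = Klass()

# Add a method to the *class* at run time: all instances, including
# ones created before the change, see it immediately.
Klass.foo = lambda self: "foo"
print(a.foo(), b.foo())  # both calls succeed

# Alter one specific *instance*: only b is affected, a is untouched.
b.foo = lambda: "patched"
print(a.foo(), b.foo())  # 'foo' vs 'patched'
```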
 This is still not as dynamic as in dynamic type systems, but I wonder if
 you really need more?

 There's something similar in current static languages: you can extend
 the global data segment with global variables. Global variables don't
 have to be all in a single source file. Instead, the linker takes care
 of collecting global variables from object files and allocating space
 for them in the data segment. A dynamic linker can do this even while a
 program is running!

 Thread local storage (with global __thread variables) does the same
 thing per thread. Threads can be created and destroyed at runtime.
 Dynamic linking is still supported, I think.
Can you explain more? I don't see how globals and threads can allow me to do the above code snippet.
Jan 31 2009
parent grauzone <none example.net> writes:
Yigal Chripun wrote:
 grauzone wrote:
 Yigal Chripun wrote:
 Static languages can have Variant/box types that'll give most of the
 same functionality of dynamic languages. so, instead of instantiating
 list!(int), list!(string), etc, you can get one list!(Variant)..

 templates.
It would be nice if D would be extended to provide all these things. I guess the only reason this hasn't been done is the space overhead of full RTTI information. (And dynamic method invocation might require a lot of hackery to manually copy the method arguments on the stack.)
 The real difference is that static languages have mostly read-only
 RTTI. (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at
 run-time, for example you can add/remove/alter methods in a class and
 affect all instances of that class, or alter a specific instance of a
 class. This cannot be done in a static language.
Some static languages allow it to statically add your own methods or variables to a foreign class. An example would be AspectJ, which, among other things, adds this feature to Java.
from what little I know about AOP - it's done at link time, not runtime. Java does however provide very limited support for this - for example, while debugging your code, you can change an implementation of a method and reload the containing class so instances of that class *that were created after the change* will call the new implementation. This allows you to make small changes while debugging without the need to re-run the program each time. But you can't do things like: Class a = new Class; Class b = new Class; a.getClass.addMethod(foo); b.foo(); note that this is done at run-time, not link time.
Yes. But when do you need to be fully dynamic? I claim that this is unneeded in most cases. Most of the time, you probably only want to extend a class by your own methods and members. To me, it looks like the biggest fundamental advantage of dynamic languages is just being able to omit type signatures.
 This is still not as dynamic as in dynamic type systems, but I wonder if
 you really need more?

 There's something similar in current static languages: you can extend
 the global data segment with global variables. Global variables don't
 have to be all in a single source file. Instead, the linker takes care
 of collecting global variables from object files and allocating space
 for them in the data segment. A dynamic linker can do this even while a
 program is running!

 Thread local storage (with global __thread variables) does the same
 thing per thread. Threads can be created and destroyed at runtime.
 Dynamic linking is still supported, I think.
Can you explain more? I don't see how globals and threads can allow me to do the above code snippet.
First, this was only meant as some kind of analogy between dynamic languages and the AOP techniques mentioned above. Like you can dynamically add global variables to your program by loading a dynamic shared object (.so/.dll), although there aren't functions like addGlobalVariable(char[] name, size_t size). A thread is almost like an object, and declaring __thread variables is like adding fields to this object. Second, concerning actual implementations, you could maybe turn every class into a separate data section. Then it would only require a new linker symbol to extend a class by a field. The linker would take care of doing the actual class layout by assigning each linker symbol an address. This address is used as the field offset. But I don't know enough about linkers to tell if this would actually work. And there are still some problems, like the question of what happens with dynamic linking.
Jan 31 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Yigal Chripun (yigal100 gmail.com)'s article
 Static languages can have Variant/box types that'll give most of the
 same functionality of dynamic languages. so, instead of instantiating
 list!(int), list!(string), etc, you can get one list!(Variant)..

 The real difference is that static languages have mostly read-only RTTI.
 (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at
 run-time, for example you can add/remove/alter methods in a class and
 affect all instances of that class, or alter a specific instance of a
 class. This cannot be done in a static language.
Out of curiosity, does anyone actually use Variant in D? When I was new to the language, I thought it was a great idea, but then I discovered D templates, so now I never use it.
Jan 30 2009
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
dsimcha wrote:
 == Quote from Yigal Chripun (yigal100 gmail.com)'s article
 Static languages can have Variant/box types that'll give most of the
 same functionality of dynamic languages. so, instead of instantiating
 list!(int), list!(string), etc, you can get one list!(Variant)..

 The real difference is that static languages have mostly read-only RTTI.
 (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at
 run-time, for example you can add/remove/alter methods in a class and
 affect all instances of that class, or alter a specific instance of a
 class. This cannot be done in a static language.
Out of curiosity, does anyone actually use Variant in D? When I was new to the language, I thought it was a great idea, but then I discovered D templates, so now I never use it.
The only case where I've really used it* was, not coincidentally, the reason I wrote it in the first place: a generalised CVar system for a game engine.

Really, you can pretty easily get away with never needing it. If you want runtime polymorphism and you're only storing class instances, you can just use Object instead. Variant is really only useful if you want to store non-class types as well without having to Box them, or if you really want value semantics.

Still, it's cool that it works as well as it does... :D

  -- Daniel

* I'm referring to Tango's Variant, not Phobos'.
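For illustration, a minimal sketch with Phobos' std.variant (Daniel is talking about the Tango Variant he wrote, but the point about value types vs. Object is the same):

```d
import std.variant;

void main()
{
    Variant v = 42;                  // a value type, no Object/Box wrapper needed
    assert(v.get!int == 42);

    v = "fred";                      // the same slot may later hold a string
    assert(v.get!string == "fred");

    // With Object instead, only class instances would fit:
    //   Object o = 42;              // error: int is not a class type
}
```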
Jan 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, Jan 30, 2009 at 11:35 PM, Daniel Keep
<daniel.keep.lists gmail.com> wrote:
 Out of curiosity, does anyone actually use Variant in D?  When I was new to the
 language, I thought it was a great idea, but then I discovered D templates, so
now
 I never use it.
The only case where I've really used it* was, not coincidentally, the reason I wrote it in the first place: a generalised CVar system for a game engine.

Really, you can pretty easily get away with never needing it. If you want runtime polymorphism and you're only storing class instances, you can just use Object instead. Variant is really only useful if you want to store non-class types as well without having to Box them, or if you really want value semantics.

Still, it's cool that it works as well as it does... :D

  -- Daniel

* I'm referring to Tango's Variant, not Phobos'.
Does Tango's Variant have a fixed type? It seems the std2 Variant doesn't really care what the type of the thing you stuff into it is, as long as it fits in the memory space allotted. How is that useful? What's the use case for needing something that can be either 2.4f or "fred"? (Sorry, I don't know what a "CVar" system is...)

What I actually needed was something with a fixed internal type that could expose its value in a flexible way via templated get/set routines. But for me a float property is never going to mutate into a string property.

--bb
Jan 30 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Bill Baxter wrote:
 Does Tango's Variant have a fixed type?
No; it's like Phobos'.
 It seems the std2 Variant doesn't really care what the type of the
 thing you stuff in it is, as long as it fits in the memory space
 allotted.   How is that useful?   What's the use case for needing
 something that can be either 2.4f or "fred"?    (Sorry I don't know
 what a "CVar" system is...)
Think about Quake, or anything based on Id's engines. CVars are basically global variables you can set from the in-game console.

The original problem was this: "I want a hash indexed by string going to... um... er... anything!" And thus, Variant was born. Incidentally, I've since changed to a design involving callbacks, so there you go.

I suppose that it's, in a way, like D's support for typesafe variadic functions; except it only takes one value. :P
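The "hash indexed by string going to anything" can be sketched like this with Phobos' Variant (the cvar names here are made up for illustration):

```d
import std.variant;

void main()
{
    // One table, heterogeneous values: the original CVar problem.
    Variant[string] cvars;

    cvars["r_fullscreen"] = Variant(true);
    cvars["cl_name"]      = Variant("fred");
    cvars["s_volume"]     = Variant(2.4f);

    assert(cvars["cl_name"].get!string == "fred");
    assert(cvars["r_fullscreen"].get!bool);
    assert(cvars["s_volume"].get!float == 2.4f);
}
```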
 What I actually needed was something
 with a fixed, internal type that could expose its value in a flexible
 way via templated get/set routines.  But for me a float property is
 never going to mutate into a string property.
 
 --bb
OK, your turn: why would you want something that wraps a single type in itself and has to be accessed via templates? Couldn't you just... use the type you want with templates?

  -- Daniel
Jan 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Sat, Jan 31, 2009 at 11:48 AM, Daniel Keep
<daniel.keep.lists gmail.com> wrote:
 Bill Baxter wrote:
 Does Tango's Variant have a fixed type?
No; it's like Phobos'.
 It seems the std2 Variant doesn't really care what the type of the
 thing you stuff in it is, as long as it fits in the memory space
 allotted.   How is that useful?   What's the use case for needing
 something that can be either 2.4f or "fred"?    (Sorry I don't know
 what a "CVar" system is...)
Think about Quake, or anything based on Id's engines. CVars are basically global variables you can set from the in-game console.

The original problem was this: "I want a hash indexed by string going to... um... er... anything!" And thus, Variant was born. Incidentally, I've since changed to a design involving callbacks, so there you go.
Ok I see.
 I suppose that it's, in a way, like D's support for typesafe variadic
 functions; except it only takes one value. :P

 What I actually needed was something
 with a fixed, internal type that could expose its value in a flexible
 way via templated get/set routines.  But for me a float property is
 never going to mutate into a string property.

 --bb
OK, your turn: why would you want something that wraps a single type in itself and has to be accessed via templates? Couldn't you just... use the type you want with templates?
I'm using it like a property in a GUI. The property is float or bool or whatever; I just need to be able to get and set the value in some generic way. So at the least it's got to have conversions to and from string: "to" for showing in the GUI, "from" for turning the user's edit back into a value. Additionally, it would be nice for it not to be too much of a stickler for exact type: to be able to convert to and from int even if it's really a float, for example. I thought that was the kind of problem Variant was supposed to solve, but it seems not quite.

My original vision was that the base Property type would have template methods pretty much like Variant's: get(T), opAssign(T). I don't see how to make that work without having a base class that contains the actual value in something like a Variant. So right now a sketch of the design is an abstract Property, with concrete Properties of specific type derived from it. Like so:

    class Property {
        Variant value;
        T get(T)() { return value.get!(T)(); }
        void opAssign(T)(T v) { value = v; }
        abstract void fromString(string str);
    }

    class PropertyT(T) : Property {
        void fromString(string str) { ... }
    }

So really it's pretty much like your CVar case. But the difference is that a Property should only be one particular thing for its entire life, whereas the thing pointed to by "foo" in your hash map can change type.

--bb
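For reference, a compilable take on this sketch using Phobos' std.variant; coerce supplies the loose int/float conversion wished for above, and the filled-in details are illustrative, not Bill's actual code:

```d
import std.variant;
import std.conv : to;

// Compilable version of the Property sketch (Phobos' std.variant).
abstract class Property
{
    Variant value;

    T get(T)() { return value.coerce!T; }   // coerce allows int <-> float
    void opAssign(T)(T v) { value = v; }

    abstract void fromString(string str);
}

class PropertyT(T) : Property
{
    override void fromString(string str) { value = str.to!T; }
}

void main()
{
    Property p = new PropertyT!float;

    p = 2.0f;
    assert(p.get!float == 2.0f);
    assert(p.get!int == 2);        // coerced to int despite float storage

    p.fromString("3.5");
    assert(p.get!float == 3.5f);
}
```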
Jan 30 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Bill Baxter wrote:
 [snip]
This is really interesting, since you've almost described the solution I moved to after Variants (which, rather ironically, still uses Variants, but that's just an implementation detail now :P).

The problem for me was that I wanted to be able to expose various internal bits of state to the console by string name. I also wanted to KNOW when something had been changed. The console had to be able to get, set and copy those values as both strings and their native types.

Basically, random parts of the engine and game code could register a block of callback functions with the central cvar registry; the struct looks like this:

    struct ConfigVarCallbacks {
        Variant delegate() getValue;
        void delegate(Variant) setValue;
        Variant delegate(char[]) fromRepr;
        char[] delegate(Variant) toRepr;
    }

The above was automatically generated by templates. Incidentally, THAT was what spurred me to write Tango's to!(T) template. :D

It was used like this (mostly; I think I lost the implementation of cvars, but the templates and CTFE underlying it are still there):

    class Window {
        mixin(cvars( `
            uint width : window.width = 800;
            uint height : window.height = 600;
        ` ));

        this() {
            _cvars_register;
        }
    }

This would generate the storage for the cvar as a member of Window, generate the callback functions, and then do all the registration boilerplate. With that, you could talk to the config object to play with those values:

    {
        config.setVarString("window.width", "1024");
        config.setVar("window.height", 768u);

        writefln("Window size: %sx%s",
            config.getVar("window.width").get!(uint),
            config.getVarString("window.height"));
    }

And of course, the console can now get/set values from raw strings, or copy values between cvars in their native type via Variants.

The one thing that this system never accounted for was user-created cvars, which could potentially be... Variants. :P

  -- Daniel
Jan 30 2009
parent Bill Baxter <wbaxter gmail.com> writes:
On Sat, Jan 31, 2009 at 2:52 PM, Daniel Keep
<daniel.keep.lists gmail.com> wrote:
 Bill Baxter wrote:
 [snip]
Thanks for taking the time to explain your design. Some interesting differences there, like your exposing the variant directly instead of forwarding to it.

I was trying to think how I could make my Property interface into an actual D interface when templates are part of the interface, and those can't be virtual. But the answer is just what you're doing: put the templates inside a struct -- expose the Variant in the interface and let *it* provide the templates.
 This is really interesting, since you've almost described the solution I
 moved to after Variants (with rather ironically still uses Variants, but
 that's just an implementation detail now :P).
Sounds like it's really the same problem, so maybe it's not so surprising that the solutions look similar. You've got a console from which users can view and set state; I've got a properties panel. Basically the same thing.

--bb
Jan 31 2009
prev sibling parent grauzone <none example.net> writes:
dsimcha wrote:
 == Quote from Yigal Chripun (yigal100 gmail.com)'s article
 Static languages can have Variant/box types that'll give most of the
 same functionality of dynamic languages. so, instead of instantiating
 list!(int), list!(string), etc, you can get one list!(Variant)..

 The real difference is that static languages have mostly read-only RTTI.
 (Java provides a very limited capability to reload classes, IIRC)
 a scripting language allows you to manipulate Types and instances at
 run-time, for example you can add/remove/alter methods in a class and
 affect all instances of that class, or alter a specific instance of a
 class. This cannot be done in a static language.
Out of curiosity, does anyone actually use Variant in D? When I was new to the language, I thought it was a great idea, but then I discovered D templates, so now I never use it.
I use a Box (a Variant with fewer features) for a kind of command line parser.

First, one can register a callback to handle a command. This callback takes a Box[] to carry the command arguments. When you register a command, you pass a TypeInfo[] to tell the parser how many arguments there are and what type they must have (the Box[] types will be exactly the same as in the TypeInfo[]). The catch is that the parser can automatically display help messages and useful error messages if parsing fails; the command callback doesn't need to care about this.

Second, you can register argument parsers. An argument parser is just a simple callback again: it takes a string and returns a Box. So you can add parsers for types which are completely unknown to the command line parser code.

This is nice and simple. How would you do it without Box?
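A hedged sketch of this scheme, with Phobos' Variant standing in for Box; all the names below are made up for illustration:

```d
import std.variant;
import std.conv : to;

alias ArgParser = Variant delegate(string);   // string -> Box, per type
alias Command   = void delegate(Variant[]);   // the command callback

struct CommandLineParser
{
    ArgParser[TypeInfo] parsers;     // registered argument parsers
    Command[string]     commands;    // registered command callbacks
    TypeInfo[][string]  signatures;  // expected argument types per command

    void registerParser(TypeInfo t, ArgParser p) { parsers[t] = p; }

    void registerCommand(string name, TypeInfo[] sig, Command cb)
    {
        signatures[name] = sig;
        commands[name]   = cb;
    }

    void run(string name, string[] rawArgs)
    {
        auto sig = signatures[name];
        assert(rawArgs.length == sig.length, "wrong number of arguments");

        Variant[] args;
        foreach (i, t; sig)
            args ~= parsers[t](rawArgs[i]); // a failure here is where the
                                            // parser can report nice errors
        commands[name](args);
    }
}

void main()
{
    CommandLineParser cl;
    cl.registerParser(typeid(int), (string s) => Variant(s.to!int));

    int result;
    cl.registerCommand("add", [typeid(int), typeid(int)],
        (Variant[] a) { result = a[0].get!int + a[1].get!int; });

    cl.run("add", ["2", "3"]);
    assert(result == 5);
}
```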
Jan 31 2009
prev sibling next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple
 applications have access to a single dll at runtime.  It appears that
 such support would be quite difficult to implement and moves in the
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shares the memory pages with other processes.
I'm sure users of DDL would love for you to submit a patch. :)
 Of course, this wouldn't work if the code both isn't position
 independent, and needs to be relocated to a different base address. But
 that's also the case with operating system supported dynamic shared
 objects.
 
 It does do runtime linking, however, which is extremely useful for
 certain situations... specifically any sort of application that needs
 a plugin architecture for D (ie.. it can link with libraries and
 object files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?
A plugin architecture, by definition, is to let third parties add code to your application. This rather precludes being able to simply link it in.
 When it's a commercial program, the DLL plugin approach probably
 wouldn't work anyway: in order to enable others to compile plugins, you
 would need to expose your internal "headers" (D modules). Note that
 unlike in languages like C/C++, this would cause internal modules to be
 exposed too, even if they are not strictly needed. What would you do to
 avoid this? Maintain a separate set of import modules?
As Alexander Pánek said, you can just use .di files, which the compiler can create from existing .d modules. Heck, you can just use regular .d modules and stub out the implementations. There's no reason why you'd need to release your code to third parties.

Templates are a different matter, but then C++ has the same problem. Whether or not you want to release your templates as part of the SDK really depends on what they are. Templates + interfaces make a good pair.
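For example (module and function names made up), dmd's -H switch turns a full module into a declarations-only .di interface file:

```d
// engine.d -- the implementation module you keep private
module engine;

int secretAlgorithm(int x)
{
    return x * 31 + 7;   // the body you don't want to ship
}

// `dmd -H engine.d` then emits roughly this engine.di, which is all a
// plugin author needs in order to compile against your library:
//
//     module engine;
//     int secretAlgorithm(int x);
```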
 I think a purely extern(C) based interface would be better in these cases.
 
 In fact, if you rely on the D ABI for dynamic linking, you'll probably
 have the same trouble as with C++ dynamic linking. For example, BeOS had
 to go through this to make sure their C++ based API maintains ABI
 compatibility:
 
 http://homepage.corbina.net/~maloff/holy-wars/fbc.html
 
 I'm not sure if the D ABI improves the situation. At any rate, it
 doesn't sound like a good idea.
There are some interesting problems in that article that maybe we should ask Walter about. For one, being able to control virtual function ordering is an interesting idea.

But by and large, these are the same problems you'll get with ANY ABI. If the size of a struct changes in a C ABI, you're hosed just as badly as if the size of a struct, or the fields of a class, changes in D.

  -- Daniel
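The struct-size hazard can be made concrete in a few lines (PointV1/PointV2 are made-up names):

```d
// Code compiled against V1 bakes the struct's size and field offsets
// into the binary, so adding a field in a later version silently breaks it.
struct PointV1 { int x; int y; }
struct PointV2 { int x; int y; int z; } // one "harmless" added field

void main()
{
    static assert(PointV1.sizeof == 8);
    static assert(PointV2.sizeof == 12);
    // A caller built against V1 that walks a PointV2[] from a newer DLL
    // assumes a stride of 8 instead of 12 and reads garbage from the
    // second element onward. The same applies to class field offsets.
}
```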
Jan 28 2009
parent reply grauzone <none example.net> writes:
Daniel Keep wrote:
 
 grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple
 applications have access to a single dll at runtime.  It appears that
 such support would be quite difficult to implement and moves in the
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes.
I'm sure users of DDL would love for you to submit a patch. :)
 Of course, this wouldn't work if the code both isn't position
 independent, and needs to be relocated to a different base address. But
 that's also the case with operating system supported dynamic shared
 objects.

 It does do runtime linking, however, which is extremely useful for
 certain situations... specifically any sort of application that needs
 a plugin architecture for D (ie.. it can link with libraries and
 object files at runtime) that is gc and exception friendly.  
I never understood why this is needed. Can't they simply compile the plugins into the main program?
A plugin architecture, by definition, is to let third parties add code to your application. This rather precludes being able to simply link it in.
But why go through the trouble of doing dynamic linking? You still can have a plugin architecture without dynamic linking.
 When it's a commercial program, the DLL plugin approach probably
 wouldn't work anyway: in order to enable others to compile plugins, you
 would need to expose your internal "headers" (D modules). Note that
 unlike in languages like C/C++, this would cause internal modules to be
 exposed too, even if they are not strictly needed. What would you do to
 avoid this? Maintain a separate set of import modules?
As Alexander Pánek said; you can just use .di files, which the compiler can create from existing .d modules.
See my reply to his posting.
 Heck, you can just use regular .d modules and stub out the
 implementations.  There's no reason why you'd need to release your code
 to third parties.
Sure, if you like tedious work. And you have to be extremely careful. Writing an extern(C) based interface won't be much more work, but the result will be much more robust, and it gives you some extra advantages, like enabling other languages to link to it.
 Templates are a different matter, but then C++ has the same problem.
 Whether or not you want to release your templates as part of the SDK
 really depends on what they are.  Templates + interfaces make a good pair.
.NET can export compiled generics, though. Of course, they are not as flexible as D or C++ templates.
 I think a purely extern(C) based interface would be better in these cases.

 In fact, if you rely on the D ABI for dynamic linking, you'll probably
 have the same trouble as with C++ dynamic linking. For example, BeOS had
 to go through this to make sure their C++ based API maintains ABI
 compatibility:

 http://homepage.corbina.net/~maloff/holy-wars/fbc.html

 I'm not sure if the D ABI improves the situation. At any rate, it
 doesn't sound like a good idea.
There's some interesting problems at that article that maybe we should ask Walter about. For one, being able to control virtual function ordering is an interesting idea. But by and large, these are the same problems you'll get with ANY ABI. If the size of a struct changes in a C ABI, you're hosed just as bad as if the size of a struct in D, or the fields of a class.
I'm not sure, but I think you could solve many of these problems by using linker symbols instead of compile time constants for sizes and field offsets. Still, this doesn't work in D, because CTFE needs them as real compile time constants. And it'd probably be less efficient.
   -- Daniel
Jan 28 2009
parent "Nick Sabalausky" <a a.a> writes:
"grauzone" <none example.net> wrote in message 
news:glphtn$g0f$1 digitalmars.com...
 Daniel Keep wrote:
 Templates are a different matter, but then C++ has the same problem.
 Whether or not you want to release your templates as part of the SDK
 really depends on what they are.  Templates + interfaces make a good 
 pair.
.NET can export compiled generics, though. Of course, they are not as flexible as D or C++ templates.
C#'s generics are designed in such a way that objects of the generic type 'T' can only be used in ways explicitly allowed by T's explicit constraints. For instance, performing a comparison on an object of type T is a compile-time error unless the programmer places a constraint on T that T must be something that implements the IComparable interface (which includes all of the primitives).

As far as I can tell, the only limitation this inherently forces on generics is that the language/library must provide a constraint for anything a generic type might need to do with a T. The lack of arithmetic in C#'s generics is just simply MS's constant refusal to provide either an IArithmetic counterpart to IComparable or individual operator constraints. So I don't see that the restrictiveness of C#'s generics is an inherent consequence of its compiled generics being exportable.
Jan 28 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
grauzone wrote:
 John Reimer wrote:
 ddl does not work for memory sharing like normal dll's, where multiple 
 applications have access to a single dll at runtime.  It appears that 
 such support would be quite difficult to implement and moves in the 
 direction of operating system features.
Couldn't this be achieved by simply mmap()-ing the file contents into memory? mmap() normally shared the memory pages with other processes.
DDL, after loading the library into RAM, has to fix up type information. This requires altering the loaded data. Therefore, DDL can't share everything, and the things it can share are fragmented.
Jan 28 2009