
digitalmars.D - Is metaprogramming useful?

reply "Frank Benoit (keinfarbton)" <benoit tionex.removethispart.de> writes:
In generic programming we can do wonderful things like container classes
and template functions.

Using more advanced features leads to metaprogramming, like the
compile-time regex.

I wonder if a compile-time regex is really of practical use.

If I have a lot of compile-time regex patterns, it is very nice to get a
compiler error for invalid patterns.

But the disadvantages are:
- compile time increases fast
- code size increases with every new pattern (does it?).

With a parser like Spirit in C++ it would be the same. In the Spirit
FAQ, a 78-rule parser was reported to take 2 hours to compile.

Well, D might be faster, but it shows that the compile time can increase
very fast.

In both cases an external regex or parser generator would make more sense.

Now my questions:
1.) Is metaprogramming really useful or only kind of hype?
2.) Are there examples where a metaprogramming solution is the best way
to solve it?
Nov 27 2006
next sibling parent reply Sean Kelly <sean f4.ca> writes:
Frank Benoit (keinfarbton) wrote:
 1.) Is metaprogramming really useful or only kind of hype?
In-language code generation has an advantage over using a standalone preprocessor to perform the same task. And generated code has the potential to contain fewer bugs than hand-written code, as well as reduce coding time for a project. Finally, template code in general combined with common optimizer techniques can result in extremely fast code because it is essentially equivalent to hand-optimized code in many cases. How fast? Bjarne did a presentation at SDWest regarding some real-world applications where C++ was shown to outperform hand-tuned C and even FORTRAN by an order of magnitude for numeric calculations, and the reason was entirely attributed to the extensive use of template code. Granted, metaprogramming is but a subset of template programming, but sometimes the line between the two is fairly blurry.
 2.) Are there examples where a metaprogramming solution is the best way
 to solve it?
Blitz++ uses metaprogramming to gain a measurable speed increase in executable code. But I think the real power of metaprogramming is contingent upon how well it integrates with the run-time language. In C++, this integration is really fairly poor. In D it's much better, but could still be improved. The ideal situation would be if the syntax of a normal function call could be optimized using metaprogramming if some of the arguments were constants, rather than requiring a special syntax to do so. Two fairly decent examples of metaprogramming in C++ are expression templates and lambda functions (Boost). Sean
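A hedged sketch (not Sean's code) of that "constant arguments" point, in present-day D; the pow functions are purely illustrative. The template version can be unrolled because the exponent is a compile-time value, but today the caller must opt in with the special pow!(3)(x) syntax rather than just writing pow(x, 3):

// Compile-time exponent: the trip count is a constant, so the
// optimizer can unroll and fold the loop away.
double pow(int n)(double x)
{
    double result = 1;
    foreach (i; 0 .. n)
        result *= x;
    return result;
}

// Run-time exponent: same algorithm, ordinary call syntax, real loop.
double pow(double x, int n)
{
    double result = 1;
    foreach (i; 0 .. n)
        result *= x;
    return result;
}

// usage: pow!(3)(x) is specialised while compiling; pow(x, 3) is not.
unittest
{
    assert(pow!(3)(2.0) == 8.0);
    assert(pow(2.0, 3) == 8.0);
}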
Nov 27 2006
next sibling parent "Craig Black" <cblack ara.com> writes:
Good response.  This info should be on a publicly available FAQ.

With respect to compile time regex, the full power of this will not be 
available until D can better optimize the code that is generated.  I've 
never seen any benchmark comparisons with the run-time regexes and I assume 
its because it doesn't end up being much faster. (Correct me if I'm wrong 
here.)  If this is the case, it is not because metaprogramming is not 
useful, it is because DMD doesn't do certain optimizations yet.

That said, there will always be a trade off between run-time and 
compile-time performance when dealing with metaprogramming.  In a systems 
programming language, usually run-time performance is more important.  Thus 
metaprogramming definitely has merit in a language like D.  However, if you 
want faster compiles, use run-time code.

-Craig

"Sean Kelly" <sean f4.ca> wrote in message 
news:ekerg7$32b$1 digitaldaemon.com...
 Frank Benoit (keinfarbton) wrote:
 1.) Is metaprogramming really useful or only kind of hype?
In-language code generation has an advantage over using a standalone preprocessor to perform the same task. And generated code has the potential to contain fewer bugs than hand-written code, as well as reduce coding time for a project. Finally, template code in general combined with common optimizer techniques can result in extremely fast code because it is essentially equivalent to hand-optimized code in many cases. How fast? Bjarne did a presentation at SDWest regarding some real-world applications where C++ was shown to outperform hand-tuned C and even FORTRAN by an order of magnitude for numeric calculations, and the reason was entirely attributed to the extensive use of template code. Granted, metaprogramming is but a subset of template programming, but sometimes the line between the two is fairly blurry.
 2.) Are there examples where a metaprogramming solution is the best way
 to solve it?
Blitz++ uses metaprogramming to gain a measurable speed increase in executable code. But I think the real power of metaprogramming is contingent upon how well it integrates with the run-time language. In C++, this integration is really fairly poor. In D it's much better, but could still be improved. The ideal situation would be if the syntax of a normal function call could be optimized using metaprogramming if some of the arguments were constants, rather than requiring a special syntax to do so. Two fairly decent examples of metaprogramming in C++ are expression templates and lambda functions (Boost). Sean
Nov 27 2006
prev sibling parent Sean Kelly <sean f4.ca> writes:
Just a few corrections, as I was still waking up when I wrote this.

Sean Kelly wrote:
 Frank Benoit (keinfarbton) wrote:
 1.) Is metaprogramming really useful or only kind of hype?
In-language code generation has an advantage over using a standalone preprocessor to perform the same task. And generated code has the potential to contain fewer bugs than hand-written code, as well as reduce coding time for a project. Finally, template code in general combined with common optimizer techniques can result in extremely fast code because it is essentially equivalent to hand-optimized code in many cases.
Equivalent to or better than. Inlining is a huge part of why template code is as fast as it is, and even hand tuning typically results in little manually inlined code--it's too difficult to maintain.
 How fast?  Bjarne did a presentation at SDWest regarding some 
 real-world applications where C++ was shown to outperform hand-tuned C 
 and even FORTRAN by an order of magnitude for numeric calculations, and 
 the reason was entirely attributed to the extensive use of template 
 code.
I believe it actually outperformed hand-tuned C by an order of magnitude and FORTRAN by a smaller margin, but it was still faster. Bjarne attributed the results to templates, but inlining obviously played a huge part. Sean
Nov 28 2006
prev sibling next sibling parent Don Clugston <dac nospam.com.au> writes:
Frank Benoit (keinfarbton) wrote:
 In generic programming we can do wonderful things like container classes
 and template functions.
 
 Using more advanced feature will result in metaprogramming, like the
 compile time regex.
 
 I wonder if compile time regex is really a thing of practical use.
 
 If I a lot of compile time regex pattern, it is very nice to have
 compiler error for invalid patterns.
 
 But disadvantages are:
 - compile time is increasing fast
 - code size is increasing with every new pattern (is it?).
Not necessarily. With the compile-time regex I'm working on, all it does is convert one human-readable string literal (eg, "ab+") into a byte-code string literal. There's no code generation. Executable code size should decrease very slightly, because the regex compilation code isn't included, and there's a bit less memory manipulation. However, the size of the obj file does increase.

If 'early discard' was implemented for templates*, compilation speed would be extremely fast, and there'd be no effect on obj file size.

* 'early discard' = if a template contains only const values, evaluate it, then immediately delete it from the symbol table. Apply this recursively.
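A minimal sketch of the idea in present-day D (using compile-time function evaluation rather than the recursive templates available in 2006); compilePattern and its one-byte op-codes are hypothetical stand-ins, not the actual library:

// Translate a human-readable pattern into a "byte-code" string entirely
// at compile time, so only the resulting literal ends up in the program.
string compilePattern(string pattern)
{
    string code;
    foreach (ch; pattern)
    {
        if (ch == '+')
            code ~= '\x02';        // REPEAT the previous instruction
        else
            code ~= "\x01" ~ ch;   // MATCH the literal character
    }
    return code;
}

template RegexBytecode(string pattern)
{
    // evaluated while compiling; a real compiler would reject
    // malformed patterns right here, as a compile error
    enum RegexBytecode = compilePattern(pattern);
}

// usage: the pattern "ab+" never reaches any run-time regex compiler
enum bytecode = RegexBytecode!("ab+");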
 With a parser like Spirit in C++ it would be the same. In the Spirit
 FAQ, a 78 rule parser was reported with 2 hours compile time.
 
 Well, D might be faster, but it shows that the compile time can increase
 very fast.
AFAIK, D is fast because its import system is so much faster than the #include system, and the name lookup rules are simpler. It's like the difference between quicksort and bubblesort; as the complexity increases, the advantage of D becomes greater and greater.
 In both cases an external regex or parser generator would make more sense.
 
 Now my questions:
 1.) Is metaprogramming really useful or only kind of hype?
 2.) Are there examples where a metaprogramming solution is the best way
 to solve it?
I hope to provide some <g>.
Nov 27 2006
prev sibling next sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Frank Benoit (keinfarbton) wrote:
 In generic programming we can do wonderful things like container classes
 and template functions.
 
 Using more advanced feature will result in metaprogramming, like the
 compile time regex.
 
 I wonder if compile time regex is really a thing of practical use.
 
 If I a lot of compile time regex pattern, it is very nice to have
 compiler error for invalid patterns.
 
 But disadvantages are:
 - compile time is increasing fast
 - code size is increasing with every new pattern (is it?).
 
 With a parser like Spirit in C++ it would be the same. In the Spirit
 FAQ, a 78 rule parser was reported with 2 hours compile time.
 
 Well, D might be faster, but it shows that the compile time can increase
 very fast.
 
 In both cases an external regex or parser generator would make more sense.
 
 Now my questions:
 1.) Is metaprogramming really useful or only kind of hype?
Naturally, I will speak to what I know, which is Pyd.

A goal of Pyd is to wrap as much of the D language as possible, as easily as possible. This requires meta-programming techniques: Template functions, function meta-info, typeof, tuples. Thanks to these techniques, I can completely wrap a D function without knowing its return type, argument types, or even its name, by just saying:

def!(foo);

"Wrapping" a function implies generating a function (with C linkage) that accepts (in the case of the Python API) some PyObject* arguments and returns a PyObject*; somehow calls the function with the PyObject* arguments; and somehow converts the function's return value to a PyObject*. All of this can occur automatically, given only an alias to the function.

The generated code is probably only slightly slower than if the wrapper had been written by hand (though I have yet to test Pyd's performance). It might even be faster, if the hand-written code uses PyArg_ParseTuple to convert the arguments. (Which determines the types to convert to at runtime, using a format string. Pyd determines the types at compile-time.)

And, regarding compilation times: Pyd compiles nigh-instantly. Boost::Python is notorious for long (sometimes in the range of hours) compilation times. D seems to be just plain better at this than C++.
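For contrast, a hedged sketch of the hand-written route mentioned above, the PyArg_ParseTuple boilerplate that def!(foo) spares you. The extern(C) declarations stand in for a Python C API binding, and foo is an assumed int foo(int x, double y):

// Minimal stand-ins for a Python C API binding (not Pyd's own modules).
struct PyObject;
extern(C) int PyArg_ParseTuple(PyObject* args, const(char)* format, ...);
extern(C) PyObject* Py_BuildValue(const(char)* format, ...);

int foo(int x, double y) { return x + cast(int) y; }

// The wrapper one would otherwise write by hand; def!(foo) generates the
// equivalent, but from compile-time type information instead of a format
// string.
extern(C) PyObject* foo_wrapped(PyObject* self, PyObject* args)
{
    int x;
    double y;
    // "id" = one int, one double; the types are only checked at run time
    if (!PyArg_ParseTuple(args, "id", &x, &y))
        return null;
    return Py_BuildValue("i", foo(x, y));
}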
 2.) Are there examples where a metaprogramming solution is the best way
 to solve it?
I would say that using Pyd (or even Boost::Python) is much more pleasant than the raw Python/C API.

As soon as you start working with metaprogramming, you start wanting compile-time equivalents of things you have at runtime. I've started working on a compile-time writef-style formatter, for instance. (Due to bug 586, it only works with types at the moment, and not other compile-time values like ints and function aliases. But you can specify field widths, and so forth.) This thing is based heavily on stuff in Don Clugston's meta library (have you had the same idea, Don?), and serves to unify many of the string-conversion templates that are floating around (Don's Nameof, that itoa template, and so on).

-- 
Kirk McDonald
Pyd: Wrapping Python with D
http://pyd.dsource.org
Nov 27 2006
next sibling parent "Frank Benoit (keinfarbton)" <benoit tionex.removethispart.de> writes:
Thanks for all your answers.
I think the picture of what templates are good for is getting clearer.
Nov 27 2006
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Kirk McDonald wrote:
 Frank Benoit (keinfarbton) wrote:
 In generic programming we can do wonderful things like container classes
 and template functions.

 Using more advanced feature will result in metaprogramming, like the
 compile time regex.

 I wonder if compile time regex is really a thing of practical use.

 If I a lot of compile time regex pattern, it is very nice to have
 compiler error for invalid patterns.

 But disadvantages are:
 - compile time is increasing fast
 - code size is increasing with every new pattern (is it?).

 With a parser like Spirit in C++ it would be the same. In the Spirit
 FAQ, a 78 rule parser was reported with 2 hours compile time.

 Well, D might be faster, but it shows that the compile time can increase
 very fast.

 In both cases an external regex or parser generator would make more 
 sense.

 Now my questions:
 1.) Is metaprogramming really useful or only kind of hype?
Naturally, I will speak to what I know, which is Pyd. A goal of Pyd is to wrap as much of the D language as possible, as easily as possible. This requires meta-programming techniques: Template functions, function meta-info, typeof, tuples. Thanks to these techniques, I can completely wrap a D function without knowing its return type, argument types, or even its name, by just saying: def!(foo); "Wrapping" a function implies generating a function (with C linkage) that accepts (in the case of the Python API) some PyObject* arguments and returns a PyObject*; somehow calls the function with the PyObject* arguments; and somehow converts the function's return value to a PyObject*. All of this can occur automatically, given only an alias to the function. The generated code is probably only slightly slower than if the wrapper had been written by hand (though I have yet to test Pyd's performance). It might even be faster, if the hand-written code uses PyArg_ParseTuple to convert the arguments. (Which determines the types to convert to at runtime, using a format string. Pyd determines the types at compile-time.) And, regarding compilation times: Pyd compiles nigh-instantly. Boost::Python is notorious for long (sometimes in the range of hours) compilation times. D seems to be just plain better at this than C++.
 2.) Are there examples where a metaprogramming solution is the best way
 to solve it?
I would say that using Pyd (or even Boost::Python) is much more pleasant than the raw Python/C API. As soon as you start working with metaprogramming, you start wanting compile-time equivalents of things you have at runtime. I've started working on a compile-time writef-style formatter, for instance. (Due to bug 586, it only works with types at the moment, and not other compile-time values like ints and function aliases. But you can specify field widths, and so forth.) This thing is based heavily on stuff in Don Clugston's meta library (have you had the same idea, Don?),
Indeed I have. The tuple stuff makes this all really appealing. In particular, I've played around with a writef() equivalent which checks the parameters for 'easy' cases, and splits it off into:
-> single string only
-> strings, chars, and integers
-> floating point, arrays, and objects.

This way you can avoid linking in the floating-point conversion code if you're not using it, keeping the code size down. In fact, it ought to be possible to remove the usage of TypeInfo completely, moving it all into compile-time.

 and serves to unify many of the string-conversion templates that are
 floating around (Don's Nameof, that itoa template, and so on).
That was mine as well, BTW.
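A rough sketch of that compile-time split in present-day D; the output routines and the allSimple trait are hypothetical names, not Don's library:

import std.traits : isIntegral, isSomeChar, isSomeString;

// Hypothetical output routines standing in for the real formatters.
void writeSingleString(S)(S s) { /* emit the string directly */ }
void writeSimple(Args...)(Args args) { /* string/char/integer formatting only */ }
void writeFull(Args...)(Args args) { /* full formatting, including floats */ }

// True when every argument is a string, a character, or an integer.
template allSimple(Args...)
{
    static if (Args.length == 0)
        enum allSimple = true;
    else
        enum allSimple = (isSomeString!(Args[0]) || isSomeChar!(Args[0])
                          || isIntegral!(Args[0]))
                         && allSimple!(Args[1 .. $]);
}

// The split happens while compiling, so a program that never prints a
// floating-point value never pulls in the float conversion code.
void write(Args...)(Args args)
{
    static if (Args.length == 1 && isSomeString!(Args[0]))
        writeSingleString(args[0]);    // cheapest case: a single string
    else static if (allSimple!Args)
        writeSimple(args);             // strings, chars, and integers
    else
        writeFull(args);               // floating point, arrays, objects
}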
Nov 28 2006
parent Paolo Invernizzi <arathorn NOSPAM_fastwebnet.it> writes:
We really need a partial rewrite of Phobos, at least the low-level,
always-used stuff, in a template-based way.

It would be good to have it in 1.0, at least because a lot of D
newbies will LOOK INTO the standard library sources searching for great D
code.

my 2c
---
Paolo Invernizzi

Don Clugston wrote:
 Indeed I have. The tuple stuff makes this all really appealing. In 
 particular, I've played around with a writef() equivalent which checks 
 the parameters for 'easy' cases, and splits it off into:
 -> single string only
 -> strings, chars, and integers
 -> floating point, arrays, and objects.
 
 This way you can avoid linking in the floating-point conversion code if 
 you're not using it, keeping the code size down. In fact, it ought to be 
 possible to remove the usage of TypeInfo completely, moving it all into 
 compile-time.
Nov 28 2006
prev sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Mon, 27 Nov 2006 14:28:15 +0100, "Frank Benoit (keinfarbton)"
<benoit tionex.removethispart.de> wrote:

Well, D might be faster, but it shows that the compile time can increase
very fast.
When doing the metaprogramming, the compiler is basically acting as an interpreter as opposed to a compiler. It is nothing more advanced or freaky than that.

A lot of work is already done in interpreted languages already. In fact, since that term is often applied to languages that actually compile to virtual machine bytecode (e.g. Python) and since other languages that compile to virtual machine bytecode are considered compilers (e.g. Java) there is no clear line between compilers and interpreters these days. Interpreters often do a bit of pre-compiling (to intermediate code), and compilers often do a bit of interpreting (precalculating constant expressions, as well as metaprogramming).

The reason C++ metaprogramming is slow is (1) because it has never been a big priority for compiler developers, and (2) because C++ template metaprogramming forces programmers to work in indirect and inefficient ways, substituting specialisation for simple conditionals and recursion for simple loops.

Support metaprogramming properly and it need be no slower than, for instance, using a scripting language for code generation - an annoyingly common thing. In fact, the compiler could simply generate a compiled code generator, run it, and then pick up the output for the next compilation stage if performance was that big a deal. This could give better performance than writing code generation tools in C++, since the compiler knows its own internal representations and could create compiled code generators that generate this directly rather than generating source code.

Getting back to scripting languages, Perl was originally described as 'programmers glue' or 'programmers duct tape' since it was written as an extension to awk, which was widely used for simple code generation. The web page generation thing obviously became the real Perl killer app, but that was a bit later.

So clearly code generation is quite a common necessity, or why would people write tools to make it easier? People have even used XSLT to generate source code - strange, but true.

But using a scripting language for code generation is a bad thing, since it means you have to learn two languages to develop one application. Why not just have all the tools you need in one language?
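To make point (2) concrete with a hedged example (not from the post above): the same compile-time computation written the indirect C++-style way, with recursion standing in for a loop, and written directly as a loop that the compiler simply interprets. Present-day D accepts both:

// Specialisation and recursion instead of an if and a loop:
template Factorial(uint n)
{
    static if (n <= 1)
        enum Factorial = 1u;
    else
        enum Factorial = n * Factorial!(n - 1);
}

// The direct style: an ordinary function the compiler evaluates itself.
uint factorial(uint n)
{
    uint result = 1;
    foreach (i; 2 .. n + 1)
        result *= i;
    return result;
}

static assert(Factorial!(5) == 120);  // computed by template expansion
static assert(factorial(5) == 120);   // computed by compile-time interpretation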
In both cases an external regex or parser generator would make more sense.

Now my questions:
1.) Is metaprogramming really useful or only kind of hype?
2.) Are there examples where a metaprogramming solution is the best way
to solve it?
These are really the same questions, since metaprogramming is useful if and only if it is the best way to solve certain problems. So...

Consider that the whole point of a library is that as much complexity as possible should be handled once (in the library itself) rather than over-and-over again by the users of the library. But at the same time, that shouldn't add unnecessary run-time overheads. And also, you shouldn't duplicate code - ideally, each algorithm should be written once with all the needed flexibility - not in several separately maintained versions.

Well, even generic data structures can reach a point where you need 'metaprogramming' to handle them properly - where you want lots of flexibility. Managing bloat can lead to even more complexity, since different applications require different trade-offs between code size and speed, so that can be a few more options there.

How many options can there be? Well, maybe you have a particular data structure that you need, but sometimes you need it in memory and sometimes you need it in a disk file. Sometimes, you even need it split between memory and a file (e.g. specialised caching, or handling transactions with commit and rollback). Sometimes you need maximum performance and want to do any moves using memcpy, other times you want to support arbitrary objects and cannot assume that memcpy is safe so you need to use copy constructors and assignment operators etc.

And maybe there are optional features to your data structures - e.g. it's a tree data structure that can hold certain kinds of summary information in nodes to optimise certain search operations, but only if that summary information is useful for the current application. For example, you might want efficient subscripted access to an associative container - easy if each node knows how many items are in its subtree, impossible otherwise.

My experience is that doing this kind of thing in C++ is easy enough when dealing with the search and iteration algorithms, but insert and delete algorithms can get very complex to manage because there are so many variations. You end up with fragments of each algorithm, each with multiple variants, selected through specialisation. But that's a nightmare, so in practice you end up writing several separate libraries, increasing the bloat, and losing some of the flexibility that comes from having all the options in one library. Not to mention the maintenance headache from having several copies of each algorithm.

The D way - each algorithm remaining a single algorithm, but with a few static-if blocks - is a lot easier. For some things, you can even prototype with run-time ifs and convert to static-ifs later just by adding the word 'static' here and there.

Of course this sounds like just making libraries overcomplex, but the whole point of a library is that as much complexity as possible should be handled once (in the library itself) rather than over-and-over again by the users of the library. But at the same time, that shouldn't add unnecessary run-time overheads. But then again, the truth is that most programmers simply won't write libraries to be that flexible because of the complexity. That isn't how programming should be, though, it's just a limitation of current programming languages - you can't reasonably do some of the things you should be able to do easily.

Right now, I don't think I'd use a metaprogramming-based regular expression or parser library, and certainly not in C++.
A lot of current metaprogramming stuff is experimental, and not really to be used for serious work. Some of those experiments remind me of the old obfuscated C contest. But today's experiments are important - without them, tomorrow will be no better than today.

Or perhaps you'd rather be driving a horse and cart, since after all the earliest cars were clearly impractical and obviously horses were fine up until then, which PROVES there could never be a real need for cars, eh! ;-)

-- 
Remove 'wants' and 'nospam' from e-mail.
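To give the "one algorithm plus a few static-if blocks" idea above a concrete shape, here is a hedged toy sketch (not code from any real library): a search tree whose subtree-count bookkeeping, the kind needed for subscripted access, only exists when the instantiation asks for it.

// One insert algorithm; the counting variant differs only by a static if.
struct Tree(T, bool keepCounts = false)
{
    static struct Node
    {
        T value;
        Node* left, right;
        static if (keepCounts)
            size_t subtreeSize = 1;   // only present when counts are wanted
    }

    Node* root;

    void insert(T value)
    {
        Node** link = &root;
        while (*link !is null)
        {
            Node* parent = *link;
            static if (keepCounts)
                parent.subtreeSize++; // maintained only in the counting variant
            link = value < parent.value ? &parent.left : &parent.right;
        }
        *link = new Node(value);
    }
}

// Tree!(int) and Tree!(int, true) share the same insert(); only the
// second can answer "what is the n-th smallest element" cheaply.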
Nov 27 2006
next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Steve Horne" <stephenwantshornenospam100 aol.com> wrote in message 
news:tqrmm25d9217h5r45hem4jnp2g1hi6oh7o 4ax.com...

 When doing the metaprogramming, the compiler is basically acting as an
 interpreter as opposed to a compiler. It is nothing more advanced or
 freaky than that.

 A lot of work is already done in interpreted languages already. In
 fact, since that term is often applied to languages that actually
 compile to virtual machine bytecode (e.g. Python) and since other
 languages that compile to virtual machine bytecode are considered
 compilers (e.g. Java) there is no clear line between compilers and
 interpreters these days. Interpreters often do a bit of pre-compiling
 (to intermediate code), and compilers often do a bit of interpreting
 (precalculating constant expressions, as well as metaprogramming).

 The reason C++ metaprogramming is slow is (1) because it has never
 been a big priority for compiler developers, and (2) because C++
 template metaprogramming forces programmers to work in indirect and
 inefficient ways, substituting specialisation for simple conditionals
 and recursion for simple loops.

 Support metaprogramming properly and it need be no slower than, for
 instance, using a scripting language for code generation - an
 annoyingly common thing. In fact, the compiler could simply generate a
 compiled code generator, run it, and then pick up the output for the
 next compilation stage if performance was that big a deal. This could
 give better performance than writing code generation tools in C++,
 since the compiler knows its own internal representations and could
 create compiled code generators that generate this directly rather
 than generating source code.
A little off the topic, but not really, as it spins off of the idea of a compiler as an interpreter. I always thought it would be an interesting exercise to make a language where metaprogramming is not only possible, but nearly as full-featured as the actual code. That is, templates (or whatever they'd be called, because they'd be far more advanced) would be a script for the compiler, which could even be compiled to bytecode. The input would be language constructs -- symbols, statements, expressions -- and the output would be code which could then be compiled. It'd basically be a scriptable compiler.

The problem that I always run into in this thought experiment is the syntax. I mean, for a lot of cases, template syntax is great. When you're writing a simple type-agnostic stack class, you just need to be able to substitute T for the type of the data everywhere; it's simple, straightforward, and easy to understand. Using another syntax for that would be kind of clumsy.

Maybe, then, this metaprogramming language would have several different constructs, just as the real language does. It might have a templating system for replacing types, symbols, and constants in code. It might then have other constructs for dynamic code modification and generation.

// The namespace is where the function will be generated
metafunction WrapFunction(symbol func, namespace owner = func.namespace)
{
    assert(func.isFunction, "WrapFunction: '" ~ func.nameof ~ "' is not a function");

    // Generation of identifiers from strings
    symbol wrappedName = symbol(func.nameof ~ "_wrapped");

    // Create a new function, and set its first param name to 's'
    int function(MDState) wrappedFunc = new function int(MDState);
    wrappedFunc.params[0].name = identifier("s");

    // An array of symbols for calling the real function
    symbol[] params;

    // The code for the generated function
    Statement[] code;

    foreach(i, param; func.params)
    {
        // Add the new param name to the symbol array
        params ~= symbol("param" ~ i);

        // VarDeclaration takes type, symbol of the name to declare,
        // and an initialization expression.
        // DotExp takes the two sides of the dot.
        // TemplateFunctionCallExp takes a name, a list of types, and a
        // list of params (not used here).
        code ~= new VarDeclaration(param.typeof, params[$ - 1],
            new DotExp(symbol("s"),
                new TemplateFunctionCallExp(symbol("pop"), param.typeof)));
    }

    // Get the function return (if any)
    if(is(func.returnType == void) == false)
    {
        // Get the return type in the local "ret" and push it, then
        // return 1
        code ~= new VarDeclaration(func.returnType, symbol("ret"),
            new CallExp(func, expand(params)));
        code ~= new CallExp(new DotExp(symbol("s"), symbol("push")), symbol("ret"));
        code ~= new ReturnExp(1);
    }
    else
    {
        // Just call the function and return 0
        code ~= new CallExp(func, expand(params));
        code ~= new ReturnExp(0);
    }

    // Set the wrapped function's code to the code array
    wrappedFunc.code = code;

    // Put the function in the owner namespace's symbol table
    owner[wrappedName] = wrappedFunc;
}

...

int fork(int x, float y)
{
    writefln("fork: x = ", x, " y = ", y);
    return 65;
}

WrapFunction!(fork);

That call would produce the code:

int fork_wrapped(MDState s)
{
    int param0 = s.pop!(int)();
    float param1 = s.pop!(float)();

    int ret = fork(param0, param1);
    s.push(ret);

    return 1;
}

;)
Nov 27 2006
parent reply Brad Anderson <brad dsource.org> writes:
Jarrett Billingsley wrote:
 A little off the topic, but not really, as it spins off of the idea of a 
 compiler as an interpreter.  I always thought it would be an interesting 
 exercise to make a language where metaprogramming is not only possible, but 
 nearly as full-featured as the actual code.  That is, templates (or whatever 
 they'd be called, because they'd be far more advanced) would be a script for 
 the compiler, which could even be compiled to bytecode.  The input would be 
 language constructs -- symbols, statements, expressions -- and the output 
 would be code which could then be compiled.  It'd basically be a scriptable 
 compiler.
Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and ready to go, while the imperative languages try to inch closer over the decades.

So OOP comes along.  Lisp adds it (in a superior way, imo)
So AOP is (or will be) hot.  Lisp adds it.
Metaprogramming?  The MetaObject Protocol appears, to complement the already amazing macro facility.

I'm not knocking the imperative languages, as a lot of people know them and use them successfully.  I am more amazed at how such a valuable toolset is consistently under-used and its functionality is rewritten from scratch.

As this thread talks about metaprogramming and syntax, the elegance of Lisp's code being the same as its data is relevant.  To be the ultimate expression of metaprogramming in this way, you cannot do it in a new language.  You'd end up merely with another flavor/dialect of Lisp.

Greenspun's 10th Rule of Programming:  Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

BA

P.S.  Please no ignorant replies about Lisp is interpreted or Lisp is slower than the imperative languages.
Nov 28 2006
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Brad Anderson wrote:
 Jarrett Billingsley wrote:
 
A little off the topic, but not really, as it spins off of the idea of a 
compiler as an interpreter.  I always thought it would be an interesting 
exercise to make a language where metaprogramming is not only possible, but 
nearly as full-featured as the actual code.  That is, templates (or whatever 
they'd be called, because they'd be far more advanced) would be a script for 
the compiler, which could even be compiled to bytecode.  The input would be 
language constructs -- symbols, statements, expressions -- and the output 
would be code which could then be compiled.  It'd basically be a scriptable 
compiler.
You've just described Lisp. And very precisely, at that. And today's Common Lisp implementations blend compiler and interpreter so thoroughly that you don't know which is going on when.
 Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and ready
 to go, while the imperative languages try to inch closer over the decades.
Somehow I've always felt that it's mostly not about other languages copy catting features. In Lisp's case it's more like, there's a Right Way to do things, and Lisp has found many of them right from the start.
 So OOP comes along.  Lisp adds it (in a superior way, imo)
 So AOP is (or will be) hot.  Lisp adds it.
 Metaprogramming?  The MetaObject Protocol appears, to complement the already
 amazing macro facility.
The most amazing thing about Lisp is how simply trivial all the hard stuff is. So, someone invents OOP, no problem, let's write a complete OOP framework, in just 200 lines of code. AOP anyone, hey, that'll be ready by tomorrow morning. And Metaprogramming? Ha, even the trivial (3kloc, in D) Lisp-in-D implementation (dLISP) on dsource has defmacro. (Which actually works!)
 I'm not knocking the imperative languages, as a lot of people know them and
 use them successfully.  I am more amazed at how such a valuable toolset is
 consistently under-used and its functionality is rewritten from scratch.
There's a single reason for it. Lisp is /scary/. It's scary in itself, but even worse, folks get scared that it undresses them. ("Start mucking with Lisp, and they might see that you're not as smart as you've led everyone to believe.") And the parentheses are simply a convenient excuse for abstinence.
 As this thread talks about metaprogramming and syntax, the elegance of Lisp's
 code being the same as its data is relevant.  To be the ultimate expression of
 metaprogramming in this way, you cannot do it in a new language.  You'd end up
 merely with another flavor/dialect of Lisp.
Well, Lisp is the attractor towards which the D meta language is moving, whether we want it, or not. We've even moved quite fast lately. And, we're actually much nearer to Lisp now than most folks here even suspect!
 Greenspun's 10th Rule of Programming:  Any sufficiently complicated C or
 Fortran program contains an ad hoc, informally-specified, bug-ridden, slow
 implementation of half of Common Lisp.
It's not a coincidence that I installed Allegro Common Lisp (from Franz Inc, a demo version of an excellent commercial implementation) on my Fedora last week. And dLISP for comparison.
Nov 28 2006
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Georg Wrede wrote:
 Brad Anderson wrote:
 Greenspun's 10th Rule of Programming:  Any sufficiently complicated C or
 Fortran program contains an ad hoc, informally-specified, bug-ridden, 
 slow
 implementation of half of Common Lisp.
It's not a coincidence that I installed Allegro Common Lisp (from Franz Inc, a demo version of an excellent commercial implementation) on my Fedora last week. And dLISP for comparison.
Sorry to ask it here, but how do you evaluate dLISP? Is it complete enough? I'm learning lisp atm, it sounds attractive to embed a lisp interpreter in D programs and use them together. This should be very easy with dLISP I think.
Nov 29 2006
parent Georg Wrede <georg.wrede nospam.org> writes:
Lutger wrote:
 Georg Wrede wrote:
 Brad Anderson wrote:

 Greenspun's 10th Rule of Programming:  Any sufficiently complicated C or
 Fortran program contains an ad hoc, informally-specified, bug-ridden, 
 slow implementation of half of Common Lisp.
It's not a coincidence that I installed Allegro Common Lisp (from Franz Inc, a demo version of an excellent commercial implementation) on my Fedora last week. And dLISP for comparison.
Sorry to ask it here, but how do you evaluate dLISP? Is it complete enough?
Well, it's a toy, almost like a proof of concept. It's not robust at all, so a lot of errors, especially in the input, simply crash it. And no documentation to brag with. And the version on dsource is broken, it doesn't even compile. I've fixed mine, with hints from the discussion forum (thread heading "Ping") and some additional fixes. Anyway, I wouldn't recommend it for learning, since I believe that one should always learn with good tools. Get Allegro instead. It's free for private use! There exists a complete package for anybody who really wants to learn Common Lisp, called Lispbox. (http://www.gigamonkeys.com/lispbox/) It's an all-in-one package that's ready to use right "off the box". While the download is big (80Megs), it runs fine on my 800MHz 256MB Linux laptop. There are downloads for Linux, OS-X, and Windows.
 I'm learning lisp atm, it sounds attractive to embed a lisp interpreter 
 in D programs and use them together. This should be very easy with dLISP 
 I think.
dLISP is a good choice if one wants to embed Lisp in one's own application. It's got the basics, and whatever you need more you can write in D or Lisp. (I said earlier that it's not robust. But the code is clear, logical, and it looks final. The lack of robustness has more to do with not having error checking than bugs, probably because this may have been a proof of concept type effort.) It's little and lightweight, and the source code is small enough to let you study and comprehend it "in full", which always leads to a better and more robust end result for your embedding project. dLISP is easily capable of handling what you'd need, say, as the macro language in your own text editor, game logic, smarter user interfaces to your existing programs, and such.
Nov 29 2006
prev sibling next sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Tue, 28 Nov 2006 09:31:23 -0500, Brad Anderson <brad dsource.org>
wrote:

P.S.  Please no ignorant replies about Lisp is interpreted or Lisp is slower
than the imperative languages.
No - I already did that a few years ago on comp.lang.python. I'm less ignorant these days.

I have tried to learn Scheme on many occasions, though, and only ever got so far. Sure, it has that key mechanism there, but learning to use the language is a bit like learning C++ by first learning how to write asm blocks, and finding no teacher who is ever willing to teach you the high level tools you need for real everyday work. Fine, you can do anything, but why do you always have to reinvent all those wheels?

The thing is that in the real world, most programmers need a standard, familiar dialect which has all the everyday high level tools available from the start. If OOP is an immediately useful concept, programmers should be using OOP from day one, not learning how to reinvent OOP from the basic building blocks. So, has anyone created widely-used standard libraries for high level programming?

On that XLR site referenced by Leandro in the "Thesis on metaprogramming in D" thread, a reference is made to the problem of dialects in what they call concept-oriented programming. They seem to be getting at some other dialect-related issue, but one dialect issue is simply that if everyone has the ability to invent their own type of object orientation, their own type of generic, etc etc then you don't really have a single language. XLR takes the view that a standard implementation is provided for each concept, and you should only roll your own concept if you need it, but has that standardisation been done for Scheme or other Lisp-alikes?

Of course in a sense you have a new dialect every time you switch libraries in any language. The dialect of C++ where you use GTK or QT for GUI stuff is very different from the dialect where you use Win32, for instance. But there is at least common ground - terms like 'class' and 'object' have exactly the same meaning.

Whenever I go looking for this standard set of concepts for Scheme or whatever, I can't find it. There's a very low level core library, in the sense of only providing very basic tools, and that's it. Of course there are plenty of libraries out there, but...

1. They tend to be higher level building blocks rather than useable concepts. You get things like PGG for partial evaluation and Essence for LR parsing, which are great if you want to create your own extension to Scheme with higher level concepts, but the leap to actually creating usable, standardised higher level programming tools using these things does not seem to have been made.

2. They don't seem to be widely used standards. They seem to be mainly academic experiments. For instance, you can find academic papers about them describing the theory behind them and conclusions resulting from implementing them and testing them out with contrived examples, but no user guides or tutorials.

So sure, you can in theory make Lisp work more-or-less like Pascal or C, but with the option to work differently when needed, but that's a lot of work to do from the basic Lisp-alike building blocks. So where is that standard high level programming concepts library?

-- 
Remove 'wants' and 'nospam' from e-mail.
Nov 28 2006
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Steve Horne wrote:
 Brad Anderson <brad dsource.org> wrote:
 
 P.S.  Please no ignorant replies about Lisp is interpreted or Lisp
 is slower than the imperative languages.
No - I already did that a few years ago on comp.lang.python. I'm less ignorant these days. I have tried to learn Scheme on many occasions, though, and only ever got so far. Sure, it has that key mechanism there, but learning to use the language is a bit like learning C++ by first learning how to write asm blocks, and finding no teacher who is ever willing to teach you the high level tools you need for real everyday work. Fine, you can do anything, but why do you always have to reinvent all those wheels?
I took a university class in Scheme some 15 years ago. It was rewarding, but like you, I felt it was pretty hard. And it's true, recursion may not be the answer to everything under the sun.
 The thing is that in the real world, most programmers need a
 standard, familiar dialect which has all the everyday high level
 tools available from the start. If OOP is an immediately useful
 concept, programmers should be using OOP from day one, not learning
 how to reinvent OOP from the basic building blocks. So, has anyone
 created widely-used standard librarys for high level programming?
The very point of Common Lisp is *precisely* what you wrote here! CL tries to be a practical language for people doing real-world programming. (Sound familiar?) And they explicitly distance themselves from Scheme, which they consider more Purist, Academic, and for theoreticians. (The latter of course disagree, as usual.)
Nov 29 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Thu, 30 Nov 2006 00:44:17 +0200, Georg Wrede
<georg.wrede nospam.org> wrote:

 The thing is that in the real world, most programmers need a
 standard, familiar dialect which has all the everyday high level
 tools available from the start. If OOP is an immediately useful
 concept, programmers should be using OOP from day one, not learning
 how to reinvent OOP from the basic building blocks. So, has anyone
 created widely-used standard librarys for high level programming?
The very point of Common Lisp is *precicely* what you wrote here! CL tries to be a practical language for people doing real-world programming. (Sound familiar?) And they explcitly take distance to Scheme, which they consider more Purist, Academic, and for theoreticians. (The latter of course disagree, as usual.)
And this too may be a big clue to why me and Brad aren't making sense to each other. I somehow had the impression that Common Lisp was created for standardisation reasons but at least as academic as any other dialect. Possibly I read some stuff by your theoreticians who disagree at some point. Well, it's good to finally understand the nature of my ignorance - now I can do something about it! -- Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
prev sibling next sibling parent reply renoX <renosky free.fr> writes:
== Excerpt from the article by « Brad Anderson (brad dsource.org) »
 Poor Lisp.  [cut]  I am more amazed at how such a valuable toolset is
 consistently under-used and its functionality is rewritten from scratch.
Well, I'm not surprised myself: Lisp syntax is not user-friendly; people want to use a language with an easy-to-read syntax, not a language with an abstract-syntax-tree syntax. So in all likelihood Lisp will stay mostly unused in the future.

Back on the subject of metaprogramming, one thing which makes me cautious about metaprogramming is debugging: when there is a problem, debugging generated code is usually a nightmare.

renoX
Nov 29 2006
next sibling parent Sean Kelly <sean f4.ca> writes:
renoX wrote:
 == Extrait de l'article de « Brad Anderson (brad dsource.org) »
 Poor Lisp.  [cut]  I am more amazed at how such a valuable toolset is
 consistently under-used and its functionality is rewritten from scratch.
Well, I'm not surprised myself: Lisp syntax is not user-friendly, people wants to use language with easy-to-read syntax, not a language with an abstract-syntax-tree syntax. So in all likelihood Lisp will stay mostly unused in the future. Back on the subject of metaprogamming, one thing which makes me cautious about metaprogramming is debugging: when there is a problem debugging generated code is a nightmare usually..
It's gotten pretty good in C++ and will get better with concept checking. And D is already better at reporting compile-time errors, since we have the use of static if, static assert, and pragma(msg). Once we get an instantiation trace when static asserts fail I think we'll be in good shape there. Run-time template debugging is obviously a bit behind, but that's only a matter of time. Sean
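A small hedged illustration (not from the post above) of those diagnostics in present-day D; Vector is just an example type:

// static assert rejects bad instantiations with a readable message,
// and pragma(msg) reports each instantiation while compiling.
struct Vector(T)
{
    static assert(__traits(isArithmetic, T),
        "Vector!(" ~ T.stringof ~ "): element type must be arithmetic");
    pragma(msg, "compiling Vector!(" ~ T.stringof ~ ")");

    T x, y, z;
}

alias Vec3 = Vector!float;      // fine: prints the pragma(msg) note
// alias Bad = Vector!string;   // rejected with the static assert message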
Nov 29 2006
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
renoX wrote:
 Back on the subject of metaprogamming, one thing which makes me cautious about
 metaprogramming is debugging: when there is a problem debugging generated code is
 a nightmare usually..
It's not metaprogramming itself, it's the bad implementations that make it hard.

If we had Perfect Metaprogramming(TM) in D, then I could do the following:

  I'm coding some stuff and I notice that what I'd really want
  is a new keyword, "unless", that would make it so much easier
  for me to write this application clearly. I decide to create it.

  I want to use it like this

    unless (fullMoonTonight) { doRegularStuff() }

  So, to create such a thing in D, I'd write something like

    define("unless", "(", BooleanExpression, ")", BlockStatement)
    {
      if(!BooleanExpression) BlockStatement;
    }

Now, if I'd made errors in writing the meta code, then the compiler would error me, of course. No biggie. And since the D compiler would understand what's going on (as opposed to the C preprocessor or compiler), the error messages would be what we're used to in D.

Later, when I actually use the "unless" construct, again the error messages would be normal because D now understands what "unless" is all about.

---

People say GC has to be slow, but that is mostly because it used to only exist in languages that were slow to begin with (and badly implemented). The same thing with metaprogramming. C++ has given it such a bad rep, it'll take years before folks learn away from their fears.
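For what it's worth, a hedged sketch of how close plain present-day D gets without any metalanguage: not the define syntax above, just an ordinary function with a lazy parameter (fullMoonTonight and doRegularStuff are the hypothetical names from the example):

// Not a new keyword, only a function: the body argument is evaluated
// lazily, so it only runs when the condition is false.
void unless(bool cond, lazy void then)
{
    if (!cond)
        then();
}

bool fullMoonTonight() { return false; }
void doRegularStuff() { /* the regular stuff */ }

void main()
{
    unless(fullMoonTonight(), doRegularStuff());
}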
Nov 29 2006
next sibling parent reply "Andrey Khropov" <andkhropov_nosp m_mtu-net.ru> writes:
Georg Wrede wrote:

 It's not metaprogramming itself, it's the bad implementations that make it
 hard.
 
 If we had Perfect Metaprogramming(TM) in D, then I could do the following:
 
   I'm coding some stuff and I notice that what I'd really want
   is a new keyword, "unless", that would make it so much easier
   for me to write this application clearly. I decide to create it.
 
   I want to use it like this
 
     unless (fullMoonTonight) { doRegularStuff() }
 
   So, to create such a thing in D, I'd write something like
 
     define("unless", "(", BooleanExpression, ")", BlockStatement)
     {
       if(!BooleanExpression) BlockSTatement;
     }
 
 Now, if I'd made errors in writing the meta code, then the compiler would
 error me, of course. No biggie. And since the D compiler would understand
 what's going on (as opposed to the C preprosessor or compiler), the error
 messages would be what we're used to in D.
 
 Later, when I actually use the "unless" construct, again the error messages
 would be normal because D now understands what "unless" is all about.
Well, I'm sorry I'm saying that again but it's almost exactly the way Nemerle does it. And it actually has that unless macro :-)
Here's the actual code from the compiler svn:

-------------------------------------------------------------------
macro  unless (cond, body)
syntax ("unless", "(", cond, ")", body) 
{
    <[ match ($cond) { | false => $body : void | _ => () } ]>
}
-------------------------------------------------------------------

but it's defined using pattern matching.

anyway it can be defined your way:

-------------------------------------------------------------------
macro  unless (cond, body)
syntax ("unless", "(", cond, ")", body) 
{
    <[ when( !($cond) ) $body ]>
}
-------------------------------------------------------------------

('when' is used in Nemerle for 'if without else')
 ---
 
 People say GC has to be slow, but that is mostly because it used to only
 exist in languages that were slow to begin with (and badly implemented).
Sad to say, but GC in D is still conservative and hence slow :-( -- AKhropov
Nov 29 2006
parent Georg Wrede <georg.wrede nospam.org> writes:
Andrey Khropov wrote:
 Georg Wrede wrote:
 
 
It's not metaprogramming itself, it's the bad implementations that make it
hard.

If we had Perfect Metaprogramming(TM) in D, then I could do the following:

  I'm coding some stuff and I notice that what I'd really want
  is a new keyword, "unless", that would make it so much easier
  for me to write this application clearly. I decide to create it.

  I want to use it like this

    unless (fullMoonTonight) { doRegularStuff() }

  So, to create such a thing in D, I'd write something like

    define("unless", "(", BooleanExpression, ")", BlockStatement)
    {
      if(!BooleanExpression) BlockSTatement;
    }

Now, if I'd made errors in writing the meta code, then the compiler would
error me, of course. No biggie. And since the D compiler would understand
what's going on (as opposed to the C preprosessor or compiler), the error
messages would be what we're used to in D.

Later, when I actually use the "unless" construct, again the error messages
would be normal because D now understands what "unless" is all about.
Well, I'm sorry I'm saying that again but it's almost exactly the way Nemerle does it.
Hmm. I'm not sorry at all! :-) There's confluence in the air....
 And it actually has that unless macro :-) 
 Here's the actual code from the compiler svn:
 
 -------------------------------------------------------------------
 macro  unless (cond, body)
 syntax ("unless", "(", cond, ")", body) 
 {
     <[ match ($cond) { | false => $body : void | _ => () } ]>
 }
 -------------------------------------------------------------------
 
 but it's defined using pattern matching.
 
 anyway it can be defined your way:
 
 -------------------------------------------------------------------
 macro  unless (cond, body)
 syntax ("unless", "(", cond, ")", body) 
 {
     <[ when( !($cond) ) $body ]>
 }
 -------------------------------------------------------------------
 
 ('when' is used in Nemerle for 'if without else') 
 
 
---

People say GC has to be slow, but that is mostly because it used to only
exist in languages that were slow to begin with (and badly implemented).
Sad to say, but GC in D is still conservative and hence slow :-(
Nov 29 2006
prev sibling next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:456E1768.7090504 nospam.org...

 If we had Perfect Metaprogramming(TM) in D, then I could do the following:

   I'm coding some stuff and I notice that what I'd really want
   is a new keyword, "unless", that would make it so much easier
   for me to write this application clearly. I decide to create it.

   I want to use it like this

     unless (fullMoonTonight) { doRegularStuff() }

   So, to create such a thing in D, I'd write something like

     define("unless", "(", BooleanExpression, ")", BlockStatement)
     {
       if(!BooleanExpression) BlockSTatement;
     }

 Now, if I'd made errors in writing the meta code, then the compiler would 
 error me, of course. No biggie. And since the D compiler would understand 
 what's going on (as opposed to the C preprosessor or compiler), the error 
 messages would be what we're used to in D.

 Later, when I actually use the "unless" construct, again the error 
 messages would be normal because D now understands what "unless" is all 
 about.
Unfortunately, that then breaks the separation of the lexical/syntactic/semantic passes. The define statements have to be syntaxed and semantic'ed before any other code can be even syntaxed. One way around this would be to have some kind of "d metamodule" file which would define all the metacode. Those metamodules would be compiled first, allowing the compiler to compile the normal D code. Scary.
Nov 29 2006
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Jarrett Billingsley wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:456E1768.7090504 nospam.org...
 
 
If we had Perfect Metaprogramming(TM) in D, then I could do the following:

  I'm coding some stuff and I notice that what I'd really want
  is a new keyword, "unless", that would make it so much easier
  for me to write this application clearly. I decide to create it.

  I want to use it like this

    unless (fullMoonTonight) { doRegularStuff() }

  So, to create such a thing in D, I'd write something like

    define("unless", "(", BooleanExpression, ")", BlockStatement)
    {
      if(!BooleanExpression) BlockSTatement;
    }

Now, if I'd made errors in writing the meta code, then the compiler would 
error me, of course. No biggie. And since the D compiler would understand 
what's going on (as opposed to the C preprosessor or compiler), the error 
messages would be what we're used to in D.

Later, when I actually use the "unless" construct, again the error 
messages would be normal because D now understands what "unless" is all 
about.
Unfortunately, that then breaks the separation of the lexical/syntactic/semantic passes. The define statements have to be syntaxed and semantic'ed before any other code can be even syntaxed.
Not necessarily. If we wanted the metalanguage to let us do a hairy definition that, say, lets us use ][ as an operator, then yes, it breaks the separation. But if we restrict ourselves to the kind of things done in the "unless" example, then there is no risk of this breakage. You can think of it like this: had Walter wanted to implement "unless" last week in D, we'd have no problem today with the separation, right? So, if our metalanguage lets us do only things "Walter could have done without touching the parser", then this separation issue doesn't exist.
 One way around this would be to have some kind of "d metamodule" file which 
 would define all the metacode.  Those metamodules would be compiled first, 
 allowing the compiler to compile the normal D code.  Scary. 
It doesn't have to be scary. All it means is that some things (like "unless" above) are valid somewhere in the code, and not valid somewhere else. The word for that is scope. We do it currently with functions, and nobody complains. So you could look at the meta definition like just an ordinary function definition. I honestly think that if we'd had this in D all along, folks would hardly notice.
Nov 30 2006
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:456EBAFF.10000 nospam.org...

 Not necessarily.

 If we wanted the metalanguage to let us do a hairy definition that, say, 
 lets us use ][ as an operator, then yes, it breaks the separation.

 But if we restrict ourselves to the kind of things done in the "unless" 
 example, then there is no risk of this breakage.
How so? The 'define' statement would be a statement like anything else, which would not be recognized until the syntactic pass. Since the 'unless' construct makes a construct that doesn't exist in the language, how would D know how to parse 'unless' statements unless it parses and semantics the 'define' statement first?

void foo(bool cond)
{
    unless(cond)
    {
        writefln("not cond");
    }
}

define("unless", "(", BooleanExpression, ")", BlockStatement)
{
    if(!BooleanExpression) BlockStatement;
}

When D does the semantic pass on this, it semantics 'foo' first, and so would fail to parse the 'unless' statement correctly.

Although, now that I think of it, D's semantic pass is split up, and the first pass only semantics the function signature and not the body, so in this case, the 'define' would be semantic'ed before foo's body, and it'd work. So.. :)

Well there! I made your job easier :) Although I wonder if there'd be odd corner cases regarding things like templates.
 You can think of it like this: had Walter wanted to implement "unless" 
 last week in D, we'd have no problem today with the separation, right? So, 
 if our metalanguage lets us do only things "Walter could have done without 
 touching the parser", then this separation issue doesn't exist.

 One way around this would be to have some kind of "d metamodule" file 
 which would define all the metacode.  Those metamodules would be compiled 
 first, allowing the compiler to compile the normal D code.  Scary.
It doesn't have to be scary. All it means is that some things (like "unless" above) are valid somewhere in the code, and not valid somewhere else. The word for that is scope. We do it currently with functions, and nobody complains. So you could look at the meta definition like just an ordinary function definition. I honestly think that if we'd had this in D all along, folks would hardly notice.
Nov 30 2006
parent Georg Wrede <georg.wrede nospam.org> writes:
Jarrett Billingsley wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:456EBAFF.10000 nospam.org...
 
Not necessarily.

If we wanted the metalanguage to let us do a hairy definition that, say, 
lets us use ][ as an operator, then yes, it breaks the separation.

But if we restrict ourselves to the kind of things done in the "unless" 
example, then there is no risk of this breakage.
How so? The 'define' statement would be a statement like anything else, which would not be recognized until the syntactic pass. Since the 'unless' construct makes a construct that doesn't exist in the language, how would D know how to parse 'unless' statements unless it parses and semantics the 'define' statement first?

    void foo(bool cond)
    {
      unless(cond)
      {
        writefln("not cond");
      }
    }

    define("unless", "(", BooleanExpression, ")", BlockStatement)
    {
      if(!BooleanExpression) BlockStatement;
    }

When D does the semantic pass on this, it semantics 'foo' first, and so would fail to parse the 'unless' statement correctly. Although, now that I think of it, D's semantic pass is split up, and the first pass only semantics the function signature and not the body, so in this case, the 'define' would be semantic'ed before foo's body, and it'd work. So.. :)
Whew!
 Well there!  I made your job easier :)  Although I wonder if there'd be odd 
 corner cases regarding things like templates.
Today you can define a function and instantiate it within a template body, and you can instantiate that template within another function body. So far, I see no fundamental reason why meta-definitions should be any different.
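For reference, a minimal present-day D sketch of that kind of nesting, using a template mixin (the names Square and useSquare are made up for illustration):

    mixin template Square()
    {
      // a function defined inside a template body
      int square(int x) { return x * x; }
    }

    int useSquare(int n)
    {
      mixin Square!();   // the template instantiated inside a function body
      return square(n);  // calls the function the template defined
    }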
You can think of it like this: had Walter wanted to implement "unless" 
last week in D, we'd have no problem today with the separation, right? So, 
if our metalanguage lets us do only things "Walter could have done without 
touching the parser", then this separation issue doesn't exist.


One way around this would be to have some kind of "d metamodule" file 
which would define all the metacode.  Those metamodules would be compiled 
first, allowing the compiler to compile the normal D code.  Scary.
It doesn't have to be scary. All it means is that some things (like "unless" above) are valid somewhere in the code, and not valid somewhere else. The word for that is scope. We do it currently with functions, and nobody complains. So you could look at the meta definition like just an ordinary function definition. I honestly think that if we'd had this in D all along, folks would hardly notice.
Nov 30 2006
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Georg Wrede wrote:
 renoX wrote:
 Back on the subject of metaprogamming, one thing which makes me 
 cautious about
 metaprogramming is debugging: when there is a problem debugging 
 generated code is
 a nightmare usually..
It's not metaprogramming itself, it's the bad implementations that make it hard.

If we had Perfect Metaprogramming(TM) in D, then I could do the following:

I'm coding some stuff and I notice that what I'd really want is a new keyword, "unless", that would make it so much easier for me to write this application clearly. I decide to create it. I want to use it like this

    unless (fullMoonTonight)
    {
      doRegularStuff()
    }

So, to create such a thing in D, I'd write something like

    define("unless", "(", BooleanExpression, ")", BlockStatement)
    {
      if(!BooleanExpression) BlockStatement;
    }
What's wrong with

    void unless(bool cond, lazy void delegate() blockstatement)
    {
      if (!cond) blockstatement();
    }

? Sure, you have to write

    unless(fullMoonTonight, { doRegularStuff(); });

but if there was a way to have trailing delegates, you could have the exact syntax you wanted.

(The interaction between lazy parameters and template metaprogramming is a really interesting unexplored area. What can be achieved with tuples containing lazy code blocks???).
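A minimal, self-contained variant of the above, assuming a current D compiler: the plain lazy storage class is enough, without spelling out a delegate type, and the call site needs no braces (the fullMoonTonight flag is just for illustration).

    import std.stdio;

    // The block argument is evaluated only when the condition is false.
    void unless(bool cond, lazy void block)
    {
      if (!cond)
        block();
    }

    void main()
    {
      bool fullMoonTonight = false;
      unless(fullMoonTonight, writefln("doing the regular stuff"));
    }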
 ---
 
 People say GC has to be slow, but that is mostly because it used to only 
 exist in languages that were slow to begin with (and badly implemented).
 
 The same thing with metaprogramming. C++ has given it such a bad rep, 
 it'll take years before folks learn away from their fears.
Absolutely.
Nov 30 2006
parent Georg Wrede <georg.wrede nospam.org> writes:
Don Clugston wrote:
 Georg Wrede wrote:
 If we had Perfect Metaprogramming(TM) in D, then I could do the 
 following:
What's wrong with

    void unless(bool cond, lazy void delegate() blockstatement)
    {
      if (!cond) blockstatement();
    }
What's wrong is that it's beside the point. :-)

Your solution is a good one, it works today, and looks uncomplicated. Of course, like you said, one has to get accustomed to a slightly unusual syntax. (Well, at least unusual for a simple conditional.) Which would of course be more hassle for the user than if(!fullMoonTonight){}, so the point gets lost.

But even that's beside the point. The point had already got derailed because I jumped to specifics too soon. ;-)

---

But it did bring up an interesting remark by Andrey about Nemerle. Got me thinking: some implementations of Lisp are compile-only (that is, they never interpret), and there probably are other compiled languages too (don't know about Nemerle, gotta check it out some day) that support the kind of stuff that I did in the example.

So there should exist no a priori reason why it couldn't be done in D? (Of course the mere proposition scares everybody gutless here, but I'm talking _technical_ reason. Compare also to my reply to Jarrett, right next to this post in this thread.)
Nov 30 2006
prev sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Brad Anderson" <brad dsource.org> wrote in message 
news:ekhh7s$2e7$1 digitaldaemon.com...

 Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and 
 ready
 to go, while the imperative languages try to inch closer over the decades.
In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;)

I really can't get past the parentheses.  I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren.  I'm sure bracket highlighting in a code editor can help, but why should that be necessary?  I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax.
Nov 29 2006
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Jarrett Billingsley wrote:
 "Brad Anderson" <brad dsource.org> wrote in message 
 news:ekhh7s$2e7$1 digitaldaemon.com...
 
 Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and 
 ready
 to go, while the imperative languages try to inch closer over the decades.
In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;) I really can't get past the parentheses. I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren. I'm sure bracket highlighting in a code editor can help, but why should that be necessary? I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax.
I completely agree. Lisp has a terrible "Hello, World" problem. The first Lisp program I saw had a mass of parentheses, and introduced the functions 'car' and 'cdr' (The year is 1952, apparently). For a newbie, Lisp debugging involves counting parentheses.

The failure of Lisp to gain traction is a great demonstration of the importance of syntactic sugar. Poor old Lisp.

Forth was another great language for metaprogramming. Even the language primitives were written in Forth, except for a few dozen lines of asm. Completely unmaintainable, though -- asm is much easier.
Nov 29 2006
parent reply Brad Anderson <brad dsource.org> writes:
Don Clugston wrote:
 Jarrett Billingsley wrote:
 "Brad Anderson" <brad dsource.org> wrote in message
 news:ekhh7s$2e7$1 digitaldaemon.com...

 Poor Lisp.  It just sits there, 50 years old, debugged, optimized,
 and ready
 to go, while the imperative languages try to inch closer over the
 decades.
In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;) I really can't get past the parentheses. I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren. I'm sure bracket highlighting in a code editor can help, but why should that be necessary? I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax.
I completely agree. Lisp has a terrible "Hello, World" problem.
I understand I'm reaching fanboi status here, and I'll stop soon. But: doesn't seem too awful.
 The failure of Lisp to gain traction is a great demonstration of the
 importance of syntactic sugar. Poor old Lisp.
I don't think this is the primary reason. As mentioned before, syntax is a part of it, but so is the total power given to the programmer. This power leads to a lack of standard or cohesive libs, b/c it's so easy to make it exactly the way you want it. I imagine that if some of the D power users wrapped themselves in Lisp for a while, they'd be able to do for themselves what they beg Walter to do for them in D. BA
Nov 29 2006
parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad dsource.org>
wrote:

I don't think this is the primary reason.  As mentioned before, syntax is a
part of it, but so is the total power given to the programmer.  This power
leads to a lack of standard or cohesive libs, b/c it's so easy to make it
exactly the way you want it.  I imagine that if some of the D power users
wrapped themselves in Lisp for a while, they'd be able to do for themselves
what they beg Walter to do for them in D.
Not really. There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.

That means you have to deal with parsing to ASTs, manipulating ASTs, and back-end code generation. In short, you have to design a language and write a compiler. And you have to do it without the benefit of those high level tools, since you haven't written them yet - the bootstrap thing.

You can get Scheme libraries for parsing and so on, so you're not quite working from scratch, but you are working from a level that's not substantially different than using Yacc and C. Except, of course, that you've already got some of those higher level tools in C, and if you need something higher level than that you could always use C++ or some other language that has parsing tools available for it.

I shouldn't need to point out that designing a language and writing a compiler from scratch isn't everyone's favorite pastime. Having a standard one as part of the library, maybe with support for extending the dialect it provides - that sounds promising.

And while Lisp implemented in D (as in dLisp) is a good thing if you have the need, the way to make me really sit up and take notice is to show me D implemented in Scheme.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
parent reply Brad Anderson <brad dsource.org> writes:
Steve Horne wrote:
 On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad dsource.org>
 wrote:
 
 I don't think this is the primary reason.  As mentioned before, syntax is a
 part of it, but so is the total power given to the programmer.  This power
 leads to a lack of standard or cohesive libs, b/c it's so easy to make it
 exactly the way you want it.  I imagine that if some of the D power users
 wrapped themselves in Lisp for a while, they'd be able to do for themselves
 what they beg Walter to do for them in D.
Not really. There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
I'm not following.  Do you have definitions or examples of these?  I did find this...

http://lambda-the-ultimate.org/node/1605

Not trying to be thick,
BA
Nov 29 2006
next sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad dsource.org>
wrote:

Steve Horne wrote:
 There are things you just can't do with Scheme macros. Associativity
 and precedence, for instance. This means that if you want to do these
 things, you have to go the Von Neumann route - treat code as data and
 manipulate it at compile time using Scheme functions.
I'm not following. Do you have definitions or examples of these?
No, but it's implicit in the subset of the Scheme language that I understand.

The term to look up is 'quoting'. A quoted expression may look like code, but to Scheme it is just a list of tokens. You pass that list as a parameter to a function that can make sense of it, and you have a new language extension. And the translation should happen at compile time, though at this point we are running into the limits of my knowledge of Scheme.
http://lambda-the-ultimate.org/node/1605
I can't seem to access that ATM, but I'll give it another go later.

Going purely on the URL, though, lambdas (first class functions) aren't really the issue here. It's a powerful tool - one that's widely imitated these days - but it isn't a metaprogramming thing.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
parent reply Brad Anderson <brad dsource.org> writes:
Steve Horne wrote:
 On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad dsource.org>
 wrote:
 
 Steve Horne wrote:
 There are things you just can't do with Scheme macros. Associativity
 and precedence, for instance. This means that if you want to do these
 things, you have to go the Von Neumann route - treat code as data and
 manipulate it at compile time using Scheme functions.
I'm not following. Do you have definitions or examples of these?
No, but it's implicit in the subset of the Scheme language that I understand.
Okay, I have worked with Common Lisp, and not much with Scheme. Although for Scheme, I've done a bit while reading Structure and Interpretation of Computer Programs, an excellent book.
 
 The term to look up is 'quoting'. A quoted expression may look like
 code, but to Scheme it is just a list of tokens. You pass that list as
 a parameter to a function that can make sense of it, and you have a
 new language extension. And the translation should happen at compile
 time, though at this point we are running into the limits of my
 knowledge of Scheme.
 
 http://lambda-the-ultimate.org/node/1605
I can't seem to access that ATM, but I'll give it another go later. Going purely on the URL, though, lambdas (first class functions) aren't really the issue here. It's a powerful tool - one thats widely imitated these days - but it isn't a metaprogramming thing.
Understood.  lambda-the-ultimate.org is a programming language discussion site, iirc.  Here's the google cache:

http://216.239.51.104/search?q=cache:o2sGhHoc57cJ:lambda-the-ultimate.org/node/1605+lisp+associativity&hl=en&gl=us&ct=clnk&cd=6

BA
Nov 29 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
I have the definite feeling that I'm confusing myself at the moment
:-(


On Wed, 29 Nov 2006 15:57:03 -0500, Brad Anderson <brad dsource.org>
wrote:

Understood.  lambda-the-ultimate.org is a programming language discussion
site, iirc.  Here's the google cache:

http://216.239.51.104/search?q=cache:o2sGhHoc57cJ:lambda-the-ultimate.org/node/1605+lisp+associativity&hl=en&gl=us&ct=clnk&cd=6
OK. On a quick scan through that, there doesn't seem to be anything to say that Scheme macros can do associativity and precedence. That's fine by me, as I don't feel quite as stupid as I did a minute ago  ;-)

It's an interesting link. There's a lot of languages mentioned that I have only a very superficial knowledge of - e.g. I've played with Prolog a bit, but although I knew there's parsing stuff there, I never used it. The 'take a look at prolog' bit of your link makes it look interesting, though. Defining Haskell operators seemed easy, but there was something that worried me about it - can't remember what.

A major issue mentioned on that link is having different precedence and associativity in different bits of the code. In Scheme, using quoting, that's not a problem, of course - any more than if quoting meant text strings as opposed to lists of tokens. The area where a particular syntax applies is delimited. A related issue *may* have been one of my Haskell concerns, though even if it was there's a danger that I was reasoning from ignorance.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
prev sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad dsource.org>
wrote:

Steve Horne wrote:
 On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad dsource.org>
 wrote:
 
 I don't think this is the primary reason.  As mentioned before, syntax is a
 part of it, but so is the total power given to the programmer.  This power
 leads to a lack of standard or cohesive libs, b/c it's so easy to make it
 exactly the way you want it.  I imagine that if some of the D power users
 wrapped themselves in Lisp for a while, they'd be able to do for themselves
 what they beg Walter to do for them in D.
Not really. There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
I'm not following. Do you have definitions or examples of these? I did find this...
Sorry, I'm being stupid. On reflection, you're asking for examples of things you can't do with Scheme macros.

Well, it's hard to provide examples of things that can't be done beyond listing them. Disproof by example is much stronger than proof by example. If your link gives examples of precedence and associativity using macros, well, that just makes me twice stupid.

The claim about associativity and precedence, though, just fell out of my reading of the Scheme manual. At the time, I could see no way to do it. I'm aware that it is possible to build them in by creating an unambiguous set of BNF rules for a grammar (as opposed to the more normal approach of using disambiguating rules) but I couldn't see a way to do either. I had the distinct impression that the matching is always from left to right.

Based on that, you can write (1 + 2 * 3) if you want, but the result will be 9, not 7.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
parent reply Brad Anderson <brad dsource.org> writes:
Steve Horne wrote:
 On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad dsource.org>
 wrote:
 
 Steve Horne wrote:
 On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad dsource.org>
 wrote:

 I don't think this is the primary reason.  As mentioned before, syntax is a
 part of it, but so is the total power given to the programmer.  This power
 leads to a lack of standard or cohesive libs, b/c it's so easy to make it
 exactly the way you want it.  I imagine that if some of the D power users
 wrapped themselves in Lisp for a while, they'd be able to do for themselves
 what they beg Walter to do for them in D.
Not really. There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
I'm not following. Do you have definitions or examples of these? I did find this...
Sorry, I'm being stupid. On reflection, you're asking for examples of things you can't do with Scheme macros. Well, it's hard to provide examples of things that can't be done beyond listing them. Disproof by example is much stronger than proof by example. If your link gives examples of precedence and associativity using macros, well, that just makes me twice stupid. The claim about associativity and precedence, though, just fell out of my reading of the Scheme manual. At the time, I could see no way to do it. I'm aware that it is possible to build them in by creating an unambiguous set of BNF rules for a grammar (as opposed to the more normal approach of using disambiguating rules) but I couldn't see a way to do either. I had the distinct impression that the matching is always from left to right. Based on that, you can write (1 + 2 * 3) if you want, but the result will be 9, not 7.
in a Lisp-like language, using prefix notation, this is:

(+ 1 (* 2 3)) => 7

or

(* (+ 1 2) 3) => 9
 On a quick scan through that, there doesn't seem to be anything to say
 that Scheme macros can do associativity and precedence. That's fine by
 me, as I don't feel quite as stupid as I did a minute ago  ;-)
There's no need for precedence, because it's always explicit.

This thing is a dead horse.  I suspect we are confusing each other, as you say.  You should look at Common Lisp a bit, esp. the link I posted to Peter Seibel's book.  Even a brief read may help you understand a bit of what I'm saying about metaprogramming.

BA
Nov 29 2006
next sibling parent reply Brad Anderson <brad dsource.org> writes:
Brad Anderson wrote:
 Steve Horne wrote:
 On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad dsource.org>
 wrote:

 Steve Horne wrote:
 On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad dsource.org>
 wrote:

 I don't think this is the primary reason.  As mentioned before, syntax is a
 part of it, but so is the total power given to the programmer.  This power
 leads to a lack of standard or cohesive libs, b/c it's so easy to make it
 exactly the way you want it.  I imagine that if some of the D power users
 wrapped themselves in Lisp for a while, they'd be able to do for themselves
 what they beg Walter to do for them in D.
Not really. There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
I'm not following. Do you have definitions or examples of these? I did find this...
Sorry, I'm being stupid. On reflection, you're asking for examples of things you can't do with Scheme macros. Well, it's hard to provide examples of things that can't be done beyond listing them. Disproof by example is much stronger than proof by example. If your link gives examples of precedence and associativity using macros, well, that just makes me twice stupid. The claim about associativity and precedence, though, just fell out of my reading of the Scheme manual. At the time, I could see no way to do it. I'm aware that it is possible to build them in by creating an unambiguous set of BNF rules for a grammar (as opposed to the more normal approach of using disambiguating rules) but I couldn't see a way to do either. I had the distinct impression that the matching is always from left to right. Based on that, you can write (1 + 2 * 3) if you want, but the result will be 9, not 7.
in a Lisp-like language, using prefix notation, this is: (+ 1 (* 2 3)) => 7 or (* (+ 1 2) 3) => 9
 On a quick scan through that, there doesn't seem to be anything to say
 that Scheme macros can do associativity and precedence. That's fine by
 me, as I don't feel quite as stupid as I did a minute ago  ;-)
There's no need for precedence, because it's always explicit. This thing is a dead horse. I suspect we are confusing each other, as you say. You should look at Common Lisp a bit, esp. the link I posted to Peter Seibel's book. Even a brief read may help you understand a bit of what I'm saying about metaprogramming. BA
Sorry, gotta kick the horse one more time. http://plaza.ufl.edu/lavigne/infix.lisp BA
Nov 29 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 29 Nov 2006 16:30:28 -0500, Brad Anderson <brad dsource.org>
wrote:

Sorry, gotta kick the horse one more time.

http://plaza.ufl.edu/lavigne/infix.lisp
Well...

Of course that macro is actually using quoted expressions and a 'compiler' in its implementation - using a macro to hide the 'compiler' is cheating a bit, really. And it's neither trivial nor, AFAICS, a standard library - just one person's how-to-do-it example.

I confess it's certainly shorter than I expected, but that's partly because it is just a precedence parser - it only handles precedence and associativity. But then that's probably no bad thing - concepts should be kept separate rather than being bundled, which is probably a major failing in how I was looking at this (the idea of a huge extensible dialect library, rather than a small concept library).

And given the point Georg made about Common Lisp vs. Scheme, which certainly suggests that some of my resistance is misplaced...

I don't think you've quite proven me wrong (yet), but clearly my position is a lot weaker than I thought and probably unsustainable. And since my only remaining defence is actually a point you appear to agree with (your link isn't a standard, which relates back to the coherence thing you mentioned) I have no choice but to withdraw what I said.

The worrying thing is that I think I'm still on probation from a certain Lisp-related-ignorance incident on comp.lang.python a few years back - please don't tell on me!

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
prev sibling parent reply Steve Horne <stephenwantshornenospam100 aol.com> writes:
OK - I think I get the source of this confusion, and surprise
surprise, it's my fault. So...


On Wed, 29 Nov 2006 16:28:29 -0500, Brad Anderson <brad dsource.org>
wrote:

 Based on that, you can write (1 + 2 * 3) if you want, but the result
 will be 9, not 7.
 
in a Lisp-like language, using prefix notation, this is: (+ 1 (* 2 3)) => 7
Yes, but if you say that Lisp is evil because you have to use prefix notation, you get people jumping up to say 'no you don't - just define your own syntax as a library'.

That's basically where my point starts, in a wider sense that starts outside this thread, which I obviously didn't make clear. Hence the confusion - my key point is about syntax, not semantics.

Getting back to what you said...

: I imagine that if some of the D power users
: wrapped themselves in Lisp for a while, they'd be able to do for themselves
: what they beg Walter to do for them in D.

In semantic terms that's true, but in syntax terms it's not - at least not without going to the extreme compiler building stuff I mentioned earlier. Lisp may provide the ability to set up whatever semantics you want, but if you have to access that through a syntax that you just can't get on with, that's a serious handicap. And the 'what they beg Walter to do for them' is probably as much about syntax as semantics.
This thing is a dead horse.  I suspect we are confusing each other, as you
say.  You should look at Common Lisp a bit, esp. the link I posted to Peter
Seibel's book.  Even a brief read may help you understand a bit of what I'm
saying about metaprogramming.
I think I already know what you were saying to start with, and agree with the Greenspun's 10th Rule bit. And your lack of coherence bit is similar to a point I already made somewhere about the lack of standard implementations of high level concepts in Scheme (or at least of ones I can find).

It's the just-use-Lisp tone of what you said that I object to. Learn from Lisp, yes, absolutely. Using it, though...

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
parent "Andrey Khropov" <andkhropov_nosp m_mtu-net.ru> writes:
Steve Horne wrote:

 : I imagine that if some of the D power users
 : wrapped themselves in Lisp for a while, they'd be able to do for themselves
 : what they beg Walter to do for them in D.
 
 In semantic terms that's true, but in syntax terms it's not - at least
 not without going to the extreme compiler building stuff I mentioned
 earlier. Lisp may provide the ability to set up whatever semantics you
 want, but if you have to access that through a syntax that you just
 can't get on with, that's a serious handicap. And the 'what they beg
 Walter to do for them' is probably as much about syntax as semantics.
I agree. Good point.

I have to admit - Lisp is the most powerful language I'm aware of. But it doesn't have much syntax sugar for common operations. And everything looks the same.

Take this chunk from the spectralnorm program from the shootout for example:

---------------------------------------------------
(dotimes (i n)
  (incf vBv (* (aref u i) (aref v i)))
  (incf vv (* (aref v i) (aref v i))))
---------------------------------------------------

I think D is a bit more readable and concise:

---------------------------------------------------
foreach(i, vi; v)
{
  vBv += u[i] * vi;
  vv += vi * vi;
}
---------------------------------------------------

--
AKhropov
Nov 29 2006
prev sibling parent reply Brad Anderson <brad dsource.org> writes:
Jarrett Billingsley wrote:
 "Brad Anderson" <brad dsource.org> wrote in message 
 news:ekhh7s$2e7$1 digitaldaemon.com...
 
 Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and 
 ready
 to go, while the imperative languages try to inch closer over the decades.
In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;)
But the existing prefix notation is exactly why it can be extended so many ways with macros.  Change that and you lose most of, or at least a lot of, the metaprogramming facilities (see Dylan).  You don't even have Lisp anymore.  It's why I'm skeptical of how far imperative languages can go with metaprogramming before it turns into an awful beast.  I kind of hope I'm wrong and D can pull a lot of it off.

As Georg said, Lisp is scary in its breadth and capability.  As Steve Horne lamented, there are no high-level standard libs, because it's so easy to roll your own.  Any standard lib that's come along has not served the needs of all, so they roll their own anyway.  Those two things have slowed the adoption of Lisp.  Potentially syntax, too, see below.

My original post in this thread was more of an observation that languages keep getting more sophisticated, because the power users drive the compiler writers further and further.  D is here because C++ can't grow as nimbly as it used to, and Walter has some cool ideas to add to it.  And here's Lisp, sitting there, arguably the most powerful language ever created, and people are afraid of its power, its syntax, its (insert excuse here) so they choose to reimplement.

I'd hate for people to get pretty far, hit a wall on something, and then look and see how easy it is to do in Lisp.  On the other hand, I would never try to quash the efforts people in D-land make.  It's just an observation and a curiosity on my part.  Maybe Lisp isn't all that I'm giving it credit for.
 
 I really can't get past the parentheses.  I know Georg said it's an excuse, 
 but I really, truly cannot understand most Lisp code because I can't tell 
 which right paren out of a group of six is closing which left paren.  I'm 
 sure bracket highlighting in a code editor can help, but why should that be 
 necessary?  I'm sure a good deal of those parens can be stripped out, or 
 replaced by other brackets, or just moved around to get a more algebraic 
 syntax. 
That's probably because you're comfortable with the imperative langs, and 'C' style.  Once people get over DVORAK keyboards, they claim the difference is amazing.  S-expressions and prefix notation really is a powerful thing once you see all the benefits.  You don't have to do quoted blocks like Nemerle.  And most people get past their difficulties of parens with indentation.  The parens on the end don't really matter.  It's where the indenting starts.  And of course, good editors take care of the parens and the indenting.

Even without the metaprogramming facilities the prefix notation allows, you end up with programs that speak more directly to the problem you're trying to solve.  That may just be functional languages in general, though, ending up with code that seemingly is a language designed for your domain.

For the uninformed, check out this great online book.  I've only given two chapters: a whirlwind tour, and macros.  Even if you're hooked on D, this makes you think differently about programming, and got me thinking of cool uses for delegates in D and much more.

Quick Intro: http://gigamonkeys.com/book/practical-a-simple-database.html
Macros: http://gigamonkeys.com/book/macros-defining-your-own.html

For D to approach some of this extensibility would be phenomenal, and as Georg suggested, we may be closer than anyone suspects..

BA
Nov 29 2006
parent Steve Horne <stephenwantshornenospam100 aol.com> writes:
On Wed, 29 Nov 2006 11:25:43 -0500, Brad Anderson <brad dsource.org>
wrote:

But the existing prefix notation is exactly why it can be extended so many
ways with macros.  Change that and you lose most of, or at least a lot of, the
metaprogramming facilities (see Dylan).
So don't change it. Just add a standard syntax-sugar library on top for expressions with precedence and associativity (which sadly Lisp - or at least Scheme - macros can't handle, but which can be handled by using a more Von Neumann approach).

Nemerle has been mentioned recently, and I've been reading up a bit today, and my impression is very positive. You start out, from the beginning, using real world high level practical tools. There's a heavy functional flavour, so it helps to have played with something like Haskell in the past, but get past the "this is different" and there is real workhorse stuff. And of course that goes beyond the fact that this is a usable high-level language out of the box.

Making it a .NET language is both an obvious plus point and my main reservation. It means there is a solid set of libraries to use, without the need for a whole bunch of Nemerle-specific porting. The downside obviously being that it is limited to the .NET platform - no systems level coding etc.

Anyway, it's not until you've got the tools to do 99% of your work that the 'by the way, the if/else, for loop etc etc are just standard library macros - you can do it different if you really need to' becomes an issue.

Some people have mentioned a key problem with metaprogramming/code generation in terms of tools (e.g. the debugging issue). Well, I'm glad I've picked up the 'concept oriented' terminology from that XLR link because it helps me say this more easily...

It doesn't matter whether a concept is implemented directly in the compiler or in a library. What matters is whether the tools understand the concept. If you have a standard set of concepts in a library that handle 99% of all requirements, tools like debuggers can be written to be aware of them, and so the problem only relates to the 1% of code. The principle is not so different from having source-level debugging instead of assembler-level debugging.

And even for that 1%, the alternatives are all IMO just as bad as generated code anyway. Code that has been force-fitted to badly matched language concepts is hard to understand and maintain, just like the generated code.

Of course if the library that describes a new concept could also give special instructions to the debugger on how to present it, along with perhaps documentation handling instructions etc etc, then that would be a very good thing. It would mean that you could treat a mature metaprogramming library much as you would a built-in compiler feature - so long as the library itself is working, you only worry about what you are doing with it, not the internals of how it works.

--
Remove 'wants' and 'nospam' from e-mail.
Nov 29 2006
prev sibling parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
Steve Horne wrote:
 On Mon, 27 Nov 2006 14:28:15 +0100, "Frank Benoit (keinfarbton)"
 <benoit tionex.removethispart.de> wrote:
 
 Well, D might be faster, but it shows that the compile time can increase
 very fast.
When doing the metaprogramming, the compiler is basically acting as an interpreter as opposed to a compiler. It is nothing more advanced or freaky than that.
Interesting...particularly as what I really want is basically the opposite. I expect PyD to just exactly fill my needs in a few more months.

Currently I'm working in Python, and my only option has been Pyrex (not a bad choice in itself!), but PyD should be a much better choice, as D is a better underlying match to Python than C is, and what I'll be doing is largely translating chunks of Python into a compilable language for speed.

Of course, what would be really interesting would be a runtime version of D, with runtime type assignment, etc., but without the kind of bloat that would occur if this were done with templates.

(OTOH, I must admit that I'm guessing at the amount of bloat that a generic template would add.  They don't add that much in Eiffel or Ada code, but they were impossibly bloated the last time I tried it in C++ [admittedly that's over a decade ago].)
Nov 27 2006
parent Sean Kelly <sean f4.ca> writes:
Charles D Hixson wrote:
 
 Of course, what would be really interesting would be a runtime version 
 of D, with runtime type assignment, etc., but without the kind of bloat 
 that would occur if this were done with templates.
 
 (OTOH, I must admit that I'm guessing at the amount of bloat that a 
 generic template would add.  They don't add that much in Eiffel or Ada 
 code, but they were impossibly bloated the last time I tried it in C++ 
 [admittedly that's over a decade ago].)
Things have improved greatly, though actual results still vary widely from compiler to compiler.  Here's a link to the C++ performance report compiled a few years ago:

http://www.research.att.com/~bs/performanceTR.pdf

The crucial part of reducing code size is for the compiler to recognize that the code for many specializations is actually the same (pointers to different class types, for example), and to eliminate duplicates.  This can be done manually in library code as well (containers might use a thin wrapper on top of a more traditional class that stores values as void*), if the compiler optimizations are not sufficient.

Sean
Nov 28 2006
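To make the "thin wrapper" technique described above concrete, here is a small hypothetical D sketch (not from any real library): a single untyped implementation class stores void*, and the typed template only adds trivial casts and forwarding, so each new instantiation contributes almost no extra code.

    // Untyped implementation shared by every instantiation.
    class ListImpl
    {
      private void*[] items;

      void push(void* p)  { items ~= p; }
      void* get(size_t i) { return items[i]; }
      size_t length()     { return items.length; }
    }

    // Thin typed wrapper: only these small forwarding methods are
    // generated per element type T, e.g. List!(int) and List!(double)
    // both reuse ListImpl's code.
    class List(T)
    {
      private ListImpl impl;

      this() { impl = new ListImpl; }

      void push(T* p)     { impl.push(cast(void*) p); }
      T* get(size_t i)    { return cast(T*) impl.get(i); }
      size_t length()     { return impl.length(); }
    }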