
digitalmars.D - project oriented

reply davidl <davidl nospam.org> writes:
The module package system still stays in the state of the C age. It's file based, and the namespace is a file-based one. The namespace and distributed packaging is a must nowadays, and the compiler should be project oriented and take project information as the compiling base. Also an IDE is quite useful for providing project templates.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
May 12 2009
next sibling parent Christopher Wright <dhasenan gmail.com> writes:
davidl wrote:
 The module package system still stays in the state of the C age. It's 
 file based, and the namespace is a file-based one. 
 The namespace and distributed packaging is a must nowadays and the 
 compiler should be project oriented and take project information as the 
 compiling base. Also an IDE is quite useful for providing project 
 templates.
That would most likely be a good thing. Things you could get from it:
- internal classes can be enumerated at compile time (maybe eventually)
- faster partial compilation
- potential of virtual templates for internal classes (I can dream)
Probably some others that I can't think of right now.
May 12 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello davidl,

 The module package system still stays in the state of the C age. It's
 file based, and the namespace is a file-based
 one. The namespace and distributed packaging is a must nowadays and
 the compiler should be project oriented and take project information
 as the compiling base. Also an IDE is quite useful for providing
 project templates.
 
The up side to file based packaging is that the compiler can find the files without needing extra information. There are several tools that can build a whole D program from just the code. With the C# type of system, the compiler/build system needs to have a metadata file that lists all the .d files to be built, adding yet another piece of redundant complexity. That metadata is a total pain as soon as you need to work with non-language aware tools.
May 12 2009
parent Graham St Jack <Graham.StJack internode.on.net> writes:
On Tue, 12 May 2009 21:12:51 +0000, BCS wrote:

 Hello davidl,
 
 The module package system still stays in the state of the C age. It's
 file based, and the namespace is a file-based
 one. The namespace and distributed packaging is a must nowadays and
 the compiler should be project oriented and take project information
 as the compiling base. Also an IDE is quite useful for providing
 project templates.
 
 
The up side to file based packaging is that the compiler can find the files without needing extra information. There are several tools that can build a whole D program from just the code. With the C# type of system, the compiler/build system needs to have a metadata file that lists all the .d files to be built, adding yet another piece of redundant complexity. That metadata is a total pain as soon as you need to work with non-language aware tools.
Good point. I like the current system's simplicity, and changing it as suggested would add a lot of hassle.
May 14 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
BCS:

 a total pain as soon as you need to work with non-language aware tools.
I think Microsoft thinks that an IDE is a part of a modern language, so they have tried to design a language that almost needs an IDE. The Fortress language looks to need an IDE even more. There are languages (most Smalltalks, and some Forths and Logos) that are merged with their development environment. Bye, bearophile
May 15 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
bearophile wrote:
 BCS:

 a total pain as soon as you need to work with non-language aware tools.
I think Microsoft thinks that an IDE is a part of a modern language, so they have tried to design a language that almost needs an IDE. The Fortress language looks to need an IDE even more. There are languages (most Smalltalks, and some Forths and Logos) that are merged with their development environment.
Hmm. Come to think of it, that's not totally unreasonable. One might even admit, it's modern.

In the Good Old Days, when it was usual for an average programmer to write parts of the code in ASM (that was the time before the late eighties -- be it Basic, Pascal, or even C, some parts had to be done in ASM to get a bearable user experience when the mainframes had less power than today's MP3 players), the ASM programming was very different on, say, Zilog, MOS, or Motorola processors. The rumor was that the 6502 was made for hand coded ASM, whereas the 8088 was more geared towards automatic code generation (as in C compilers, etc.). My experiences of both certainly seemed to support this.

Precisely the same thinking can be applied to programming languages and whether one should use them with an IDE or "independent tools". (At the risk of flame wars, opinion storms, etc.) I'd venture to say that the D programming language is created for the Hand Coder, meaning somebody with an independent text editor (Notepad, vi, Emacs, or whatever) and a command line compile invocation. Compare that with C# (admittedly, I'm not familiar with the language itself), or Java, as an even better example.

Java, as a language, is astonishingly trivial to learn. IMHO, it should take at most half the time that D1 does. The book "The Java Programming Language" (by Arnold and Gosling, 3p 1996) is a mere 300 pages, printed in a huge font, with plenty of space before and after subheadings, on thick paper (as opposed to the 4 other books published at the same time, that Sun presumed (quite rightly) folks would order together), so it wouldn't look inferior on the book shelf. But, to use Java at any productive rate, you simply have to have an IDE that helps with class and method completion, class tree inspection, and preferably two-way UML tools.

So, in a way, Microsoft may be right in assuming that (especially when their thinking anyway is that everybody sits at a computer that's totally dedicated to the user's current activity anyhow) preposterous horse power is (or, should be) available at the code editor.

It's not unthinkable that this actually is The Way of The Future.

----

If we were smart with D, we'd find out a way of leapfrogging this development: a language more powerful than C# or C++, more practical than Haskell, Scheme, Ruby, &co, and more maintainable than C or Perl, but which *still* is Human Writable. All we need is some outside-of-the-box thinking, and we might reap some overwhelming advantages when we combine *this* language with the IDEs and the horsepower that the modern drone takes for granted. Easier parsing, CTFE, actually usable templates, practical mixins, pure functions, safe code, you name it! We have all the bits and pieces to really make hand writing + IDE assisted program authoring a superior reality.

"Ain't nobody gonna catch us never!"
May 15 2009
next sibling parent Christopher Wright <dhasenan gmail.com> writes:
Georg Wrede wrote:
 If we were smart with D, we'd find out a way of leapfrogging this 
 development: a language more powerful than C# 
 or C++, more practical than Haskell, Scheme, Ruby, &co, and more 
 maintainable than C or Perl, but which *still* is Human Writable.
More importantly, it should be human readable.
May 16 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Georg,

 So, in a way, Microsoft may be right in assuming that (especially when
 their thinking anyway is that everybody sits at a computer that's
 totally dedicated to the user's current activity anyhow) preposterous
 horse power is (or, should be) available at the code editor.
I think that any real programming project nowadays (regardless of language) needs tools to help the programmer. The difference between D and C# is that with C# you won't get much at all done without one.
 
 It's not unthinkable that this actually is The Way of The Future.
 
 ----
 
 If we were smart with D, we'd find out a way of leapfrogging this
 development: a language more powerful than C#
 or C++, more practical than Haskell, Scheme, Ruby, &co, and more
 maintainable than C or Perl, but which *still* is Human Writable. All
 we need is some outside-of-the-box thinking, and we might reap some
 overwhelming advantages when we combine *this* language with the IDEs
 and the horsepower that the modern drone takes for granted.
I think we /already/ have a language that will get there sooner or later. D is committed to a path where that is its only /logical/ conclusion.
 
 "Ain't nobody gonna catch us never!"
 
Well, not if we play our hand right. Nothing man ever made is invulnerable to man.
May 16 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
BCS wrote:
 Hello Georg,
 
 So, in a way, Microsoft may be right in assuming that (especially when
 their thinking anyway is that everybody sits at a computer that's
 totally dedicated to the user's current activity anyhow) preposterous
 horse power is (or, should be) available at the code editor.
I think that any real programming project nowadays (regardless of language) needs tools to help the programmer. The difference between D and C# is that with C# you won't get much at all done without one.
I can't agree with this. Most of the time I use an IDE for the autocompletion, not much for the build-and-jump-to-error stuff. And I don't see D being easier with regards to remembering what's the name of that function, which members a class has, or in which module all these are. In C# without an IDE you can get away with pretty much everything, except you'll be slower at it (same goes for D without an IDE). Again, this also applies to Java. When I started using Java I used the command line and an editor with just syntax highlighting, and made programs of several classes without problem. Refactoring was a PITA, and I'm thinking it's like that in D nowadays. :-P
May 17 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Ary Borenszweig wrote:
 BCS wrote:
 Hello Georg,

 So, in a way, Microsoft may be right in assuming that (especially when
 their thinking anyway is that everybody sits at a computer that's
 totally dedicated to the user's current activity anyhow) preposterous
 horse power is (or, should be) available at the code editor.
I think that any real programming project nowadays (regardless of language) needs tools to help the programmer. The difference between D and C# is that with C# you won't get much at all done without one.
I can't agree with this. Most of the time I use an IDE for the autocompletion, not much for the build-and-jump-to-error stuff. And I don't see D being easier with regards to remembering what's the name of that function, which members a class has, or in which module all these are. In C# without an IDE you can get away with pretty much everything, except you'll be slower at it (same goes for D without an IDE). Again, this also applies to Java. When I started using Java I used the command line and an editor with just syntax highlighting, and made programs of several classes without problem. Refactoring was a PITA, and I'm thinking it's like that in D nowadays. :-P
An IDE provides for a different and IMO much better work flow than using a text editor. A programmer using a text editor + batch/command-line compiler implements a single threaded work flow:

while (not finished) {
  1. write code
  2. run compiler
  3. run debugger (optional)
}

An IDE allows a concurrent implementation: you have two threads that run simultaneously - a "Programmer" and an "IDE". The programmer "thread" writes code and at the same time the IDE "thread" parses it and provides feedback: marks syntax errors, provides suggestions, reminds you of missing imports, etc. The second approach is clearly superior.

Btw, this has nothing to do with the language. For instance, there are eclipse/netbeans plugins for C++, and once clang is finished and integrated with those, C++ will have the full power of a modern IDE, just like Java or Smalltalk have. Of course the language can be designed to make this easier, but it is just as possible for non cooperating languages like C++.

IMO, designing the language to support this better work-flow is a good decision made by MS, and D should follow it instead of trying to get away without an IDE.
May 17 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Yigal Chripun:
 IMO, designing the language to support this better work-flow is a good 
 decision made by MS, and D should follow it instead of trying to get 
 away without an IDE.
Do you have some more focused suggestions, then? Bye, bearophile
May 17 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
bearophile wrote:
 Yigal Chripun:
 IMO, designing the language to support this better work-flow is a good 
 decision made by MS, and D should follow it instead of trying to get 
 away without an IDE.
Do you have some more focused suggestions, then? Bye, bearophile
first and foremost, the attitude of people about IDEs needs to be changed. 1st rule of commerce is "the customer is always right", and the fact is that the industry is relying on tools such as IDEs. If we ignore this fact D will become another niche academic language that no one uses.

second, D needs to update its stone age compilation model copied from C/C++. It doesn't have to be exactly like C#'s, but we need to throw away the current legacy model. Java has convenient Jar files: you can package everything into nice modular packages with optional source code and documentation. similar stuff is done in .net.
May 17 2009
parent reply BCS <none anon.com> writes:
Hello Yigal,

 second, D needs to update its stone age compilation model copied from
 C/C++. It doesn't have to be exactly like C#'s, but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail!

One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable.

(Note: the above reasons apply to a pure D app; as for non pure D apps, you're toast anyway, as D or the other language will have to fit in the opposite language's model and something will always leak. The best bet in that system is the simplest system possible, and that too is "just text files".)
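(For illustration: that zero-metadata build really is a one-liner. The file names here are hypothetical:

  dmd main.d net/client.d util/strings.d -ofmyapp

You hand dmd the .d files and -of names the output executable; no project file is involved.)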
May 17 2009
next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Sun, May 17, 2009 at 7:15 PM, BCS <none anon.com> wrote:

 second, D needs to update its stone age compilation model copied from
 C/C++. It doesn't have to be exactly like C#'s, but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail! One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable.
I don't think changing from a decades-old 'one object file per source file' compilation model will make you sacrifice any of that. He's proposing something else, like a custom object format. It has nothing to do with the way source is stored, or with how you invoke the compiler. Java hasn't destroyed any of that by using .class files, has it? We already have a proof-of-concept of this sort of thing for D: LDC. The LLVM intermediate form is far more amenable to cross-module and link-time optimization.
May 17 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s article
 On Sun, May 17, 2009 at 7:15 PM, BCS <none anon.com> wrote:
 second, D needs to update its stone age compilation model copied from
 C/C++. It doesn't have to be exactly like C#'s, but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail! One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable.
I don't think changing from a decades-old 'one object file per source file' compilation model will make you sacrifice any of that. He's proposing something else, like a custom object format. It has nothing to do with the way source is stored, or with how you invoke the compiler. Java hasn't destroyed any of that by using .class files, has it? We already have a proof-of-concept of this sort of thing for D: LDC. The LLVM intermediate form is far more amenable to cross-module and link-time optimization.
And how about certain metaprogramming things that are otherwise infeasible? To me, the lack of ability to use templates to add virtual functions to classes seems like a pretty severe leak of D's compilation model into higher levels of abstraction. The same can be said for the lack of ability to get information about classes that inherit from a given class via compile time reflection. Change the compilation model to something that is modern and designed with these things in mind and the problem goes away. As an example use case, a few months back, I wrote a deep copy template that would generate functions to deep copy anything you threw at it using only compile time reflection and a little bit of RTTI. The only problem is that, because I could not get information about derived classes at compile time, I couldn't make it work with classes whose runtime type was a subtype of the compile time type of the reference.
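(A minimal sketch of the kind of reflective deep copy described above; the names are hypothetical, and it assumes D2-style tupleof and a default constructor:

  T deepCopy(T)(T src) {
      static if (is(T == class)) {
          if (src is null) return null;
          auto dst = new T;  // assumes T has a default constructor
          foreach (i, member; src.tupleof)
              dst.tupleof[i] = deepCopy(member);  // copy field by field
          return dst;
      } else {
          return src;  // value types copy by assignment
      }
  }

If src's runtime type is a subclass of T, only T's fields get copied - exactly the failure mode described, since no compile-time enumeration of derived classes exists.)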
May 17 2009
parent reply grauzone <none example.net> writes:
So what would you suggest to make the things you mentioned work? That was:
1. templated virtual functions
2. finding all derived classes (from other source files)

The problem is that D wants to support dynamic linking on the module 
level, more or less.

I still wonder how serialization is supposed to work. Yeah, we can get 
all information at compile time using __traits. But right now, we have to 
register all classes manually using a template function, like "void 
registerForSerialization(ClassType)();". What you'd actually need is to 
iterate over all classes in your project. So this _needs_ a solution.
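(A sketch of what that manual registration boils down to; the factory table here is hypothetical:

  // global factory table, keyed by the class's fully qualified name
  Object function()[char[]] factories;

  void registerForSerialization(ClassType)() {
      factories[ClassType.classinfo.name] =
          function Object() { return new ClassType; };
  }

  // usage: once per class, by hand
  // registerForSerialization!(MyWidget)();

Every class still has to be spelled out somewhere by the programmer, which is why iterating over all classes in a project needs compiler or build-tool support.)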

My best bet would be to allow some kind of "module preprocessor": some 
templated piece of code is called each time a module is compiled. 
Something like "dmd a.d b.d c.d -preprocessor serialize.d". serialize.d 
would be able to iterate over the members of each module (a, b, c) at 
compiletime.


(C# does this kind of thing with attributes and runtime reflection over members and the like.)
May 17 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from grauzone (none example.net)'s article
 So what would you suggest to make the things you mentioned work? That was:
 1. templated virtual functions
 2. finding all derived classes (from other source files)
 The problem is that D wants to support dynamic linking on the module
 level, more or less.
Well, if you use dynamic linking then all bets are off. As long as you compile the project into a single binary using static linking, though, you're good. This would solve a large portion of the cases.
May 18 2009
prev sibling parent BCS <none anon.com> writes:
Hello grauzone,

 So what would you suggest to make the things you mentioned work? That
 was:
 1. templated virtual functions
 2. finding all derived classes (from other source files)
 The problem is that D wants to support dynamic linking on the module
 level, more or less.
 
 I still wonder how serialization is supposed to work. Yeah, we can get
 all information at compile time using __traits. But right now, we had
 to register all classes manually using a template function, like "void
 registerForSerialization(ClassType)();". What you'd actually need is
 to iterate over all classes in your project. So this _needs_ a
 solution.
I'm very interested in any ideas you have for this, as I'm planning on writing just such a library. Currently I'm at the "list the problems I see" stage and I'll have a post on it some time soon.
May 18 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Hello Yigal,
 
 second, D needs to update its stone age compilation model copied from
 C/C++. It doesn't have to be exactly like C#'s, but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail! One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable. (Note: the above reasons apply to a pure D app; as for non pure D apps, you're toast anyway, as D or the other language will have to fit in the opposite language's model and something will always leak. The best bet in that system is the simplest system possible, and that too is "just text files".)
hmm. that is *not* what I was suggesting. I was discussing the compilation model and the object file problems. D promises link-time compatibility with C but that's bullshit - on Windows you can't link C obj files with D object files _unless_ you use the same compiler vendor (DMD & DMC) or you use some conversion tool, and that doesn't always work. obj files are arcane. each platform has its own format and some platforms have more than one format (Windows). compare to Java, where your class files will run on any machine. I'm not suggesting copying Java's model letter for letter or using a VM either, but rather using a better representation.

one other thing, this thread discusses also the VS project files. This is completely irrelevant. those XML files are VS specific and their complexity is MS' problem. Nothing prevents a developer from using different build tools like make, rake or scons with C# sources, since VS comes with a command line compiler. the issue is not the build tool but rather the compilation model itself.
May 17 2009
next sibling parent reply grauzone <none example.net> writes:
Yigal Chripun wrote:
 BCS wrote:
 Hello Yigal,

 second, D needs to update its stone age compilation model copied from
 C/C++. It doesn't have to be exactly like C#'s, but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail! One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable. (Note: the above reasons apply to a pure D app; as for non pure D apps, you're toast anyway, as D or the other language will have to fit in the opposite language's model and something will always leak. The best bet in that system is the simplest system possible, and that too is "just text files".)
hmm. that is *not* what I was suggesting. I was discussing the compilation model and the object file problems. D promises link-time compatibility with C but that's bullshit - on Windows you can't link C obj files with D object files _unless_ you use the same compiler vendor (DMD & DMC) or you use some conversion tool, and that doesn't always work.
Just because it doesn't work on your shitty (SCNR) platform, it doesn't mean it's wrong. On Unix, there's a single ABI for C, and linking Just Works (TM). But I kind of agree. The most useful thing about compiling each module to an object file is to enable separate compilation. But this is useless: it doesn't work because of bugs, it doesn't "scale" (because a single module is likely to have way too many transitive dependencies).
 I'm not suggesting copying Java's model letter for letter or using a VM 
 either, but rather using a better representation.
Ew, that's even worse. Java's model is outright retarded. I'd just compile a D project to a single (classic) object file. That would preserve C compatibility. Because the compiler knows _all_ D modules at compilation, we could enable some spiffy stuff, like virtual template functions or inter-procedural optimization.
May 17 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
grauzone wrote:
 
 Just because it doesn't work on your shitty (SCNR) platform, it doesn't 
 mean it's wrong. On Unix, there's a single ABI for C, and linking Just 
 Works (TM).
do YOU want D to succeed? that shitty platform is 90% of the market.
 
 But I kind of agree. The most useful thing about compiling each module 
 to an object file is to enable separate compilation. But this is 
 useless: it doesn't work because of bugs, it doesn't "scale" (because a 
 single module is likely to have way too many transitive dependencies).
 
 I'm not suggesting copying Java's model letter for letter or using a VM 
 either, but rather using a better representation.
Ew, that's even worse. Java's model is right out retarded. I'd just compile a D project to a single (classic) object file. That would preserve C compatibility. Because the compiler knows _all_ D modules at compilation, we could enable some spiffy stuff, like virtual template functions or inter-procedural optimization.
Instead of compiling per module, it should be more coarse grained, like C# assemblies. In C# you can get a "module" file (IIRC), but that's a rare thing. usually you work with assemblies.
May 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Yigal Chripun wrote:
 grauzone wrote:
 Just because it doesn't work on your shitty (SCNR) platform, it 
 doesn't mean it's wrong. On Unix, there's a single ABI for C, and 
 linking Just Works (TM).
do YOU want D to succeed? that shitty platform is 90% of the market.
 But I kind of agree. The most useful thing about compiling each module 
 to an object file is to enable separate compilation. But this is 
 useless: it doesn't work because of bugs, it doesn't "scale" (because 
 a single module is likely to have way too many transitive dependencies).

 I'm not suggesting copying Java's model letter for letter or using a 
 VM either, but rather using a better representation.
Ew, that's even worse. Java's model is right out retarded. I'd just compile a D project to a single (classic) object file. That would preserve C compatibility. Because the compiler knows _all_ D modules at compilation, we could enable some spiffy stuff, like virtual template functions or inter-procedural optimization.
Instead of compiling per module, it should be more coarse grained, like C# assemblies. In C# you can get a "module" file (IIRC), but that's a rare thing. usually you work with assemblies.
oh, I forgot my last point: for C link-time compatibility you need to be able to _read_ C object files and link them into your executable. you gain little from the ability to _write_ object files. if you want to do a reverse integration (use D code in your C project), you can, and IMO should, have created a library anyway instead of using object files, and the compiler should allow this as a separate option via a flag, e.g. --make-so or whatever
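(For what it's worth, dmd already has a static-library cousin of such a flag; the file names here are hypothetical:

  dmd -lib a.d b.d c.d -ofmylib.lib

-lib packs the compiled modules into one library instead of separate object files; a --make-so style switch would be the shared-library counterpart.)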
May 18 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
Yigal Chripun wrote:
 oh, I forgot my last point:
 for C link-time compatibility you need to be able to _read_ C object
 files and link them to your executable. you gain little from the ability
 to _write_ object files.
You gain transitivity. Two compilers for different languages that both produce C object files can link to each other; two compilers that can only read C object files cannot.
 if you want to do a reverse integration (use D code in your C project)
 you can and IMO should have created a library anyway instead of using
 object files and the compiler should allow this as a separate option via
 a flag, e.g. --make-so or whatever
If you can read and write compatible library files, you don't need to read or write compatible object files, since library files can take the place of object files. -- Rainer Deyke - rainerd eldwood.com
May 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Rainer Deyke wrote:
 Yigal Chripun wrote:
 oh, I forgot my last point:
 for C link-time compatibility you need to be able to _read_ C object
 files and link them to your executable. you gain little from the ability
 to _write_ object files.
You gain transitivity. Two compilers for different languages that both produce C object files can link to each other; two compilers that can only read C object files cannot.
good point.
 
 if you want to do a reverse integration (use D code in your C project)
 you can and IMO should have created a library anyway instead of using
 object files and the compiler should allow this as a separate option via
 a flag, e.g. --make-so or whatever
If you can read and write compatible library files, you don't need to read or write compatible object files, since library files can take the place of object files.
that's even better. just allow 2-way usage of C libs and that's it. no need to support the C object file formats directly.
May 18 2009
parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Yigal Chripun wrote:
 Rainer Deyke wrote:
 ...

 If you can read and write compatible library files, you don't need to
 read or write compatible object files, since library files can take the
 place of object files.
that's even better. just allow 2-way usage of C libs and that's it. no need to support the C object file formats directly.
Ummm... IIRC, an .a file is just an archive of .o files. A .lib file in Windows is something similar. If you want to support C libraries, you need to support the object file format as well. -- Daniel
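(You can see this on Unix with the ar tool; the library name is hypothetical:

  ar t libfoo.a    # list the .o members of the archive
  ar x libfoo.a    # extract them back out as plain object files

So reading a .a library means reading object files.)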
May 19 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Yigal,

 BCS wrote:
 
 Hello Yigal,
 
 second, D needs to update its stone age compilation model copied
 from
 C/C++. It doesn't have to be exactly like C#'s,
 but
 we need to throw away the current legacy model.
 Java has convenient Jar files: you can package everything into nice
 modular packages with optional source code and documentation.
 similar stuff is done in .net.
NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight that tooth and nail! One of the best things about D IMNSHO is that a D program is "just a collection of text files". I can, without any special tools, dive in and view or edit any file I want. I can build with nothing but dmd and a command line. I can use the source control system of my choice. And very importantly, the normal build model produces a stand alone OS native executable. (Note: the above reasons apply to a pure D app; as for non pure D apps, you're toast anyway, as D or the other language will have to fit in the opposite language's model and something will always leak. The best bet in that system is the simplest system possible, and that too is "just text files".)
hmm. that is *not* what I was suggesting. I was discussing the compilation model and the object file problems. D promises link-time compatibility with C but that's bullshit - on Windows you can't link C obj files with D object files _unless_ you use the same compiler vendor (DMD & DMC) or you use some conversion tool, and that doesn't always work.
aside from the GCC stack I'm not sure anyone can in general. But this is getting to be a minor point, as VS/GCC are the only compilers I've ever seen used on Windows.
 one other thing, this thread discusses also the VS project files. This
 is completely irrelevant. those XML files are VS specific and their
 complexity is MS' problem. Nothing prevents a developer from using
 different build tools like make, rake or scons with C# sources
 since VS comes with a command line compiler. the issue is not the
 build tool but rather the compilation model itself.
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
May 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 one other thing, this thread discusses also the VS project files. This
 is completely irrelevant. those XML files are VS specific and their
 complexity is MS' problem. Nothing prevents a developer from using
 different build tools like make, rake or scons with C# sources
 since VS comes with a command line compiler. the issue is not the
 build tool but rather the compilation model itself.
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
C# can be built with other build tools without the VS project files; I think I saw this in Scons last time I looked.
May 18 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Yigal Chripun wrote:
 BCS wrote:
 one other thing, this thread discusses also the VS project files. This
 is completely irrelevant. those XML files are VS specific and their
 complexity is MS' problem. Nothing prevents a developer from using
 different build tools like make, rake or scons with C# sources
 since VS comes with a command line compiler. the issue is not the
 build tool but rather the compilation model itself.
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
C# can be built with other build tools without the VS project files; I think I saw this in Scons last time I looked.
Maybe you should back up your statements instead of just guessing. http://www.scons.org/wiki/CsharpBuilder: C# source files *do not contain enough information*. A C# file says nothing about what other files it depends on. -- Daniel
May 19 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Daniel,

 Yigal Chripun wrote:
 
 BCS wrote:
 
 one other thing, this thread discusses also the VS project files.
 This is completely irrelevant. those XML files are VS specific and
 their complexity is MS' problem. Nothing prevents a developer from
 using different build tools like make, rake or scons with C#
 sources since VS comes with a command line compiler. the issue is
 not the build tool but rather the compilation model itself.
 
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
C# can be built with other build tools without the VS project files; I saw this in Scons last time I looked.
Maybe you should back up your statements instead of just guessing. http://www.scons.org/wiki/CsharpBuilder: C# source files *do not contain enough information*. A C# file says nothing about what other files it depends on. -- Daniel
C# could be driven that way by hand, but IMHO that would be about half way from D to using C without even a make file or build script.
May 19 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Reply to Daniel,
 
 Yigal Chripun wrote:

 BCS wrote:

 one other thing, this thread discusses also the VS project files.
 This is completely irrelevant. those XML files are VS specific and
 their complexity is MS' problem. Nothing prevents a developer from
 using different build tools like make, rake or scons with C#
 sources since VS comes with a command line compiler. the issue is
 not the build tool but rather the compilation model itself.
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
C# can be built with other build tools without the VS project files; I saw this in Scons last time I looked.
Maybe you should back up your statements instead of just guessing. http://www.scons.org/wiki/CsharpBuilder: C# source files *do not contain enough information*. A C# file says nothing about what other files it depends on. -- Daniel
C# could be driven that way by hand, but IMHO that would be about half way from D to using C without even a make file or build script.
first, thanks Daniel for the evidence I missed. BCS wrote that a programmer needs to compile all the source files at once to make it work without an IDE. as I already said, he's wrong, and Daniel provided the proof above. sure, you don't get the full power of an IDE that can track all the source files in the project for you; that just means an IDE is worth the money you pay for it.

you can write makefiles or whatever (scons, rake, ant, ...) in the same way you'd do for C and C++. In other words: if you prefer command line tools you get the same experience, and if you do use an IDE you get a *much* better experience. same goes for D - either write your own makefile or use rebuild, which uses the compiler front-end to parse the source files just like you suggested.

where in all of that do you see any contradiction to what I said? again, I said the D compilation model is ancient legacy and should be replaced, and that has nothing to do with the format you prefer for your build scripts.
May 19 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Yigal,

 BCS wrote:
 
 Reply to Daniel,
 
 Yigal Chripun wrote:
 
 BCS wrote:
 
 one other thing, this thread discusses also the VS project files.
 This is completely irrelevant. those XML files are VS specific
 and their complexity is MS' problem. Nothing prevents a developer
 from using different build tools like make, rake or scons with C#
 sources since VS comes with a command line compiler. the
 issue is not the build tool but rather the compilation model
 itself.
 
C# needs those project files to provide information for the compiler to know where to resolve symbols. You might be able to get away with throwing every single .cs/.dll/whatever file in the project at the compiler all at once. (Now if you want to talk about archaic!) Aside from that, how can it find meta-data for your types?
C# can be built with other build tools without the VS project files; I think I saw this in Scons last time I looked.
Maybe you should back up your statements instead of just guessing. http://www.scons.org/wiki/CsharpBuilder: C# source files *do not contain enough information*. A C# file says nothing about what other files it depends on. -- Daniel
C# could be driven that way by hand, but IMHO that would be about half way from D to using C without even a make file or build script.
first, thanks Daniel for the evidence I missed. BCS wrote that a programmer needs to compile all the source files at once to make it work without an IDE. as I already said, he's wrong, and Daniel provided the proof above.
minor point; I said you have to give the compiler all the source files. You might not actually need to compile them all, but without some external meta data, it still needs to be handed the full list because it can't find them on its own. And at that point you might as well compile them anyway.
 sure, you don't get the full power of an IDE that can track all the
 source files in the project for you. That just means that it's worth
 the money you pay for it.
 
 you can write makefiles or what ever (scons, rake, ant, ...) in the
 same way you'd do for C and C++. In other words:
 if you prefer command line tools you get the same experience and if
 you do use an IDE you get a *much* better experience.
 same goes for D - either write your own makefile or use rebuild which
 uses the compiler front-end to parse the source files just like you
 suggested.
 
where did I suggest that?
 where in all of that, do you see any contradiction to what I said?
 again, I said the D compilation model is ancient legacy and should be
 replaced and that has nothing to do with the format you prefer for
 your build scripts.
 
I think that you think I'm saying something other than what I'm trying to say. I'm struggling to make my argument clear but can't seem to put it in words. What I'm saying is that C# is married to VS and that D is married only to the compiler.

My argument is that a D project can be done as nothing but a collection of .d files. With C# this is theoretically possible, but from any practical standpoint it's not going to be done. There is going to be some extra files that list, in some form, extra information the compiler needs to resolve symbols and figure out where to look for stuff. In any practical environment this extra bit that C# is all but forced to have (and D doesn't) will be maintained by some sort of IDE.

To put it quantitatively: productivity on a scale of 0 to whatever:

D w/o IDE -> 10
D w/ IDE -> 100+

C# w/o an IDE is, I'd bet, very low on MS "things we care about" list.
May 19 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:

  > minor point; I said you have to give the compiler all the source files.
 You might not actually need to compile them all, but without some 
 external meta data, it still needs to be handed the full list because it 
 can't find them on its own. And at that point you might as well compile 
 them anyway.
you are only considering small hobby projects. that's not true for big projects where you do not want to build all at once. Think of DWT for instance. besides, you do NOT need to provide all sources, not even just for partially processing them to find the symbols. In C# you compile a bunch of sources into an assembly all at once, and you provide the list of other assemblies your code depends on. so the dependency is on the package level rather than on the file level. this makes so much more sense since each assembly is a self contained unit of functionality.
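(For illustration, a typical C# invocation of that shape; the names here are hypothetical:

  csc /target:library /out:MyLib.dll /reference:Core.dll A.cs B.cs C.cs

The sources of the assembly are compiled together, and dependencies are declared per assembly via /reference, not per file.)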
 
 sure, you don't get the full power of an IDE that can track all the
 source files in the project for you. That just means that it's worth
 the money you pay for it.

 you can write makefiles or what ever (scons, rake, ant, ...) in the
 same way you'd do for C and C++. In other words:
 if you prefer command line tools you get the same experience and if
 you do use an IDE you get a *much* better experience.
 same goes for D - either write your own makefile or use rebuild which
 uses the compiler front-end to parse the source files just like you
 suggested.
where did I suggest that?
I replied to both you and Daniel. I think I was referring to what Daniel said here.
 
 where in all of that, do you see any contradiction to what I said?
 again, I said the D compilation model is ancient legacy and should be
 replaced and that has nothing to do with the format you prefer for
 your build scripts.
I think that you think I'm saying something other than what I'm trying to say. I'm struggling to make my argument clear but can't seem to put it in words. What I'm saying is that C# is married to VS and that D is married only to the compiler.
I understand your thesis and disagree with it. what i'm saying is that C# is not really married to VS. VS is just a fancy text-editor; the real IDE functionality comes from Re-sharper or a similar offering.
 
 My argument is that a D project can be done as nothing but a collection 
 of .d files. With C# this is 
 theoretically possible, but from any practical standpoint it's not going 
 to be done. There is going to be some extra files that list, in some 
 form, extra information the compiler needs to resolve symbols and figure 
 out where to look for stuff. In any practical environment this extra bit 
 that C# is all but forced to have (and D doesn't) will be 
 maintained by some sort of IDE.
this is wrong. you cannot have a big project based solely on .d files. look at DWT as an example. no matter what tool you use, let's say DSSS, it still has a config file of some sort which contains that additional meta-data. a DSSS config file might be shorter than what's required for C#, but that's thanks to rebuild, which embeds the entire DMDFE. in practice, both languages need more than just the compiler.
 
 To put it quantitatively:
 
 productivity on a scale of 0 to whatever
 
 D w/o IDE -> 10
 
 D w/ IDE -> 100+
 
 C# w/o an IDE is, I'd bet,
 very low on MS "things we care about" list.
 
May 20 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Yigal,

 BCS wrote:
 
 minor point; I said you have to give the compiler all the source
 files. You might not actually need to compile them all, but without
 some external meta data, it still needs to be handed the full list
 because it can't find them on its own. And at that point you might
 as well compile them anyway.
 
 you are only considering small hobby projects. that's not true for big
 projects where you do not want to build all at once. Think of DWT for
 instance. besides, you do NOT need to provide all sources, not even
 just for partially processing them to find the symbols.
 
 In C# you
 compile a bunch of sources into an assembly all at once and you provide
 the list of other assemblies your code depends on. so the dependency is
 on the package level rather than on the file level. this makes so much
 more sense since each assembly is a self contained unit of
 functionality.
That is more or less what I thought it was. Also, that indicates that C# is stuck with the "big dumb all or nothing build", where a sub part of a program is either up to date, or rebuilt by recompiling everything in it.
 where in all of that, do you see any contradiction to what I said?
again, I said the D compilation model is ancient legacy and should be replaced and that has nothing to do with the format you prefer for your build scripts.
I think that you think I'm saying something other than what I'm trying to say. I'm struggling to make my argument clear but can't seem to put it in words. What I'm saying is that C# is married to VS and that D is married only to the compiler.
I understand your thesis and disagree with it. what i'm saying is that C# is not really married to VS.
Maybe I should have said it's married to having *an IDE*; it's just VS by default and by design.
 VS is just a fancy text-editor with lots of bells and
 whistles; the real IDE functionality comes from Re-sharper
 or a similar offering.
Last I heard Re-Sharper is a VS plugin, not an IDE in its own right, and even if that has changed, it's still an IDE. Even so, my point is Any IDE vs. No IDE, so it doesn't address my point.
 My argument is that a D project can be done as nothing but a
 collection of .d files. With C#
 this is theoretically possible, but from any practical standpoint
 it's not going to be done. There is going to be some extra files that
 list, in some form, extra information the compiler needs to resolve
 symbols and figure out where to look for stuff. In any practical
 environment this extra bit that C# is all but forced to have
 (and D doesn't) will be maintained by some sort of IDE.
 
this is wrong. you cannot have a big project based solely on .d files. look at DWT as an example. no matter what tool you use, let's say DSSS, it still has a config file of some sort which contains that additional meta-data.
So DWT depends on DSSS's meta data. That's a design choice of DWT, not D. D projects can (even if some don't) be practically designed so that they don't need that meta data whereas, I will assert, C# projects can't do away with it.

--------------

I'm fine with any build system you want to have implemented as long as a tool stack can still be built that works like the current one. That is that it can practically:

- support projects that need no external meta data
- produce monolithic OS native binary executables
- work with the only language aware tool being the compiler

I don't expect it to require that projects be done that way and I wouldn't take any issue if a tool stack were built that didn't fit that list. What I /would/ take issue with is if the language (okay, or DMD in particular) were altered to the point that one or more of those *couldn't* be done.
May 20 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:

 In C# you
 compile a bunch of sources into an assembly all at once and you provide
 the list of other assemblies your code depends on. so the dependency is
 on the package level rather than on the file level. this makes so much
 more sense since each assembly is a self contained unit of
 functionality.
That is more or less what I thought it was. Also, that indicates that C# is stuck with the "big dumb all or nothing build", where a sub part of a program is either up to date, or rebuilt by recompiling everything in it.
That's what I've been saying all along. However I disagree with your assertion that this model is bad. It makes much more sense than the C++/D model. the idea here is that each self contained sub-component is compiled by itself. this self contained component might as well be a single file; nothing in the above prevents this. consider a project with 100 files where you have one specific feature implemented by 4 tightly coupled classes which you put in separate files. each of the files depends on the rest. what's the best compiling strategy here? if you compile each file separately then you parse all 4 files for each object file, which is completely redundant and makes little sense, since you'll need to recompile all of them anyway because of their dependencies.
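(Concretely, with four mutually importing hypothetical files a.d, b.d, c.d, d.d:

  dmd -c a.d             # parses a.d plus b.d, c.d, d.d to resolve imports
  dmd -c b.d             # parses all four again, and so on
  dmd -c a.d b.d c.d d.d # parses each file once and emits all the objects

Separate compilation of tightly coupled modules repeats the same front-end work for every object file.)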
 
 Last I heard Re-Sharper is a VS plugin, not an IDE in its own right, 
 and even if that has changed, it's still an IDE. Even so, my point is 
 Any IDE vs. No IDE, so it doesn't address my point.
 
My use of the term IDE here is a loose one. let me rephrase: yes, Re-sharper is a plugin for VS. without it VS provides just text-editing features and I don't consider it an IDE like eclipse is. Re-sharper provides all the features of a real IDE for VS. so, while it's "just" a plugin, it's more important than VS itself.
 So DWT depends on DSSS's meta data. That's a design choice of DWT not D. 
 D projects can (even if some don't) 
 be practically designed so that they don't need that meta data whereas, 
 I will assert, C# projects can't do away with it.
 
 
 --------------
What I was saying was not specific to DWT, but rather that _any_ reasonably big project will use such a system; it's simply not practical to do otherwise. how would you handle a project with a hundred files that takes 30 min. to compile without any tool whatsoever except the compiler itself?
 
 I'm fine with any build system you want to have implemented as long as a 
 tool stack can still be built that works like the current one. That is 
 that it can practically:
 
 - support projects that need no external meta data
 - produce monolithic OS native binary executables
 - work with the only language aware tool being the compiler
 
 I don't expect it to require that projects be done that way and I 
 wouldn't take any issue if a tool stack were built that didn't fit that 
 list. What I /would/ take issue with is if the language (okay, or DMD 
 in particular) were altered to the point that one or more of those 
 *couldn't* be done.
 
 
your points are skewed IMO.
 - support projects that need no external meta data
this is only practical for small projects and that works the same way in both languages.
 - produce monolithic OS native binary executables
that is unrelated to our topic. Yes .Net uses byte-code and not native executables. I never said I want this aspect to be brought to D.
 - work with the only language aware tool being the compiler
again, only practical for small-mid projects in both languages. In C# you can work with just the compiler, like you would with C or D, and there is an output format for that that is not an assembly.
May 21 2009
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Yigal,

 BCS wrote:
 

 you compile a bunch of sources into an assembly all at once and you
 provide the list of other assemblies your code depends on. so the
 dependency is on the package level rather than on the file level.
 this makes so much more sense since each assembly is a self contained
 unit of functionality.
 
That is more or less what I thought it was. Also, that indicates that C# is stuck with the "big dumb all or nothing build", where a sub part of a program is either up to date, or rebuilt by recompiling everything in it.
 if you compile each file separately then you parse all 4 files for each object file which is completely redundant and makes little sense since you'll need to recompile all of them anyway because of their dependencies.
All of the above is (as far as D goes) an implementation detail[*]. What I'm asserting is that 1) C# more or less forces you to do things that way, and 2) the only practical way to build is from a config file.

[*] I am working very slowly on building a compiler and am thinking of building it so that along with object files, it generates "public export" (.pe) files that have a binary version of the public interface for the module. I'd set it up so that the compiler never parses more than one file per process. If you pass it more, it forks, and when it runs into imports, it loads the .pe files, forking off a process to generate them first if needed.
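(In spirit, this resembles the .di interface headers dmd can already emit:

  dmd -c -H foo.d    # emits foo.o plus foo.di, a source-level public interface

A .pe file as sketched above would be a binary, automatically generated and consumed version of that idea; foo.d is of course a hypothetical file name.)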
 without it VS provides just
 text-editing features and I don't consider it an IDE like eclipse is.
The IDE features I don't want the language to depend on are in VS, so this whole side line is unimportant.
 So DWT depends on DSSS's meta data. That's a design choice of DWT not
 D. D projects can (even if some
 don't) be practically designed so that they don't need that meta data
 whereas, I will assert, C# projects can't
 do away with it.
 
 --------------
 
What I was saying was not specific for DWT but rather that _any_ reasonably big project will use such a system and it's simply not practical to do otherwise.
I assert that the above is false because...
 how would you handle a project with a hundred
 files that takes 30 min. to compile without any tool whatsoever
 except the compiler itself?
I didn't say that the only tool you can use is the compiler. I'm fine with bud/DSSS/rebuild being used. What I don't want is a language that effectively _requires_ that some config file be maintained along with the code files. I suspect that the bulk of pure D projects (including large ones) /could/ have been written so that they didn't need a dsss.conf file, and many of those that do have a dsss.conf, I'd almost bet, could be handled without it. IIRC, all that DSSS really needs is what file to start from (aside from needing to be handed the full file list at some point).
 I'm fine with any build system you want to have implemented as long
 as a tool stack can still be built that works like the current one.
 That is that it can practically:
 
 - support projects that need no external meta data
 - produce monolithic OS native binary executables
 - work with the only language aware tool being the compiler
 
 I don't expect it to require that projects be done that way and I
 wouldn't take any issue if a tool stack were built that didn't fit
 that list. What I /would/ take issue with is if the language (okay,
 or DMD in particular) were altered to the point that one or more of
 those *couldn't* be done.
 
your points are skewed IMO.
 - support projects that need no external meta data
 
this is only practical for small projects and that works the same way in both languages.
As I said, I think this is false.
 - produce monolithic OS native binary executables
 
that is unrelated to our topic. Yes .Net uses byte-code and not native executables. I never said I want this aspect to be brought to D.
Mostly I'm interested in the monolithic bit (no DLL hell!) but I was just pulling out my laundry list.
 - work with the only language aware tool being the compiler
 
again, only practical for small-mid projects in both languages.
ditto the point on 1

 In C# you can work with just the compiler, like
 you would with C or D, and there is an output format for that that is
 not an assembly.
I think we won't converge on this. I think I'm seeing a tools dependency issue that I don't like in the C# design; you see D as already just as dependent on the tools and don't see that as an issue. One of the major attractions for me to DMD is its build model, so I tend to be very conservative and resistant to change on this point.
May 21 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Reply to Yigal,
 
 BCS wrote:


 you compile a bunch of sources into an assembly all at once and you
 provide the list of other assemblies your code depends on. so the
 dependency is on the package level rather than on the file level.
 this makes so much more sense since each assembly is a self contained
 unit of functionality.
That is more or less what I thought it was. Also, that indicates that C# is stuck with the "big dumb all or nothing build", where a sub part of a program is either up to date, or rebuilt by recompiling everything in it.
 if you compile each file separately then you parse all 4 files for each object file which is completely redundant and makes little sense since you'll need to recompile all of them anyway because of their dependencies.
All of the above is (as far as D goes) an implementation detail[*]. What I'm asserting is that 1) C# more or less forces you to do things that way, and 2) the only practical way to build is from a config file
 [*] I am working very slowly on building a compiler and am thinking of 
 building it so that along with object files, it generates "public 
 export" (.pe) files that have a binary version of the public interface 
 for the module. I'd set it up so that the compiler never parses more 
 than one file per process. If you pass it more, it forks and when it 
 runs into imports, it loads the .pe files after, if needed, forking off 
 a process to generate it.
sounds like an interesting idea - basically your compiler will generate C#-like metadata for each module.
 
 without it VS provides just
 text-editing features and I don't consider it an IDE like eclipse is.
The IDE features I don't want the language to depend on are in VS, so this whole side line is unimportant.
 So DWT depends on DSSS's meta data. That's a design choice of DWT not
 D. D projects can (even if some
 don't) be practically designed so that they don't need that meta data
 whereas, I will assert, C# projects can't
 do away with it.

 --------------
What I was saying was not specific for DWT but rather that _any_ reasonably big project will use such a system and it's simply not practical to do otherwise.
I assert that the above is false because...
 how would you handle a project with a hundred
 files that takes 30 min. to compile without any tool whatsoever
 except the compiler itself?
I didn't say that the only tool you can use is the compiler. I'm fine with bud/DSSS/rebuild being used. What I don't want, is a language that effectively _requires_ that some config file be maintained along with the code files. I suspect that the bulk of pure D projects (including large ones) /could/ have been written so that they didn't need a dsss.conf file and many of those that do have a dsss.conf, I'd almost bet could be handled without it. IIRC, all that DSSS really needs is the top level file to build (though it might fall back on an explicit file list at some point).
you miss a critical issue here: DSSS/rebuild/etc can mostly be used without a config file _because_ they embed the DMDFE which generates that information (dependencies) for them. There is no conceptual difference between that and using an IDE. you just moved some functionality from the IDE to the build tool. both need to parse the code to get the dependencies.
 
 I'm fine with any build system you want to have implemented as long
 as a tool stack can still be built that works like the current one.
 That is that it can practically:

 - support projects that need no external meta data
 - produce monolithic OS native binary executables
 - work with the only language aware tool being the compiler

 I don't expect it to require that projects be done that way and I
 wouldn't take any issue if a tool stack were built that didn't fit
 that list. What I /would/ take issue with is if the language (okay,
 or DMD in particular) were altered to the point that one or more of
 those *couldn't* be done.
your points are skewed IMO.
 - support projects that need no external meta data
this is only practical for small projects and that works the same way in both languages.
As I said, I think this is false.
 - produce monolithic OS native binary executables
that is unrelated to our topic. Yes .Net uses byte-code and not native executables. I never said I want this aspect to be brought to D.
Mostly I'm interested in the monolithic bit (no DLL hell!) but I was just pulling out my laundry list.
 - work with the only language aware tool being the compiler
again, only practical for small-mid projects in both languages.
ditto the point on 1

 you can make a monolithic executable just like you would with C or D,
 and there is an output format for that that is not an assembly.
I think we won't converge on this. I think I'm seeing a tools dependency issue that I don't like in the C# design, while you see D as already just as dependent on the tools and don't see that as an issue. One of the major attractions for me to DMD is its build model so I tend to be very conservative and resistant to change on this point.
you're right that we will not converge on this. you only concentrate on the monolithic executable case and ignore the fact that in real life software is made of components, be they .Net assemblies, C/C++ dll/so/a files or D DDLs. in any of those cases you still need to manage the sub components and their dependencies. one of the reasons for "dll hell" is because c/c++ do not handle this properly and that's what Java and .net and DDL try to solve. the dependency is already there for external tools to manage this complexity.
May 21 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Yigal,

 BCS wrote:
 
 Reply to Yigal,
 
 if you compile each file separately then you parse all 4 files for
 each object file which is completely redundant and makes little
 sense since you'll need to recompile all of them anyway because of
 their dependencies.
 
All of the above is (as far as D goes) an implementation detail[*]. What I don't like is that with C#, 1) things must be done that way and 2) the only practical way to build is from a config file
I disagree, see below:
 [*] I am working very slowly on building a compiler and am thinking
 of building it so that along with object files, it generates "public
 export" (.pe) files that have a binary version of the public
 interface for the module. I'd set it up so that the compiler never
 parses more than one file per process. If you pass it more, it forks
 and when it runs into imports, it loads the .pe files after, if
 needed, forking off a process to generate it.
 
sounds like an interesting idea - basically your compiler will generate the same kind of metadata that C# keeps in its assemblies.
Maybe that's the confusion: No it won't! The metadata that I'm referring to is the list of files that the compiler needs to look at. In D this information can be derived from the text of the import statements in the .d files (well, it also needs the import paths). In C# this can't be done even within a single assembly. Without me explicitly telling the compiler what files to look in, it can't find anything! It can't even just search the local dir for files that have what it's looking for because I could have old copies of the files laying around that shouldn't be used.
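To illustrate with a trivial example (module and file names made up):

    // pkg/util.d
    module pkg.util;
    int answer() { return 42; }

    // main.d -- this import line alone tells a tool that pkg/util.d
    // (found via the compiler's import path) is part of the build
    import pkg.util;

    void main()
    {
        auto x = answer();
    }

"dmd main.d pkg/util.d" builds it, and the second file name was derivable from the first file's import statement; no project file was consulted.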
 I didn't say that the only tool you can use is the compiler. I'm fine
 with bud/DSSS/rebuild being used. What I don't want, is a language
 that effectively _requires_ that some config file be maintained
 along with the code files. I suspect that the bulk of pure D projects
 (including large ones) /could/ have been written so that they didn't
 need a dsss.conf file and many of those that do have a dsss.conf, I'd
 almost bet could be handled without it. IIRC, all that DSSS really needs
 is the top level file to build (though it might fall back on an explicit
 file list at some point).
 
you miss a critical issue here: DSSS/rebuild/etc can mostly be used without a config file _because_ they embed the DMDFE which generates that information (dependencies) for them. There is no conceptual difference between that and using an IDE. you just moved some functionality from the IDE to the build tool. both need to parse the code to get the dependencies.
And that is my point exactly.
 I think we won't converge on this.
 
 I think I'm seeing a tools dependency issue that I don't like in the
 C# design, while you see D as already just as dependent on the tools
 and don't see that as an issue.
 
 One of the major attractions for me to DMD is its build model so I
 tend to be very conservative and resistant to change on this point.
 
you're right that we will not converge on this. you only concentrate on the monolithic executable case and ignore the fact that in real life assemblies, C/C++ dll/so/a or D DDLs.
Yes, it's the common case, but that doesn't make it the right case. See below.
 in any of those cases you still need to manage the sub components and
 their dependencies.
 one of the reasons for "dll hell" is because c/c++ do not handle this
 properly and that's what Java and .net and DDL try to solve. the
 dependency is already there for external tools to manage this
 complexity.
I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type of system. The only unavoidable reasons to use them that I see are:

1) you are forced to use code that can't be had at compile time (rare outside of plugins and they don't count because they are not your code)
2) you have lots of code that is mostly never run and you can't load it all (and that sounds like you have bigger problems)
3) you are running into file size limits (outside of something like a kernel image, this is unlikely)
4) booting takes too long (and that says you're doing something else wrong)

It is my strongly held opinion that the primary argument for dlls and friends, code sharing, is attempting to solve a completely intractable problem. As soon as you bring in versioning, installers and uninstallers, the problem becomes flat out impossible to solve. (the one exception is for low level system things like KERNEL32.DLL and stdc*.so)

In this day and age where HDD's are ready to be measured in TB and people ask how many Gigs of RAM you have, *who cares* about code sharing?
May 21 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 21 May 2009 23:07:32 +0400, BCS <ao pathlink.com> wrote:

 Reply to Yigal,

 BCS wrote:

 Reply to Yigal,

 if you compile each file separately then you parse all 4 files for
 each object file which is completely redundant and makes little
 sense since you'll need to recompile all of them anyway because of
 their dependencies.
All of the above is (as far as D goes) an implementation detail[*]. What I don't like is that with C#, 1) things must be done that way and 2) the only practical way to build is from a config file
I disagree, see below:
 [*] I am working very slowly on building a compiler and am thinking
 of building it so that along with object files, it generates "public
 export" (.pe) files that have a binary version of the public
 interface for the module. I'd set it up so that the compiler never
 parses more than one file per process. If you pass it more, it forks
 and when it runs into imports, it loads the .pe files after, if
 needed, forking off a process to generate it.
sounds like an interesting idea - basically your compiler will generate the same kind of metadata that C# keeps in its assemblies.
Maybe that's the confusion: No it won't! The metadata that I'm referring to is the list of files that the compiler needs to look at. In D this information can be derived from the text of the import statements in the .d files (well, it also needs the import paths). In C# this can't be done even within a single assembly. Without me explicitly telling the compiler what files to look in, it can't find anything! It can't even just search the local dir for files that have what it's looking for because I could have old copies of the files laying around that shouldn't be used.
 I didn't say that the only tool you can use is the compiler. I'm fine
 with bud/DSSS/rebuild being used. What I don't want, is a language
 that effectively _requires_ that some config file be maintained
 along with the code files. I suspect that the bulk of pure D projects
 (including large ones) /could/ have been written so that they didn't
 need a dsss.conf file and many of those that do have a dsss.conf, I'd
 almost bet could be handled without it. IIRC, all that DSSS really needs
 is the top level file to build (though it might fall back on an explicit
 file list at some point).
you miss a critical issue here: DSSS/rebuild/etc can mostly be used without a config file _because_ they embed the DMDFE which generates that information (dependencies) for them. There is no conceptual difference between that and using an IDE. you just moved some functionality from the IDE to the build tool. both need to parse the code to get the dependencies.
And that is my point exactly.
 I think we won't converge on this.
 I think I'm seeing a tools dependency issue that I don't like in the
 C# design, while you see D as already just as dependent on the tools
 and don't see that as an issue.
  One of the major attractions for me to DMD is its build model so I
 tend to be very conservative and resistant to change on this point.
you're right that we will not converge on this. you only concentrate on the monolithic executable case and ignore the fact that in real life software is made of components, be they .Net assemblies, C/C++ dll/so/a files or D DDLs.
Yes, it's the common case, but that doesn't make it the right case. See below.
 in any of those cases you still need to manage the sub components and
 their dependencies.
 one of the reasons for "dll hell" is because c/c++ do not handle this
 properly and that's what Java and .net and DDL try to solve. the
 dependency is already there for external tools to manage this
 complexity.
I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type of system. The only unavoidable reasons to use them that I see are:

1) you are forced to use code that can't be had at compile time (rare outside of plugins and they don't count because they are not your code)
2) you have lots of code that is mostly never run and you can't load it all (and that sounds like you have bigger problems)
3) you are running into file size limits (outside of something like a kernel image, this is unlikely)
4) booting takes too long (and that says you're doing something else wrong)
5) The most common case - your program relies on some third-party middleware that doesn't provide any source code.
 It is my strongly held opinion that the primary argument for dlls and  
 friends, code sharing, is attempting to solve a completely intractable  
 problem. As soon as you bring in versioning, installers and  
 uninstallers, the problem becomes flat out impossible to solve. (the one  
 exception is for low level system things like KERNEL32.DLL and stdc*.so)

 In this day and age where HDD's are ready to be measured in TB and  
 people ask how many Gigs of RAM you have, *who cares* about code sharing?
I don't. But my Windows 7 does - it stores lots *lots* *LOTS* of copies of the same .dll in %WINDIR%\WinSxS so that anyone could use them - and its size grows up to 10 Gigs and more! See http://blogs.msdn.com/e7/archive/2008/11/19/disk-space.aspx for details
May 21 2009
parent BCS <ao pathlink.com> writes:
Reply to Denis,

 I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type
 of system. The only unavoidable reasons to use them that I see are:
 
 1) you are forced to use code that can't be had at compile time (rare
 outside of plugins and they don't count because they are not your code)
 2) you have lots of code that is mostly never run and you can't load it
 all (and that sounds like you have bigger problems)
 3) you are running into file size limits (outside of something like a
 kernel image, this is unlikely)
 4) booting takes too long (and that says you're doing something else
 wrong)
5) The most common case - your program relies on some third-party middleware that doesn't provide any source code.
They /should/ ship static libs as well IMNSHO. Also, the same aside as for low level system libs applies.
May 21 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:

 I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type
 of system. The only unavoidable reasons to use them that I see are:
 
 1) you are forced to use code that can't be had at compile time (rare 
 outside of plugins and they don't count because they are not your code)
 2) you have lots of code that is mostly never run and you can't load it 
 all (and that sounds like you have bigger problems)
 3) you are running into file size limits (outside of something like a 
 kernel image, this is unlikely)
 4) booting takes too long (and that says you're doing something else wrong)
 
 It is my strongly held opinion that the primary argument for dlls and 
 friends, code sharing, is attempting to solve a completely intractable 
 problem. As soon as you bring in versioning, installers and 
 uninstallers, the problem becomes flat out impossible to solve. (the one 
 exception is for low level system things like KERNEL32.DLL and stdc*.so)
 
 In this day and age where HDD's are ready to be measured in TB and 
 people ask how many Gigs of RAM you have, *who cares* about code sharing?
 
 
so, in your opinion Office, photoshop, adobe acrobat, can all be provided as monolithic executables? that's just ridiculous.

My work uses this monolithic approach for some programs and this brings so much pain that you wouldn't believe. Now we're trying to slowly move away from this retarded model. I'm talking from experience here - the monolithic approach does NOT work. just so you'd understand the scale I'm talking about - our largest executable is 1.5 Gigs in size.

you're wrong on both accounts, DLL type systems are not only the common case, they are the correct solution. the "DLL HELL" you're so afraid of is mostly solved by using jars/assemblies (smart dlls) that contain meta-data such as versions. this problem is also solved on Linux systems that use package managers, like Debian's APT.

monolithic design like you suggest is in fact bad design that leads to things like - Windows Vista running slower on my 8-core machine than Windows XP on my extremely weak laptop.
May 21 2009
next sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
Yigal Chripun wrote:
 just so you'd understand the scale I'm talking about - our largest
 executable is 1.5 Gigs in size.
How is 1.5 GB of dlls better than a 1.5 GB executable? (And don't forget, removing dead code across dll boundaries is a lot more difficult than removing it within a single executable, so you're more likely to have 3 GB of dlls.)
 you're wrong on both accounts, DLL type systems are not only the common
 case, they are the correct solution.
 the "DLL HELL" you're so afraid of is mostly solved by using
 jars/assemblies (smart dlls) that contain meta-data such as versions.
 this problem is also solved on Linux systems that use package managers,
 like Debian's APT.
You have a curious definition of "solved". Package managers work (sometimes, sort of) so long as you get all of your software from a single source and you never need a newer version of your software that is not yet available in package form. I've got programs that I've almost given up on deploying at all because of assembly hell. Plain old DLLs weren't anywhere near as bad as that.

My favorite deployment system is the application bundle under OS X. It's a directory that looks like a file. Beneath the covers it has frameworks and configuration files and multiple executables and all that crap, but to the user, it looks like a single file. You can copy it, rename it, move it (on a single computer or between computers), even delete it, and it just works. Too bad the system doesn't work under any other OS.

-- 
Rainer Deyke - rainerd eldwood.com
May 21 2009
parent BCS <none anon.com> writes:
Hello Rainer,

 My favorite deployment system is the application bundle under OS X.
 It's a directory that looks like a file.  Beneath the covers it has
 frameworks and configuration files and multiple executables and all
 that
 crap, but to the user, it looks like a single file.  You can copy it,
 rename it, move it (on a single computer or between computers), even
 delete it, and it just works.  Too bad the system doesn't work under
 any
 other OS.
Oh man would I love to have that :) I've day dreamed of a system a lot like that. One thing I'd mandate if I ever designed my ideal system is that installation /in total/ is plunking in the dir, and removal is simply deleting it. Once it's gone, it's gone. Nothing may remain that can in ANY WAY affect other apps. That would imply that after each boot nothing is installed and everything is installed on launch.
May 22 2009
prev sibling parent BCS <none anon.com> writes:
Hello Yigal,

 BCS wrote:
 
 It is my strongly held opinion that the primary argument for dlls and
 friends, code sharing, is attempting to solve a completely
 intractable problem. As soon as you bring in versioning, installers
 and uninstallers, the problem becomes flat out impossible to solve.
 (the one exception is for low level system things like KERNEL32.DLL
 and stdc*.so)
 
so, in your opinion Office, photoshop, adobe acrobat, can all be provided as monolithic executables? that's just ridiculous.
 My work uses this monolithic approach for some programs and this
 brings so much pain that you wouldn't believe.
How exactly?
 just so you'd understand the scale I'm talking about - our largest
 executable is 1.5 Gigs in size.
 you're wrong on both accounts, DLL type systems are not only the
 common case, they are the correct solution.
I didn't say they aren't common. I said it's a bad idea IMO.
 the "DLL HELL" you're so afraid of is mostly solved by using
 jars/assemblies (smart dlls) that contain meta-data such as versions.
 this problem is also solved on Linux systems that use package
 managers, like Debian's APT.
If you ignore system libraries like .NET itself, I'd almost bet that if you look at those systems long enough, from a practical standpoint they are almost the same as installing dll/so files to be used only by one program. That is that the average number of programs/applications that depend on any given file is 1. And as I already pointed out, I'll burn disk space to get the reliability that static linkage gets me. I seem to recall running into this issue with .NET assemblies and .so files within the last year.
 monolithic design like you suggest is in fact bad design that leads to
 things like - Windows Vista running slower on my 8-core machine than
 Window XP on my extremely weak laptop.
If the same design runs slower with static linkage than with dynamic linkage, then there is something wrong with the OS. I can say that with confidence because everything that a static version needs to do, the dynamic version will also do, and then a pile more.
May 21 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Yigal Chripun wrote:
 What I was saying was not specific for DWT but rather that _any_ 
 reasonably big project will use such a system and it's simply not 
 practical to do otherwise. how would you handle a project with a hundred 
  files that takes 30 min. to compile without any tool whatsoever except 
 the compiler itself?
Make? And if you're smart, a version control system. (Whether you use an IDE or not.)
May 21 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Georg Wrede wrote:
 Yigal Chripun wrote:
 What I was saying was not specific for DWT but rather that _any_ 
 reasonably big project will use such a system and it's simply not 
 practical to do otherwise. how would you handle a project with a 
 hundred  files that takes 30 min. to compile without any tool 
 whatsoever except the compiler itself?
Make? And if you're smart, a version control system. (Whether you use an IDE or not.)
Make _is_ a build tool
May 23 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Yigal Chripun wrote:
 Georg Wrede wrote:
 Yigal Chripun wrote:
 What I was saying was not specific for DWT but rather that _any_ 
 reasonably big project will use such a system and it's simply not 
 practical to do otherwise. how would you handle a project with a 
 hundred  files that takes 30 min. to compile without any tool 
 whatsoever except the compiler itself?
Make? And if you're smart, a version control system. (Whether you use an IDE or not.)
Make _is_ a build tool
Yes. But since it's on every Unix since almost 40 years back, it doesn't count here. :-) Besides, it has tons of other uses, too. One might as well say that a text editor is a build tool. You construct (or erect) software with it. ;-)
May 23 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Georg Wrede wrote:
 Yigal Chripun wrote:
 Georg Wrede wrote:
 Yigal Chripun wrote:
 What I was saying was not specific for DWT but rather that _any_ 
 reasonably big project will use such a system and it's simply not 
 practical to do otherwise. how would you handle a project with a 
 hundred  files that takes 30 min. to compile without any tool 
 whatsoever except the compiler itself?
Make? And if you're smart, a version control system. (Whether you use an IDE or not.)
Make _is_ a build tool
Yes. But since it's on every Unix since almost 40 years back, it doesn't count here. :-) Besides, it has tons of other uses, too. One might as well say that a text editor is a build tool. You construct (or erect) software with it. ;-)
Nope. it does count as an external build tool
May 23 2009
parent reply BCS <none anon.com> writes:
Hello Yigal,

 Georg Wrede wrote:
 
 Yigal Chripun wrote:
 Make _is_ a build tool
 
Yes. But since it's on every Unix since almost 40 years back, it doesn't count here. :-) Besides, it has tons of other uses, too. One might as well say that a text editor is a build tool. You construct (or erect) software with it. ;-)
Nope. it does count as an external build tool
OK and so can bash because it can run scripts. But that's not the point. Neither make nor VS's equivalent is what this thread was about. At least not where I was involved. My point is that C# requires the use (without regard to any specific IDE) of some kind of external metadata file that contains information that can't be derived from the source code itself, whereas with D, no such metadata is *needed*. If you wanted, you could build a tool to take D source code and generate a makefile or a bash build script from it
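As a rough sketch of that idea (not a real tool; it only understands plain "import a.b.c;" lines, assumes the usual module-name-to-path layout, and assumes the file name ends in ".d"):

    import std.algorithm, std.array, std.stdio, std.string;

    void main(string[] args)
    {
        string src = args[1];          // e.g. "main.d"
        string[] deps;
        foreach (line; File(src).byLineCopy())
        {
            string l = line.strip();
            // naive on purpose: skips multi-module, selective and renamed imports
            if (l.startsWith("import ") && l.endsWith(";"))
            {
                string mod = l["import ".length .. $ - 1].strip();
                deps ~= mod.replace(".", "/") ~ ".d";   // module a.b.c -> a/b/c.d
            }
        }
        // emit a make rule: the object file depends on the source and its imports
        writefln("%s.o: %s %s", src[0 .. $ - 2], src, deps.join(" "));
    }

Run over each file of a project, that produces a usable makefile, all of it derived from the source alone.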
May 23 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
BCS wrote:
 Hello Yigal,
 
 Georg Wrede wrote:

 Yigal Chripun wrote:
 Make _is_ a build tool
Yes. But since it's on every Unix since almost 40 years back, it doesn't count here. :-) Besides, it has tons of other uses, too. One might as well say that a text editor is a build tool. You construct (or erect) software with it. ;-)
Nope. it does count as an external build tool
OK and so can bash because it can run scripts.
No, the main purpose of make is to build software. You probably wouldn't think to use a makefile to automate converting flac files to ogg files, for instance. Or look at bashburn -- it has a user interface (albeit using text menus rather than graphics). You might be able to do that with a makefile, but it would be seriously awkward, and you'd mainly be using shell scripting. And bash does not have any special features to assist in building software.
 But that's not the point. Neither make nor VS's equivalent is what this
 thread was about. At least not where I was involved. My point is that
 C# requires the use (without regard to any specific IDE) of some kind of
 external metadata file that contains information that can't be derived
 from the source code itself, whereas with D, no such metadata is
 *needed*. If you wanted, you could build a tool to take D source code
 and generate a makefile or a bash build script from it
You can compile C# code without a project file, assuming there exists a directory containing all and only those source files that should end up in the resulting assembly. If you follow the usual conventions, your directory structure will match your namespaces besides. But this is not enforced.
May 24 2009
parent reply BCS <none anon.com> writes:
Hello Christopher,

 BCS wrote:
 
 But that's not the point. Neither make nor VS's equivalent is what
 this thread was about. At least not where I was involved. My point is
 that C# requires the use (without regard to any specific IDE) of some
 kind of external metadata file that contains information that can't be
 derived from the source code itself, whereas with D, no such metadata
 is *needed*. If you wanted, you could build a tool to take D source
 code and generate a makefile or a bash build script from it
 
 You can compile C# code without a project file, assuming there exists a directory containing all and only those source files that should end up in the resulting assembly.
I'm /not/ willing to assume that (because all too often it's not true) and you also need the list of other assemblies that should be included.
May 24 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Hello Christopher,
 
 BCS wrote:

 But that's not the point. Neither make nor VS's equivalent is what
 this thread was about. At least not where I was involved. My point is
 that C# requires the use (without regard to any specific IDE) of some
 kind of external metadata file that contains information that can't be
 derived from the source code itself, whereas with D, no such metadata
 is *needed*. If you wanted, you could build a tool to take D source
 code and generate a makefile or a bash build script from it
You can compile C# code without a project file, assuming there exists a directory containing all and only those source files that should end up in the resulting assembly.
I'm /not/ willing to assume that (because all too often it's not true) and you also need the list of other assemblies that should be included.
you can't create a standalone executable in D just by parsing the D source files (for all the imports) if you need to link in external libs. you need to at least specify the lib name if it's on the linker's search path or provide the full path otherwise. Same thing with assemblies. IMO making the dependency list explicit in a config file is the better default.
May 24 2009
parent reply BCS <none anon.com> writes:
Hello Yigal,


 you can't create a standalone executable in D just by parsing the D
 source files (for all the imports) if you need to link in external libs.
 you need to at least specify the lib name if it's on the linker's
 search path or provide the full path otherwise.
pragma(lib, ...); //?
 Same thing with assemblies. IMO making the dependency list explicit in
 a config file is the better default.
I think we have found where we will have to disagree. Part of my reasoning is that in the normal case, for practical reasons, that file will have to be maintained by an IDE, thus /requiring/ development to be in an IDE of some kind. In D, that data can normally be part of the source code, and only in unusual cases does it need to be formally codified.
May 24 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Hello Yigal,
 

 you can't create a standalone executable in D just by parsing the D
 source files (for all the imports) if you need to link in external libs.
 you need to at least specify the lib name if it's on the linker's
 search path or provide the full path otherwise.
pragma(lib, ...); //?
AFAIK, not all compilers implement this. In general though this is a bad idea. why would you want to embed such outside data inside your code? info needed for building your task should not be part of the code. what if I want to rename the lib, do I have to recompile everything? what if I don't have the source? what if I want to change the version? what if I want to switch a vendor for this lib?
 
 Same thing with assemblies. IMO making the dependency list explicit in
 a config file is the better default.
I think we have found where we will have to disagree. Part of my reasoning is that in the normal case, for practical reasons, that file will have to be maintained by an IDE, thus /requiring/ development to be in an IDE of some kind. In D, that data can normally be part of the source code, and only in unusual cases does it need to be formally codified.
May 25 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Yigal,

 BCS wrote:
 
 Hello Yigal,
 

 you can't create a standalone executable in D just by parsing the D
 source files (for all the imports) if you need to link in external
 libs.
 you need to at least specify the lib name if it's on the linker's
 search path or provide the full path otherwise.
pragma(lib, ...); //?
AFAIK, not all compilers implement this. In general though this is a bad idea. why would you want to embed such outside data inside your code?
Because it's needed to build the code
 info needed for building your task should not be part of the code.
IMO it should. Ideally it should be available in the code in a form tools can read. At a minimum, it should be in the comment header. The only other choice is placing it outside your code and we have already covered why I think that is a bad idea.
 what if I want to rename the lib,
So you rename the lib and whatever references to it (inside the code or outside) end up needing to be updated. Regardless, you will need to update something by hand or have a tool do it for you. I see nothing harder about updating it in the code than outside the code.
 do I have to recompile everything?
Nope. pragma(lib, ...) just passes a static lib to the linker and doesn't have any effect at runtime. (if you are dealing with .dll/.so libraries then you link in a export .lib with a pragma or load them manually and don't even worry about it at all)
 what if I don't have the source?
It's pointing at a static library so source doesn't matter. If you are working from source then you don't need the pragma and what matters is DMD's import path (if it is an unrelated code tree in which case the path can be developer specific and needs to be set up per system).
 what if I want to change the version?
In that case you change the pragma. (again assuming static libs and the same side note for dynamic libs)
 what if I want to switch a vendor for this lib?
I have never heard of this being possible without major changes in the calling code so it doesn't matter.
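To make the mechanics concrete, a minimal sketch (the lib and function names are made up):

    // app.d -- the source itself names the library it links against
    pragma(lib, "foo");            // DMD passes foo.lib / libfoo.a to the linker

    extern (C) int foo_version();  // hypothetical function exported by the lib

    void main()
    {
        auto v = foo_version();
    }

Swapping the library then means putting a different foo library on the linker's search path; the pragma only names it.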
May 25 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
BCS wrote:
 Reply to Yigal,
 
 BCS wrote:

 Hello Yigal,


 you can't create a standalone executable in D just by parsing the D
 source files (for all the imports) if you need to link in external
 libs.
 you need to at least specify the lib name if it's on the linker's
 search path or provide the full path otherwise.
pragma(lib, ...); //?
AFAIK, not all compilers implement this. In general though this is a bad idea. why would you want to embed such outside data inside your code?
Because it's needed to build the code
 info needed for building your task should not be part of the code.
IMO it should. Ideally it should be available in the code in a form tools can read. At a minimum, it should be in the comment header. The only other choice is placing it outside your code and we have already covered why I think that is a bad idea.
 what if I want to rename the lib,
So you rename the lib and whatever references to it (inside the code or outside) end up needing to be updated. Regardless, you will need to update something by hand or have a tool do it for you. I see nothing harder about updating it in the code than outside the code.
 do I have to recompile everything?
Nope. pragma(lib, ...) just passes a static lib to the linker and doesn't have any effect at runtime. (if you are dealing with .dll/.so libraries then you link in a export .lib with a pragma or load them manually and don't even worry about it at all)
 what if I don't have the source?
It's pointing at a static library so source doesn't matter. If you are working from source then you don't need the pragma and what matters is DMD's import path (if it is an unrelated code tree in which case the path can be developer specific and needs to be set up per system).
 what if I want to change the version?
In that case you change the pragma. (again assuming static libs and the same side note for dynamic libs)
 what if I want to switch a vendor for this lib?
I have never heard of this being possible without major changes in the calling code so it doesn't matter.
What I was trying to say is that you're hardcoding the lib name and version inside the code. I see two problems with this: if the pragma is in my code then I need to re-compile my code if I want to edit the pragma (rename lib, change version, change vendor, etc...) if the pragma is in some 3rd party component which I don't have the source for then I can't change the pragma. either way, it conflicts with my work-flow and goals. I do not wish to recompile a 1.5GB standalone executable if I just changed a minor version of a lib.

IIRC, the math lib in C/C++ comes in three flavors so you can choose your trade-off (speed or accuracy) and the only thing you need to do is just link the flavor you want in your executable.

you seem keen on combining the build process with compilation which is in my experience a very bad thing. it may simplify your life for your small projects but as I was telling you before it's a pain in the neck for the scale of projects I work on. I don't get why you refuse to see that. what you suggest is _not_ a good solution for me.
May 25 2009
next sibling parent reply grauzone <none example.net> writes:
 I do not wish to recompile a 1.5GB standalone executable if I just 
 changed a minor version of a lib.
Can you tell me why that application needs to be that big, and can't be split into several smaller processes?
May 26 2009
parent BCS <none anon.com> writes:
Hello grauzone,

 I do not wish to recompile a 1.5GB standalone executable if I just
 changed a minor version of a lib.
 
Can you tell me why that application needs to be that big, and can't be split into several smaller processes?
I'm more interested in how you got 1.5GBs of executable.
May 26 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Yigal,

 BCS wrote:
 
 Reply to Yigal,
 
 BCS wrote:
 
 Hello Yigal,
 

 you can't create a standalone executable in D just by parsing the D
 source files (for all the imports) if you need to link in external libs.
 you need to at least specify the lib name if it's on the linker's
 search path or provide the full path otherwise.
pragma(lib, ...); //?
AFAIK, not all compilers implement this. In general though this is a bad idea. why would you want to embed such outside data inside your code?
Because it's needed to build the code
 info needed for building your task should not be part of the code.
 
IMO it should. Ideally it should be available in the code in a form tools can read. At a minimum, it should be in the comment header. The only other choice is placing it outside your code and we have already covered why I think that is a bad idea.
 what if I want to rename the lib,
 
So you rename the lib and whatever references to it (inside the code or outside) end up needing to be updated. Regardless, you will need to update something by hand or have a tool do it for you. I see nothing harder about updating it in the code than outside the code.
 do I have to recompile everything?
 
Nope. pragma(lib, ...) just passes a static lib to the linker and doesn't have any effect at runtime. (if you are dealing with .dll/.so libraries then you link in a export .lib with a pragma or load them manually and don't even worry about it at all)
 what if I don't have the source?
 
It's pointing at a static library so source doesn't matter. If you are working from source then you don't need the pragma and what matters is DMD's import path (if it is an unrelated code tree in which case the path can be developer specific and needs to be set up per system).
 what if I want to change the version?
 
In that case you change the pragma. (again assuming static libs and the same side note for dynamic libs)
 what if I want to switch a vendor for this lib?
 
I have never heard of this being possible without major changes in the calling code so it doesn't matter.
What I was trying to say is that you're hardcoding the lib name and version inside the code. I see two problems with this: if the pragma is in my code then I need to re-compile my code if I want to edit the pragma (rename lib, change version, change vendor, etc...) if the pragma is in some 3rd party component which I don't have the source for then I can't change the pragma. either way, it conflicts with my work-flow and goals. I do not wish to recompile a 1.5GB standalone executable if I just changed a minor version of a lib.
I see your point but I think it is invalid. For starters, I could be wrong but I think that the use of pragma(lib,) can't be detected in the object code; I think it just instructs DMD to pass the lib on to the linker when it gets called by DMD. If I am wrong about that I still think it doesn't matter because (as far as static libraries go) I think it would be a very BAD idea to try and switch them out from under a closed source lib. Third, if you really want to go mucking around with those internals, you can always copy the new lib over the old one.
 
 IIRC, the math lib in C/C++ comes in three flavors so you can choose
 your trade-off (speed or accuracy) and the only thing you need to do
 is just link the flavor you want in your executable.
Everything needs a math lib so there will be a default. I'm not willing to second guess the original programmer if they chose to switch to another lib. The same goes for other libs as well. If you start switching to libs that the lib's programmer doesn't explicitly support, you're already on your own and you have bigger problems than what I'm talking about.
 you seem keen on combining the build process with compilation which is
 in my experience a very bad thing. it may simplify your life for your
 small projects but as I was telling you before it's a pain in the neck
 for the scale of projects I work on. I don't get why you refuse to see
 that. what you suggest is _not_ a good solution for me.
What I want is a language where most of the time you build a project from only the information in the source code. What I don't want is a language where the only way to keep track of the information you need to build a project is with an external data file. I don't want that because the only practical way to do that is to _force_ the programmer to use an IDE and have it maintain that file.
May 26 2009
parent reply Jussi Jumppanen <jussij zeusedit.com> writes:
BCS Wrote:

 What I want is a language where most of the time you build 
 a project from only the information in the source code. 
You can build this Simple.cs file:

    using System;
    using System.Windows.Forms;

    namespace SimpleApplication
    {
        static class Program
        {
            [STAThread]
            static void Main()
            {
                MessageBox.Show("Hello!", "Simple Application",
                    MessageBoxButtons.OK, MessageBoxIcon.Information);
            }
        }
    }

to create a Simple.exe using nothing but this command line:

    csc.exe /r:System.dll; D:\temp\simple.cs
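(For comparison, the D analogue is just as direct - a sketch, file name made up:

    // simple.d
    import std.stdio;

    void main()
    {
        writefln("Hello!");
    }

built with nothing but "dmd simple.d".)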
 What I don't want is a language where the only way to keep track 
 of the information you need to build a project, is with an external 
 data file. 
People have been developing projects using an "external data file" for decades. It's called the make file.
 I don't want that because the only practical way to do that is _force_ 
 the programmer to use an IDE and have it maintain that file.
Who says you're forced to use an IDE to write the code? MSBuild.exe is nothing more than Microsoft's replacement for make.exe. It is nothing more than a version of make.exe that takes XML make files as its input.
May 26 2009
parent BCS <none anon.com> writes:
Hello Jussi,

 BCS Wrote:
 
 What I want is a language where most of the time you build a project
 from only the information in the source code.
 
You can build this Simple.cs file:
[...]
 to create a Simple.exe using nothing but this command line:
 
 csc.exe /r:System.dll; D:\temp\simple.cs
Most any language has what I want for single file programs. But when you start getting dozens of files in a project (including some files mixed into the working directory that shouldn't be included) it breaks down.
 
 What I don't want is a language where the only way to keep track of
 the information you need to build a project, is with an external data
 file.
 
People have been developing projects using an "external data file" for decades. It's called the make file.
makefiles are intended to be edited by hand. I'd rather not need make at all until I start having extra language build steps (yacc, rpm/deb generation, regression tests, etc.).
 
 I don't want that because the only practical way to do that is
 _force_ the programmer to use an IDE and have it maintain that file.
 
 Who says you're forced to use an IDE to write the code?
The only practical way to keep track of what files do and do not get compiled is a .csproj file and the only reasonable way to maintain them is VS or the equivalent.
 MSBuild.exe is nothing more than Microsoft's replacement for make.exe.
 
 It is nothing more than a version of make.exe that takes XML make
 files as its input.
 
Nuf said.
May 26 2009
prev sibling next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
Yigal Chripun wrote:
 IMO, designing the language to support this better work-flow is a good
 decision made by MS, and D should follow it instead of trying to get
 away without an IDE.
Support or enable.. sure. Require, absolutely not.

I've become convinced that the over-reliance on auto-complete and other IDE features has led to a generation of developers that really don't know their language / environment. The number of propagated typos due to first-time mis-typing of a name (I see lenght way too often at work) is such that I wanna ban the use of auto-complete, but I'd get lynched.

If the application's library space is so vast or random that you can't keep track of where things are, a tool that helps you type in code is papering over a more serious problem.

My other problem with IDE's, such as eclipse, is that it's such an all or nothing investment. You can't really just use part of it. You must buy in to its editor, its interface with your SCM, its strictures of indentation style, etc. Trying to deviate from any of it is such a large pain that it's just not worth it -- more so as the team working on a project gets larger.

Sorry, I'll stop ranting.

Sigh,
Brad
May 17 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
I disagree on all your points.
read inside for comments.

Brad Roberts wrote:
 Yigal Chripun wrote:
 IMO, designing the language to support this better work-flow is a good
 decision made by MS, and D should follow it instead of trying to get
 away without an IDE.
Support or enable.. sure. Require, absolutely not. I've become convinced that the over-reliance on auto-complete and other IDE features has led to a generation of developers that really don't know their language / environment. The number of propagated typos due to first-time mis-typing of a name (I see lenght way too often at work) is such that I wanna ban the use of auto-complete, but I'd get lynched.
first, typos - eclipse has a built-in spell checker so all those "lenght"s will be underlined with an orange squiggly line.

regarding the more general comment of bad developers - you see a connection where there is none. A friend of mine showed me a graph online that clearly shows the inverse correlation between the number of pirates in the world and global warming. (thanks to the Somali pirates, that means the global effort to reduce emissions somewhat helps)

A better analogy would be automotive: if you're Michael Schumacher then an automatic transmission will just slow you down, but for the rest of the population it helps improve driving. the transmission doesn't make the driver good or bad, but it does help the majority of drivers to improve their driving skills.

there are bad programmers that use a text editor as much as the ones that use an IDE. there are also good programmers on both sides. An IDE doesn't create bad programmers, rather the IDE helps bad programmers to write less buggy code.
 
 If the application's library space is so vast or random that you can't
 keep track of where things are, a tool that helps you type in code is
 papering over a more serious problem.
false again, using a tool that helps writing code does not mean there's a design problem in the code. auto-complete prevents typos, for instance, and that has nothing to do with anything you said. For me, many times I remember that there's a method that does something I need but I can't remember if it's called fooBar(int, char) or barFoo(char, int) or any other permutation. you'd need to go check the documentation; I save time by using the auto-complete. Another use case is when I need to use some API: I can get the list of methods with the documentation by using the auto-complete feature.
 
 My other problem with IDE's, such as eclipse, is that it's such an all or
 nothing investment.  You can't really just use part of it.  You must buy
 in to its editor, its interface with your SCM, its strictures of
 indentation style, etc.  Trying to deviate from any of it is such a large
 pain that it's just not worth it -- more so as the team working on a
 project gets larger.
completely wrong. You forget - Eclipse is just a plug-in engine with default plug-ins that implement a Java IDE.

editor: prefer vim/emacs? there are eclipse plugins that implement both.

SCM: there are _tons_ of SCM plug-ins! just use whatever you prefer. I use git and there's a neat UI for that. *But*, sometimes I prefer git's command line. what to do? no problem, I can open a terminal window inside eclipse and run any command I want! I work on unix and my local eclipse (on windows) can open remote files on the unix machine. eclipse does everything for me including giving me a shell to run remote commands.

indentation style: there's nothing easier. go to eclipse properties. for each language you have installed you can configure "styles" and eclipse will indent, color, and format your code in whatever way you want.

you don't have to like or use eclipse, or any other IDE, but if you are not familiar with the tool, don't provide mis-information.
 
 Sorry, I'll stop ranting.
 
 Sigh,
 Brad
May 17 2009
parent Brad Roberts <braddr puremagic.com> writes:
Yigal Chripun wrote:
 I disagree on all your points.
 read inside for comments.
 
 Brad Roberts wrote:
 Yigal Chripun wrote:
 IMO, designing the language to support this better work-flow is a good
 decision made by MS, and D should follow it instead of trying to get
 away without an IDE.
Support or enable.. sure. Require, absolutely not. I've become convinced that the over-reliance on auto-complete and other IDE features has led to a generation of developers that really don't know their language / environment. The number of propagated typos due to first-time mis-typing of a name (I see lenght way too often at work) is such that I wanna ban the use of auto-complete, but I'd get lynched.
first, typos - eclipse has a built-in spell checker so all those "lenght"s will be underlined with an orange squiggly line. regarding the more general comment of bad developers - you see a connection where there is none. A friend of mine showed me a graph online that clearly shows the inverse correlation between the number of pirates in the world and global warming. (thanks to the Somali pirates, that means the global effort to reduce emissions somewhat helps) A better analogy would be automotive: if you're Michael Schumacher then an automatic transmission will just slow you down, but for the rest of the population it helps improve driving. the transmission doesn't make the driver good or bad, but it does help the majority of drivers to improve their driving skills. there are bad programmers that use a text editor as much as the ones that use an IDE. there are also good programmers on both sides. An IDE doesn't create bad programmers, rather the IDE helps bad programmers to write less buggy code.
 If the application's library space is so vast or random that you can't
 keep track of where things are, a tool that helps you type in code is
 papering over a more serious problem.
false again, using a tool that helps writing code does not mean there's a design problem in the code. auto-complete prevents typos, for instance, and that has nothing to do with anything you said. For me, many times I remember that there's a method that does something I need but I can't remember if it's called fooBar(int, char) or barFoo(char, int) or any other permutation. you'd need to go check the documentation; I save time by using the auto-complete. Another use case is when I need to use some API: I can get the list of methods with the documentation by using the auto-complete feature.
 My other problem with IDE's, such as eclipse, is that it's such an all or
 nothing investment.  You can't really just use part of it.  You must buy
 in to its editor, its interface with your SCM, its strictures of
 indentation style, etc.  Trying to deviate from any of it is such a large
 pain that it's just not worth it -- more so as the team working on a
 project gets larger.
completely wrong. You forget - Eclipse is just a plug-in engine with default plug-ins that implement a Java IDE. editor: prefer vim/emacs? there are eclipse plugins that implement both. SCM: there are _tons_ of SCM plug-ins! just use whatever you prefer. I use git and there's a neat UI for that. *But*, sometimes I prefer git's command line. what to do? no problem, I can open a terminal window inside eclipse and run any command I want! I work on unix and my local eclipse (on windows) can open remote files on the unix machine. eclipse does everything for me including giving me a shell to run remote commands. indentation style: there's nothing easier. go to eclipse properties. for each language you have installed you can configure "styles" and eclipse will indent, color, and format your code in whatever way you want. you don't have to like or use eclipse, or any other IDE, but if you are not familiar with the tool, don't provide mis-information.
 Sorry, I'll stop ranting.

 Sigh,
 Brad
As I said.. "I have become convinced..." It might not actually be true, and it might not hold for everyone, but I've seen it frequently enough that I've started to doubt statements to the contrary. I could well be wrong, but I'm not going to accept your word any more than you accept mine. You are correct that for every generalization there are good exceptions.

To address a few of your points, I've tried several of the various plugins, both at the editor and scm layers. I've talked with a whole bunch of other people I consider experts who have done the same. The answer that's come back every single time... the plugins suck. The only ones that actually are long-term usable are the defaults. Maybe one year that'll change, but forgive me for not holding my breath. That doesn't mean it's not possible, just means that the effort of doing a _good_ job hasn't been worth the community's time, and that's fine.

Anyway.. since I'm fairly confident that D isn't ever going to abandon the pieces I care about, and might well enable the pieces you care about, it's kinda pointless to argue about it.

Later,
Brad
May 17 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Brad,

 My other problem with IDE's, such as eclipse, is that it's such an all
 or nothing investment.  You can't really just use part of it.  You
 must buy in to its editor, its interface with your SCM, its
 strictures of indentation style, etc.  Trying to deviate from any of
 it is such a large pain that it's just not worth it -- more so as the
 team working on a project gets larger.
For VS, you might have a point. However for D, I use Descent and I haven't found any of those to be a problem. Getting people to agree on how to set it up I expect would be a bigger problem.
May 17 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
BCS wrote:
 Hello Brad,
 
 My other problem with IDE's, such as eclipse, is that it's such an all
 or nothing investment.  You can't really just use part of it.  You
 must buy in to its editor, its interface with your SCM, its
 strictures of indentation style, etc.  Trying to deviate from any of
 it is such a large pain that it's just not worth it -- more so as the
 team working on a project gets larger.
For VS, you might have a point. However for D, I use Descent and I haven't found any of those to be a problem. Getting people to agree on how to set it up I expect would be a bigger problem.
Actually, Descent isn't perfect, either. For example, it mandates that cases in a switch MUST be aligned with the braces. What's more fun is that you can't override it until AFTER it's corrected YOU.

Oh, and how it indents multiline function calls is completely retarded.

And every time I try to autocomplete a templated function call, it insists on inserting ALL of the template arguments, even when they're supposed to be derived.

Don't get me wrong, I quite like Descent. But as soon as you try to make a program "smart", you're going to start getting it wrong.

</rant>

  -- Daniel
May 17 2009
next sibling parent reply grauzone <none example.net> writes:
 Oh, and how it indents multiline function calls is completely retarded.
 
 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
This is why I don't like IDEs. Plus, every time you type something, stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto completion hints, or "intelligent" indentation. It's ridiculous. When I hit a key, I want the text editor to insert that key. Not do.... random.... stuff.

How do Eclipse users deal with it? Not look at the screen when typing?
May 17 2009
next sibling parent Georg Wrede <georg.wrede iki.fi> writes:
grauzone wrote:
 Oh, and how it indents multiline function calls is completely retarded.

 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
This is why I don't like IDEs. Plus, every time you type something, stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto completion hints, or "intelligent" indentation. It's ridiculous. When I hit a key, I want the text editor to insert that key. Not do.... random.... stuff. How do Eclipse users deal with it? Not look at the screen when typing?
That's why I do even Java with an editor and make.
May 17 2009
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
grauzone wrote:
 Oh, and how it indents multiline function calls is completely retarded.

 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
This is why I don't like IDEs. Plus, every time you type something, stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto completion hints, or "intelligent" indentation. It's ridiculous. When I hit a key, I want the text editor to insert that key. Not do.... random.... stuff. How do Eclipse users deal with it? Not look at the screen when typing?
I do look at the screen because I WANT to use those features. I don't try to work around them; I try to use them instead. If there's a feature I don't like I disable it; you just have to configure the application the way you like it. An application can't be configured from the beginning to satisfy all people's needs.
May 18 2009
prev sibling next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
grauzone wrote:
 Oh, and how it indents multiline function calls is completely retarded.

 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
That's now fixed in trunk and will be in the next release. If something is not available in the IDE or it annoys you, just disable it or make a feature request. :)
May 18 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
grauzone wrote:
 Oh, and how it indents multiline function calls is completely retarded.

 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
This is why I don't like IDEs. Plus, every time you type something, stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto completion hints, or "intelligent" indentation. It's ridiculous. When I hit a key, I want the text editor to insert that key. Not do.... random.... stuff. How do Eclipse users deal with it? Not look at the screen when typing?
That sounds like an old man complaining that modern television has sound and colors. You can disable all the features or just use a primitive text editor, since that's what you're used to, but those things are *not* problems. It is extremely useful to have the documentation tooltips instead of spending time searching manually in some book or whatever. The smart indentation is a godsend: if I paste a snippet, it is adjusted to my code so I can see how many braces I need at the end. I certainly do *NOT* want to go back to writing shell scripts or emacs LISP functions just to copy some snippet from one file to another!
May 18 2009
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
Daniel Keep wrote:
 
 BCS wrote:
 Hello Brad,

 My other problem with IDE's, such as eclipse, is that it's such an all
 or nothing investment.  You can't really just use part of it.  You
 must buy into its editor, its interface with your SCM, its
 scriptures of indentation style, etc.  Trying to deviate from any of
 it is such a large pain that it's just not worth it -- more so as the
 team working on a project gets larger.
For VS, you might have a point. However for D, I use Descent and I haven't found any of those to be a problem. Getting people to agree on how to set it up I expect would be a bigger problem.
Actually, Descent isn't perfect, either. For example, it mandates that cases in a switch MUST be aligned with the braces. What's more fun is that you can't override it until AFTER it's corrected YOU.
Just file a ticket.
 Oh, and how it indents multiline function calls is completely retarded.
 
 And every time I try to autocomplete a templated function call, it
 insists on inserting ALL of the template arguments, even when they're
 supposed to be derived.
That's been fixed now: http://www.dsource.org/projects/descent/changeset/1344
 Don't get me wrong, I quite like Descent.  But as soon as you try to
 make a program "smart", you're going to start getting it wrong.
 
 </rant>
 
   -- Daniel
May 18 2009
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Jacob Carlborg wrote:
 Daniel Keep wrote:
 Actually, Descent isn't perfect, either.  For example, it mandates that
 cases in a switch MUST be aligned with the braces.  What's more fun is
 that you can't override it until AFTER it's corrected YOU.
Just file a ticket.
The relevant ticket[1] is a year old, according to dsource... [1]: At least I *think* he's talking about this: http://dsource.org/projects/descent/ticket/82
May 18 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Frits van Bommel wrote:
 Jacob Carlborg wrote:
 Daniel Keep wrote:
 Actually, Descent isn't perfect, either.  For example, it mandates that
 cases in a switch MUST be aligned with the braces.  What's more fun is
 that you can't override it until AFTER it's corrected YOU.
Just file a ticket.
The relevant ticket[1] is a year old, according to dsource... [1]: At least I *think* he's talking about this: http://dsource.org/projects/descent/ticket/82
Well, I didn't know it was *that* important for using it. If you consider it really important, post something in the forums, reply to that ticket, or something like that. Well... posting in the newsgroup works too. ;-) http://dsource.org/projects/descent/changeset/1347
May 18 2009
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Ary Borenszweig wrote:
 Frits van Bommel wrote:
 Jacob Carlborg wrote:
 Daniel Keep wrote:
 Actually, Descent isn't perfect, either.  For example, it mandates that
 cases in a switch MUST be aligned with the braces.  What's more fun is
 that you can't override it until AFTER it's corrected YOU.
Just file a ticket.
The relevant ticket[1] is a year old, according to dsource... [1]: At least I *think* he's talking about this: http://dsource.org/projects/descent/ticket/82
Well, I didn't know it was *that* important for using it. If you consider it really important, post something in the forums, reply to that ticket, or something like that.
Why would I reply to it? I *wrote* it. (And yeah, it may not look important but it's pretty darn annoying)
 Well... posting in the newsgroup works too. ;-)
 
 http://dsource.org/projects/descent/changeset/1347
:) Unfortunately, most of my coding is on LDC nowadays, which isn't written in D...
May 18 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Frits van Bommel escribió:
 Ary Borenszweig wrote:
 Frits van Bommel wrote:
 Jacob Carlborg wrote:
 Daniel Keep wrote:
 Actually, Descent isn't perfect, either.  For example, it mandates 
 that
 cases in a switch MUST be aligned with the braces.  What's more fun is
 that you can't override it until AFTER it's corrected YOU.
Just file a ticket.
The relevant ticket[1] is a year old, according to dsource... [1]: At least I *think* he's talking about this: http://dsource.org/projects/descent/ticket/82
Well, I didn't know it was *that* important for using it. If you consider it really important, post something in the forums, reply to that ticket, or something like that.
Why would I reply to it? I *wrote* it.
LOL!
May 19 2009
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Yigal Chripun wrote:

...
 IMO, designing the language to support this better work-flow is a good 
 decision made by MS, and D should follow it instead of trying to get 
 away without an IDE.
I'm not sure about this. D is designed to be easier to parse than C++ (but that's saying nothing) to allow better tools to be made for it. I think this should be enough. C# has features that are a pain to do without an IDE. Autocomplete dictates that related functions should be named with the exact same prefix - even when this isn't logical. It also encourages names to be as descriptive as possible, in practice leading to a part of the api docs encoded in the function name. Extremely bloated names are the consequence of this. It doesn't always make code more readable imho. The documentation comments are in xml: pure insanity. I tried to generate documentation for my stuff at work once, expecting to be done in max 5 min. like ddoc. Turns out nobody at work uses documentation generation for a reason: it isn't really fleshed out and one-click from the IDE; in fact it is a pain in the arse compared to using ddoc. I should stop now before this turns into a rant.
May 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Lutger wrote:
 Yigal Chripun wrote:
 
 ...
 IMO, designing the language to support this better work-flow is a good 
 decision made by MS, and D should follow it instead of trying to get 
 away without an IDE.
 I'm not sure about this. D is designed to be easier to parse than C++ (but that's saying nothing) to allow better tools to be made for it. I think this should be enough. C# has features that are a pain to do without an IDE. Autocomplete dictates that related functions should be named with the exact same prefix - even when this isn't logical. It also encourages names to be as descriptive as possible, in practice leading to a part of the api docs encoded in the function name. Extremely bloated names are the consequence of this. It doesn't always make code more readable imho.
This I completely disagree with. Those are the same faulty reasons I already answered. An IDE does _not_ create bad programmers, and does _not_ encourage bad code. It does encourage descriptive names, which is a _good_ thing. Writing "strcpy" à la C is cryptic and *wrong*. Code is read a hundred times more than it's written, and a better name would be, for instance, "stringCopy". It's common nowadays to have terabyte-sized HDDs, so why do people try to save a few bytes in their source while sacrificing readability? The only issue I have with too-long names is when dealing with C/C++ code that prefixes all symbols with their file names/namespaces. At least in C++ this is solved by using namespaces, but this is a problem with the languages themselves and has nothing to do with the IDE.
 The documentation comments are in xml: pure insanity. I tried to generate documentation for my stuff at work once, expecting to be done in max 5 min. like ddoc. Turns out nobody at work uses documentation generation for a reason: it isn't really fleshed out and one-click from the IDE; in fact it is a pain in the arse compared to using ddoc.

 I should stop now before this turns into a rant.
I agree fully with this. XML doc comments are a mistake made by MS. Javadoc is a much better format, and even that can be improved. This, however, has nothing to do with the IDE. The important part is that the IDE parses whatever format is used and can show you the documentation via simple means - no need for you to spend time finding the documentation yourself.
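To make the comparison concrete, here is what a ddoc comment looks like; stringCopy is a made-up example function, not a library one:

/**
 * Copies src into the start of dst.
 *
 * Params:
 *     dst = destination buffer; must be at least src.length chars long
 *     src = source buffer
 *
 * Returns: the number of chars copied
 */
size_t stringCopy(char[] dst, char[] src)
{
    dst[0 .. src.length] = src[];   // slice copy, bounds-checked
    return src.length;
}

Generating the docs is just "dmd -D file.d" - no XML anywhere, and an IDE can show the same comment as a tooltip.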
May 19 2009
parent Lutger <lutger.blijdestijn gmail.com> writes:
Yigal Chripun wrote:
...
 
This I completely disagree with. Those are the same faulty reasons I already answered. An IDE does _not_ create bad programmers, and does _not_ encourage bad code. It does encourage descriptive names, which is a _good_ thing. Writing "strcpy" à la C is cryptic and *wrong*. Code is read a hundred times more than it's written, and a better name would be, for instance, "stringCopy". It's common nowadays to have terabyte-sized HDDs, so why do people try to save a few bytes in their source while sacrificing readability?
... This is not what I was saying. I'm not talking about strcpy vs stringCopy; stringCopy is short. I'm talking about things like SetCompatibleTextRenderingDefault, and this example isn't even so bad. Fact is, it is easier to come up with long identifiers when there is no penalty in the form of typing cost for doing so. It's not about bad programmers (or saving bytes, that's just ridiculous), but an IDE does encourage some kinds of constructs because they are easier in that environment. Good programmers come up with good, descriptive names, whether they program in an IDE or not. At work I must program in VB.NET. This language is pretty verbose in describing even the most common things. It's easier to parse when you're new to the language, but after a while I find all the verbosity gets in the way of readability.
May 24 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Ary,

 BCS escribió:
 
 I think that any real programming project nowadays (regardless of language) needs tools to help the programmer. The difference between D and C# is that with D you can get away without an IDE; with C# you won't get much at all done without one.
 
I can't agree with this. Most of the time I use an IDE for the autocompletion, not much for the build-and-jump-to-error stuff. And I don't see D being easier with regards to remembering the name of that function, which members a class has, or which module all these are in.
I'm not referring to editing. For that, with most any language you can get away without an IDE if you're willing, but it will cost you something.
 Why do you say that with D you can get away without an IDE but not with C#?
Because as often as not, I do. For some reason I have never gotten Descent working correctly. Most of the time the code outlining works, but I've never gotten auto-complete or integrated building to work.


 With C# without an IDE you can get away with pretty much everything, except you'll be slower at it (same goes for D without an IDE).
As above, I'm not talking about editing, but rather about the rest of the tool chain. Can you even set up a C# project without an IDE? I don't even know if it can be done. Yes, you can trigger a build from the command line, but setting up a project without it would require hand editing of XML (yuck), and the build tool IS Visual Studio.
 
 Again, this also applies to Java. When I started using Java I used the
 command line and an editor with just syntax highlighting, and made
 programs of several classes without problem. Refactoring was a PITA,
 and I'm thinking it's like that in D nowadays. :-P
I think it's like that in every language. The programs people work on nowadays are, no matter their representation, too complex for a person to navigate without tools.
May 17 2009
parent reply Jussi Jumppanen <jussij zeusedit.com> writes:
BCS Wrote:

 Can you even set up a C# project without an IDE?
Yes.
 I don't even know if it can be done. 
It is actually very easy to do: http://www.zeusedit.com/forum/viewtopic.php?t=2518 and it is even easier if you have a simple 'one file', throw-away project: http://www.zeusedit.com/forum/viewtopic.php?t=1235
 Yes you can trigger a build from the command line, but setting 
 up a project without it would require hand editing of XML (yuck) 
 and the build tool IS visual studio.
It is true that Visual Studio creates XML project/solution files and the contents of these files are overly complex. But these XML files are Visual Studio-specific, and a lot of their complexity comes from the fact that they contain extra information that is only needed by the IDE. If you use the MSBuild approach, the amount of XML needed to create a project/solution is much smaller, and in general the XML is fairly trivial.
May 17 2009
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Jussi Jumppanen wrote:
 BCS Wrote:
 

Yes.
 I don't even know if it can be done. 
It is actually very easy to do: http://www.zeusedit.com/forum/viewtopic.php?t=2518
I look at that, and all I can say is: "if that's easy, then what's this?"
 rebuild MyProgram
I will give you this, though: D's toolchain could use improvement in more complex builds. But it's still a hell of a lot simpler than C#'s.
 ...
 
 If you use the MsBuild approach the amount of XML that is needed to 
 create a project/solution is much smaller and in general the XML is 
 fairly trivial.  
If it's trivial, why does it need a tutorial? :P -- Daniel
May 17 2009
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Daniel Keep wrote:
 rebuild MyProgram
 I will give you this, though: D's toolchain could use improvement in more complex builds. But it's still a hell of a lot simpler than C#'s.
Have you tried DSSS? It's surprisingly feature-rich, and its syntax is a lot simpler than MSBuild's. IMO, any build more complex than setting a few options should be handled by a scripting language, though. Knocking together a Perl script to call your builder is often a lot easier than mucking around with huge configuration files (anyone who's used Ant can attest).
May 17 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Robert Fraser wrote:
 Daniel Keep wrote:
 rebuild MyProgram
 I will give you this, though: D's toolchain could use improvement in more complex builds. But it's still a hell of a lot simpler than C#'s.
Have you tried DSSS? It's surprisingly feature-rich, and its syntax is a lot simpler than MSBuild's.
-_- You realise that in order to be using rebuild, I HAVE to also have DSSS, right? I'm pretty sure Gregor stopped releasing rebuild-only packages quite some time ago. DSSS itself is OK, but I can't let you get away with saying its syntax is simpler than MSBuild's. Oh sure, it's not XML, but it's just... inscrutable. For example, I once had to ditch DSSS for a project because I needed to add a flag for Windows builds. Adding that flag killed all the other flags for all the other builds for some reason, even using the += thing. The lack of any sort of stable ordering for build steps is a pain, too. And it annoys me that I can't specify what a default build should do. DSSS is weird; even after Gregor wrote all the documentation for it, it still just didn't make sense. Maybe it's cursed or something. :P
 IMO, any build more complex than setting a few options should be handled
 by a scripting language, though. Knocking together a Perl script to call
 your builder is often a lot easier than mucking around with huge
 configuration files (anyone who's used Ant can attest).
I do have a build script for one of my projects. It's fairly large. The problem is, it's doing what this makefile rule would accomplish:

%.d: %.dw

It's always annoyed the crap out of me that we've lost such a basic transformative tool.

  -- Daniel

P.S.  No, I can't just use make; I'm on Windows.  I really, REALLY don't want to have to deal with that bullshit again.
May 17 2009
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 18 May 2009 12:43:34 +1000, Daniel Keep wrote:

 
 You realise that in order to be using rebuild, I HAVE to also have DSSS,
 right?  I'm pretty sure Gregor stopped releasing rebuild-only packages
 quite some time ago.
Not to trumpet my own horn, but have you considered my build tool called 'Bud'? And if you have then what is missing from it that you need? http://www.dsource.org/projects/build Gregor derived DSSS from my project. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
May 17 2009
parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Derek Parnell wrote:
 On Mon, 18 May 2009 12:43:34 +1000, Daniel Keep wrote:
 
  
 You realise that in order to be using rebuild, I HAVE to also have DSSS,
 right?  I'm pretty sure Gregor stopped releasing rebuild-only packages
 quite some time ago.
Not to trumpet my own horn, but have you considered my build tool called 'Bud'? And if you have then what is missing from it that you need? http://www.dsource.org/projects/build Gregor derived DSSS from my project.
Actually, the project that has the big build script is using bud. I think bud is supposed to have some sort of transformative feature, but I just couldn't make it work. Aside from that, it works just fine, so I never felt the need to replace it. [1]

The two main reasons I switched to rebuild over bud were -oq and -dc. Developing with various combinations of stable compilers, unstable compilers, custom compilers, phobos, tango stable and tango trunk makes -dc a godsend. And I'm just a neat freak when it comes to folders, hence my love of -oq :D.

  -- Daniel

[1] This project actually has a private copy of the entire D toolchain because I'm completely paranoid about breaking it.
May 18 2009
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Daniel Keep wrote:
 P.S.  No, I can't just use make; I'm on Windows.  I really, REALLY don't
 want to have to deal with that bullshit again.
http://gnuwin32.sourceforge.net/packages.html My current build script is cobbled together from Perl, Make, and DSSS. It sounds ugly, but when I tried it out on Linux (I usually use Windows), the entire thing built without a single change.
May 17 2009
next sibling parent Robert Fraser <fraserofthenight gmail.com> writes:
Robert Fraser wrote:
 Daniel Keep wrote:
 P.S.  No, I can't just use make; I'm on Windows.  I really, REALLY don't
 want to have to deal with that bullshit again.
http://gnuwin32.sourceforge.net/packages.html
Oops, wrong one: http://sourceforge.net/project/showfiles.php?group_id=9328&package_id=9393
May 17 2009
prev sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Robert Fraser wrote:
 Daniel Keep wrote:
 P.S.  No, I can't just use make; I'm on Windows.  I really, REALLY don't
 want to have to deal with that bullshit again.
http://gnuwin32.sourceforge.net/packages.html My current build script is cobbled together from Perl, Make, and DSSS. It sounds ugly, but when I tried it out on Linux (I usually use Windows), the entire thing built without a single change.
Mine is Python, bud and a modified version of Knuth's CWEB. I don't trust win32 "ports" of GNU tools [1] because there's usually some horrible incompatibility lurking in the shadows waiting to bite you on the arse. -- Daniel [1] I exclude Cygwin from this because it's running inside proper bash with largely proper UNIX semantics. It's also so fiddly and annoying to get to that I don't bother any more. :P
May 18 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Jussi,

 BCS Wrote:
 

 Can you even set up a C# project without an IDE?
Yes.
 I don't even know if it can be done.
 
It is actually very easy to do: http://www.zeusedit.com/forum/viewtopic.php?t=2518
Um, Step 1. OK. Step 2. OK. Step 3. Yup, no longer practical. If that's what it takes, you have just proven my point to my satisfaction. D can be built with just the compiler from the command line, but needs nothing like that .proj file.
 Yes you can trigger a build from the command line, but setting up a
 project without it would require hand editing of XML (yuck) and the
 build tool IS visual studio.
 
It is true that Visual Studio creates XML project/solution files and the contents of these files is overly complex. But these XML files are Visual Studio specific and a lot of their complexity comes from the fact that they contain extra information that is only needed by the IDE. If you use the MsBuild approach the amount of XML that is needed to create a project/solution is much smaller and in general the XML is fairly trivial.
Smaller than huge and fairly trivial in comparison to what? The only people who would set up a C# project that way are the kind who run OS X on an Xbox ( :-O Oh my, someone actually did that, I didn't know: http://www.forevergeek.com/2005/06/os_x_for_the_xbox/ ) because they like thumbing their nose at the big guys.
May 17 2009
parent Jesse Phillips <jessekphillips gmail.com> writes:
On Mon, 18 May 2009 02:51:19 +0000, BCS wrote:

 Hello Jussi,
 
 BCS Wrote:
 
 Can you even set up a C# project without an IDE?
 
Yes.
 I don't even know if it can be done.
 
It is actually very easy to do: http://www.zeusedit.com/forum/viewtopic.php?t=2518
Um, Step 1. OK. Step 2. OK. Step 3. Yup, no longer practical. If that's what it takes, you have just proven my point to my satisfaction. D can be built with just the compiler from the command line, but needs nothing like that .proj file.
Compiling C# from the command line isn't much different from the D or C family. Link in your libraries and give it your *.cs files. My projects haven't been very large, but I haven't yet had to provide non-source code for a build.
May 25 2009
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Ary Borenszweig wrote:
 BCS escribió:
 Hello Georg,

 So, in a way, Microsoft may be right in assuming that (especially when
 their thinking anyway is that everybody sits at a computer that's
 totally dedicated to the user's current activity anyhow) preposterous
 horse power is (or, should be) available at the code editor.
I think that any real programming project nowadays (regardless of language) needs tools to help the programmer. The difference between D and C# is that with D you can get away without an IDE; with C# you won't get much at all done without one.
I can't agree with this. Most of the time I use an IDE for the autocompletion, not much for the build-and-jump-to-error stuff. And I don't see D being easier with regards to remembering the name of that function, which members a class has, or which module all these are in. With C# without an IDE you can get away with pretty much everything, except you'll be slower at it (same goes for D without an IDE).
The more boilerplate code a language requires, the more important it is to have an IDE. Features that a language provides that allow you to write less code make an IDE less important. I really like IDEs. They let me think less when creating code. Of course, the other feature is notifying the user about errors sooner than their next compile. This saves a lot of time, regardless of whether your language requires significant cruft or not.
May 17 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Christopher Wright wrote:
 I really like IDEs. They let me think less when creating code.
It wouldn't be hard to do a competent IDE for D. After all, D is designed to make that job easy.
May 19 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Christopher Wright wrote:
 I really like IDEs. They let me think less when creating code.
It wouldn't be hard to do a competent IDE for D. After all, D is designed to make that job easy.
Like, for example, if you have this:

---
char[] someFunction(char[] name) {
    return "int " ~ name ~ ";";
}

class Foo {
    mixin(someFunction("variable"));
}

void main() {
    Foo foo = new Foo();
    foo.   // --> I'd really like the IDE to suggest "variable" to me
}
---

Do you really think implementing a *good* IDE for D is easy now? :-P

(of course Descent works in this case, but just because it has the full dmdfe in it... so basically a good IDE will need to be able to do CTFE, instantiate templates, etc., and all of those things are kind of unclear in the specification of the D language, so if you don't use dmdfe... well... I hope you get my point)
May 19 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 Do you really think implementing a *good* IDE for D is easy now? :-P
 
 (of course Descent works in this case, but just because it has the full 
 dmdfe in it... so basically a good IDE will need to be able to do CTFE, 
 instantiate templates, etc., and all of those things are kind of 
 unclear in the specification of the D language, so if you don't use 
 dmdfe... well... I hope you get my point)
The dmdfe is available, so one doesn't have to recreate it. That makes it easy :-)
May 19 2009
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 19 May 2009 18:59:59 -0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Ary Borenszweig wrote:
 Do you really think implementing a *good* IDE for D is easy now? :-P
  (of course Descent works in this case, but just because it has the  
 full dmdfe in it... so basically a good IDE will need to be able to do  
 CTFE, instantiate templates, etc., and all of those things are kind of  
 unclear in the specification of the D language, so if you don't use  
 dmdfe... well... I hope you get my point)
The dmdfe is available, so one doesn't have to recreate it. That makes it easy :-)
The source to gcc is available, so that makes porting gcc to another platform easy. -Steve
May 19 2009
prev sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Ary Borenszweig wrote:
 Do you really think implementing a *good* IDE for D is easy now? :-P

 (of course Descent works in this case, but just because it has the 
 full dmdfe in it... so basically a good IDE will need to be able to do 
 CTFE, instantiate templates, etc., and all of those things are kind 
 of unclear in the specification of the D language, so if you don't use 
 dmdfe... well... I hope you get my point)
The dmdfe is available, so one doesn't have to recreate it. That makes it easy :-)
Except if the IDE is not made in C++ ;-)
May 19 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Yigal Chripun:

 first, typos - eclipse has a built-in spell checker so all those 
 "lenght" will be underlined with an orange squiggly line.
A much better solution is to use "size" (or even "len") everywhere in D; that avoids such typos in the first place. An IDE is useful, but it's better to improve the language first, so later you don't use the IDE to write boilerplate or fix typos. -----------------
>If we ignore this fact D will become another niche academic language that no one uses.<
Unfortunately I think the academy isn't much interested in D. So if D doesn't succeed, it will just be a dead language. C# can generate code on the fly; this is done much less often than template programming in D, but it's doable.
>second, D needs to update its stone age compilation model copied from C++.<
Yes. Eventually Walter will need to take a look at the way people write C# and update D. "Moons" files that bundle modules, built-in bud-like functionality, DDLs and more. Things to think about. Bye, bearophile
May 18 2009
parent reply davidl <davidl nospam.org> writes:
On Mon, 18 May 2009 16:01:56 +0800, bearophile <bearophileHUGS lycos.com> wrote:

 Yigal Chripun:

 first, typos - eclipse has a built-in spell checker so all those
 "lenght" will be underlined with an orange squiggly line.
A much better solution is to use "size" (or even "len") everywhere in D, that avoids such typos in the first place. An IDE is useful, but it's better to improve the language first, so later you don't use the IDE to write boilerplate or fix typos. -----------------
 >If we ignore this fact D will become another niche academic language that no one uses.<
 Unfortunately I think the academy isn't much interested in D. So if D doesn't succeed, it will just be a dead language. C# can generate code on the fly; this is done much less often than template programming in D, but it's doable.
 >second, D needs to update its stone age compilation model copied from C++.<
 Yes. Eventually Walter will need to take a look at the way people write C# and update D. "Moons" files that bundle modules, built-in bud-like functionality, DDLs and more. Things to think about. Bye, bearophile
For D to become a successful language requires a longer time, and people need to pay a higher cost to move to D. Reusing the current .Net codebase is a very strong selling point. A C# frontend for D may be an easier approach. Companies like the idea of closing their source and obtaining higher performance. And we can not ignore the brilliance of the .Net framework.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
May 18 2009
next sibling parent "Tim Matthews" <tim.matthews7 gmail.com> writes:
On Tue, 19 May 2009 00:37:21 +1200, davidl <davidl nospam.org> wrote:




 For D to become a successful language requires a longer time, and people need to pay a higher cost to move to D. Reusing the current .Net codebase is a very strong selling point.

 A C# frontend for D may be an easier approach.

 Companies like the idea of closing their source and obtaining higher performance.

 And we can not ignore the brilliance of the .Net framework.
A project in the making that uses the D language compiled to CIL and can therefore call .NET base framework code: http://the-free-meme.blogspot.com/
May 18 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
davidl wrote:
 On Mon, 18 May 2009 16:01:56 +0800, bearophile <bearophileHUGS lycos.com> wrote:
 
 Yigal Chripun:

 first, typos - eclipse has a built-in spell checker so all those
 "lenght" will be underlined with an orange squiggly line.
A much better solution is to use "size" (or even "len") everywhere in D, that avoids such typos in the first place. An IDE is useful, but it's better to improve the language first, so later you don't use the IDE to write boilerplate or fix typos. -----------------
 >If we ignore this fact D will become another niche academic language that no one uses.<
 Unfortunately I think the academy isn't much interested in D. So if D doesn't succeed, it will just be a dead language. C# can generate code on the fly; this is done much less often than template programming in D, but it's doable.
 >second, D needs to update its stone age compilation model copied from C++.<
 Yes. Eventually Walter will need to take a look at the way people write C# and update D. "Moons" files that bundle modules, built-in bud-like functionality, DDLs and more. Things to think about. Bye, bearophile
 For D to become a successful language requires a longer time, and people need to pay a higher cost to move to D. Reusing the current .Net codebase is a very strong selling point. A C# frontend for D may be an easier approach. Companies like the idea of closing their source and obtaining higher performance. And we can not ignore the brilliance of the .Net framework.
What exactly is the brilliance of the .Net framework? I'd appreciate a few pointers. Or references. Or delegates :o).

Andrei
May 18 2009
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Andrei,


 I'd appreciate a few pointers. Or references. Or delegates :o).
 
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
 Andrei
 
May 18 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
BCS wrote:
 Reply to Andrei,
 

 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
Have you seen Linq? That's *amazing*! You can deal with expression ASTs and do really cool stuff with that. Like doing:

var results = someObjectsThatAreGoingToBeTakenFromTheDb.Where(o => o.Name == "Foo");

foreach(var result in results) {
    // There, the previous condition with the given expression has been
    // translated to SQL and executed.
}
May 18 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Ary,

 BCS wrote:
 
 Reply to Andrei,
 

 I'd appreciate a few pointers. Or references. Or delegates :o).
 
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
Have you seen Linq? That's *amazing*!
I don't think it adds anything that puts it much above the rest of the crowd in any way.
 You can deal with expression ASTs and do really cool stuff with that.
 Like doing:
 
 var results = someObjectsThatAreGoingToBeTakenFromTheDb.Where(o =>
 o.Name == "Foo");
I think this will work:

int delegate(int delegate(ref T)) Where(T)(T[] array, bool delegate(T) dg)
{
    struct Ret
    {
        T[] Array;
        bool delegate(T) Dg;

        int opApply(int delegate(ref T) idg)
        {
            // forward only the elements the predicate accepts
            foreach (ref T t; Array)
                if (Dg(t))
                    if (int ret = idg(t))
                        return ret;
            return 0;
        }
    }

    // heap-allocate so the delegate's context outlives this call
    auto r = new Ret;
    r.Array = array;
    r.Dg = dg;
    return &r.opApply;
}

If not, a little tweaking should cover it. There is a lib I played around with a while back that showed that D can do AST manipulation with expressions.
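A quick usage sketch for the above (the even-number predicate is made up for illustration):

import std.stdio;

void main()
{
    int[] xs = [1, 2, 3, 4];

    // only elements the predicate accepts reach the loop body
    foreach (ref int x; Where(xs, (int v) { return v % 2 == 0; }))
        writefln("%d", x);   // prints 2, then 4
}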
 foreach(var result in results) {
 // There, the previous condition with the given expression has been
 // translated to SQL and executed.
 }

 
I do too, but mostly because of the tools
May 18 2009
next sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Mon, May 18, 2009 at 1:47 PM, BCS <ao pathlink.com> wrote:

 I think this will work:

 int delegate(int delegate(ref T)) Where(T[] array, bool delegate(T) dg)
 {
 =A0 struct Ret
 =A0 {
 =A0 =A0 =A0 T[] Array;
 =A0 =A0 =A0 bool delegate(T) Dg;

 =A0 =A0 =A0 int opApply(int delegate(ref T) idg)
 =A0 =A0 =A0 {
 =A0 =A0 =A0 =A0 =A0 foreach(ref T t; Array)
 =A0 =A0 =A0 =A0 =A0 =A0 =A0if(Dg(t)) if(int ret =3D idg) return ret;
 =A0 =A0 =A0 =A0 =A0 return ret;
 =A0 =A0 =A0 }
 =A0 =A0 =A0 return &(Ret(array,dg)).opApply;
 =A0 }
 }

 If not, a little tweaking should cover it.

Using that, however, looks pretty ugly:

array.Where((SomeObject o) { return o.name == "foo"; });

D could really use some sugar for delegate literals, especially tiny 'predicate' literals like this:

array.Where(\o -> o.name == "foo");
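For what it's worth, the usual workaround today is a string 'lambda' compiled in with a mixin. A minimal D1-flavored sketch - where() here is a made-up helper, not a library function, and the element is visible inside the expression as 'a':

import std.stdio;

// splice the string expression into the loop at compile time
T[] where(char[] expr, T)(T[] arr)
{
    T[] result;
    foreach (a; arr)
        if (mixin(expr))      // e.g. "a % 2 == 0"
            result ~= a;
    return result;
}

void main()
{
    int[] xs = [1, 2, 3, 4, 5];
    auto evens = where!("a % 2 == 0")(xs);   // reads almost like a lambda
    writefln("%s", evens);                   // prints the matching elements
}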
May 18 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
BCS wrote:
 Reply to Ary,
 
 BCS wrote:

 Reply to Andrei,


 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
Have you seen Linq? That's *amazing*!
I don't think it adds anything that puts it much above the rest of the crowd in any way.
 You can deal with expression ASTs and do really cool stuff with that.
 Like doing:

 var results = someObjectsThatAreGoingToBeTakenFromTheDb.Where(o =>
 o.Name == "Foo");
I think this will work:

int delegate(int delegate(ref T)) Where(T)(T[] array, bool delegate(T) dg)
{
    struct Ret
    {
        T[] Array;
        bool delegate(T) Dg;

        int opApply(int delegate(ref T) idg)
        {
            foreach (ref T t; Array)
                if (Dg(t))
                    if (int ret = idg(t))
                        return ret;
            return 0;
        }
    }

    auto r = new Ret;
    r.Array = array;
    r.Dg = dg;
    return &r.opApply;
}

If not, a little tweaking should cover it.
The difference is that C# translates that code into this (more or less, this is just the idea!):

SqlConnection conn = ...;
conn.executeQuery("select * from SomeTable Where Name = 'Foo'");

What the "Where" method does is to receive an expression tree for "o => o.Name == 'Foo'", and using a visitor it converts it to an SQL statement. In D you don't have expression trees. The best you could do is to give it a string, but then you lose autocompletion, refactoring, nice compiler error messages and probably many other things.

The best thing about this is that the expression is represented using a class, say Func<From, To>. So you could say:

Func<From, bool> predicate = (From f) => f.Name == "Foo";

Now you can do:

From[] array = ...;
array.Where(predicate);

or:

DbObjects.Where(predicate);

In the first case, the predicate will be executed at run-time for each object in the array, much like your D example does. In the second case, however, predicate will be translated to SQL.
May 18 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Ary,

 BCS wrote:
 
 Reply to Ary,
 
 Have you seen Linq? That's *amazing*!
 
I don't think it adds anything that puts it much above the rest of the crowd in any way.
 You can deal with expression ASTs and do really cool stuff with
 that. Like doing:
 
 var results = someObjectsThatAreGoingToBeTakenFromTheDb.Where(o =>
 o.Name == "Foo");
 
I think this will work:

int delegate(int delegate(ref T)) Where(T)(T[] array, bool delegate(T) dg)
{
    struct Ret
    {
        T[] Array;
        bool delegate(T) Dg;

        int opApply(int delegate(ref T) idg)
        {
            foreach (ref T t; Array)
                if (Dg(t))
                    if (int ret = idg(t))
                        return ret;
            return 0;
        }
    }

    auto r = new Ret;
    r.Array = array;
    r.Dg = dg;
    return &r.opApply;
}

If not, a little tweaking should cover it.
 The difference is that C# translates that code into this (more or less, this is just the idea!):

 SqlConnection conn = ...;
 conn.executeQuery("select * from SomeTable Where Name = 'Foo'");
Only for LINQ to SQL (or the like). All LINQ is is a set of standard naming conventions and sugar. If I add a "Where" function to some SQL table object, I get the above as well.
 What the "Where" method does it to receieve an expression tree for "o
 => o.Name = 'Foo'", and using a visitor it converts it to an SQL
 statement. In D you don't have expression trees. The best you could do
 it to give it a string, but then you loose autocompletion,
 refactoring, nice compiler error messages and probably many other
 things.
I can see a Where!("Name","Age>")(someName, someAge) being not too hard to implement. Heck, with prepared statements it might even be trivial, maybe even non-templated.
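A rough sketch of how the clause building could work with CTFE - the table and parameter binding are left out, only the string construction is shown, and whereClause is a made-up helper (D1-flavored):

// builds "Name = ? AND Age > ?" from column specs at compile time;
// a trailing '>' or '<' picks the comparison, '=' is the default
char[] whereClause(char[][] specs)
{
    char[] result;
    foreach (i, s; specs)
    {
        if (i > 0)
            result ~= " AND ";
        if (s[$ - 1] == '>' || s[$ - 1] == '<')
            result ~= s[0 .. $ - 1] ~ " " ~ s[$ - 1] ~ " ?";
        else
            result ~= s ~ " = ?";
    }
    return result;
}

// forced to run at compile time:
const char[] sql = "SELECT * FROM SomeTable WHERE "
    ~ whereClause(["Name", "Age>"]);
// sql is "SELECT * FROM SomeTable WHERE Name = ? AND Age > ?";
// binding someName/someAge to the '?' placeholders is the
// prepared-statement half, omitted here.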
 
 The best thing about this is that the expression is represented using
 a class, say Func<From, To>. So you could say:
 
 Func<From, bool> predicate = (From f) => f.Name == "Foo";
 
 Now you can do:
 
 From[] array = ...;
 array.Where(predicate);
 or:
 
 DbObjects.Where(predicate);
 
 In the first case, the predicate will be executed at run-time for each
 object in the array, much like your D example does. In the second
 case, however, predicate will be translated to SQL.
 
OK, so the interesting part isn't the sugar (that's still ho-hum) but AST reflection.
May 18 2009
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
BCS wrote:
 Reply to Ary,
 
 BCS wrote:

 Reply to Ary,

 Have you seen Linq? That's *amazing*!
I don't think it adds anything that puts it much above the rest of the crowd in any way.
 You can deal with expression ASTs and do really cool stuff with
 that. Like doing:

 var results = someObjectsThatAreGoingToBeTakenFromTheDb.Where(o =>
 o.Name == "Foo");
I think this will work:

int delegate(int delegate(ref T)) Where(T)(T[] array, bool delegate(T) dg)
{
    struct Ret
    {
        T[] Array;
        bool delegate(T) Dg;

        int opApply(int delegate(ref T) idg)
        {
            foreach (ref T t; Array)
                if (Dg(t))
                    if (int ret = idg(t))
                        return ret;
            return 0;
        }
    }

    auto r = new Ret;
    r.Array = array;
    r.Dg = dg;
    return &r.opApply;
}

If not, a little tweaking should cover it.
 The difference is that C# translates that code into this (more or less, this is just the idea!):

 SqlConnection conn = ...;
 conn.executeQuery("select * from SomeTable Where Name = 'Foo'");
 Only for LINQ to SQL (or the like). All LINQ is is a set of standard naming conventions and sugar. If I add a "Where" function to some SQL table object, I get the above as well.
 What the "Where" method does it to receieve an expression tree for "o
 => o.Name = 'Foo'", and using a visitor it converts it to an SQL
 statement. In D you don't have expression trees. The best you could do
 it to give it a string, but then you loose autocompletion,
 refactoring, nice compiler error messages and probably many other
 things.
I can see a Where!("Name","Age>")(someName, someAge) being not too hard to implement. Heck, with prepared statements it might even be trivial, maybe even non-templated.
 The best thing about this is that the expression is represented using
 a class, say Func<From, To>. So you could say:

 Func<From, bool> predicate = (From f) => f.Name == "Foo";

 Now you can do:

 From[] array = ...;
 array.Where(predicate);
 or:

 DbObjects.Where(predicate);

 In the first case, the predicate will be executed at run-time for each
 object in the array, much like your D example does. In the second
 case, however, predicate will be translated to SQL.
OK, so the interesting part isn't the sugar (that's still ho-hum) but AST reflection.
Yes, sorry. Sometimes I mix both things. It's AST reflection that's nice.
May 18 2009
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
BCS wrote:

...
 
 all LINQ is is a set of standard naming conventions and sugar. If I add a "Where" function to some SQL table object, I get the above as well.
 
... Not really. LINQ is 'sugar' for the underlying libraries that implement querying. Instead of calling it just sugar, it is more proper to call it a language in its own right. LINQ to SQL is just one thing; the power of LINQ is that you separate queries from the source of data. You can write one query (ideally) that works with SQL, XML or plain arrays. It's not only that you don't have to write SQL queries anymore; a lot of messy for/while/etc. loops can be totally replaced with LINQ queries too.
May 19 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Lutger,

 BCS wrote:
 
 ...
 
 all LINQ is is a set of standard naming conventions and sugar. If I add a "Where" function to some SQL table object, I get the above as well.
 
... Not really, LINQ is 'sugar' for the underlying libraries that
As far as language features go, I'm even less impressed with sugar for libraries.
 implement querying. Instead of calling it just sugar, it is more proper to call it a language in its own right.
 
I still don't think it's anything too spectacular. The AST stuff on the other hand...
May 19 2009
parent "Tim Matthews" <tim.matthews7 gmail.com> writes:
On Wed, 20 May 2009 05:40:37 +1200, BCS <ao pathlink.com> wrote:

 Reply to Lutger,

 BCS wrote:
  ...

 all LINQ is is a set of standard naming conventions and sugar. If I add a "Where" function to some SQL table object, I get the above as well.
... Not really, LINQ is 'sugar' for the underlying libraries that
As far as language features go, I'm even less impressed with sugar for libraries.
 implement querying. Instead of calling it just sugar, it is more proper to call it a language in its own right.
 I still don't think it's anything too spectacular. The AST stuff on the other hand...
LINQ's syntactic sugar is not bad in my opinion. With .NET it's more acceptable to have syntactic sugar for a library it depends on, as that library is part of the base framework that is going to be there; but with D I think we could all agree that a LINQ-like query just doesn't fit in too well (for now at least).
May 19 2009
prev sibling next sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
BCS wrote:
 Reply to Andrei,
 

 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
And C# gets another 5 or so major feature requests implemented each major version. That isn't a bad state to be in, really. It's good enough, with a billion dollars in corporate backing, to make you the Current Big Language.
May 18 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Christopher Wright wrote:
 BCS wrote:
 Reply to Andrei,


 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
I can confirm straight from the source (important person, name withheld) that C# was, without any buts and ands, Microsoft's direct response to Java.
 And C# gets another 5 or so major feature requests implemented each major version. That isn't a bad state to be in, really. It's good enough, with a billion dollars in corporate backing, to make you the Current Big Language.
Except that it's still missing some essential capabilities.

Andrei
May 18 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Andrei Alexandrescu escribió:
 Christopher Wright wrote:
 BCS wrote:
 Reply to Andrei,


 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
I can confirm straight from the source (important person, name withheld) that C# was, without any buts and ands, Microsoft's direct response to Java.
 And C# gets another 5 or so major feature requests implemented each major version. That isn't a bad state to be in, really. It's good enough, with a billion dollars in corporate backing, to make you the Current Big Language.
 Except that it's still missing some essential capabilities.
Like...?
May 18 2009
parent "Nick Sabalausky" <a a.a> writes:
"Ary Borenszweig" <ary esperanto.org.ar> wrote in message 
news:gusps3$87l$1 digitalmars.com...
 Andrei Alexandrescu escribió:

 essential capabilities.
Like...?
Templates that don't suck.
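A small illustration of the difference (made-up examples; value parameters, specialization and compile-time dispatch are things C#-style generics can't express):

import std.stdio;

// value parameter: the dimension is part of the type
struct Vector(int N)
{
    double[N] data;
}

// specialization: a different body is picked per type at compile time
void describe(T : int)()   { writefln("an int"); }
void describe(T : float)() { writefln("a float"); }
void describe(T)()         { writefln("something else"); }

void main()
{
    Vector!(3) v;          // fixed-size, no boxing, no casts
    v.data[0] = 1.0;
    describe!(int)();      // prints: an int
    describe!(char)();     // prints: something else
}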
May 19 2009
prev sibling parent davidl <davidl nospam.org> writes:
ÔÚ Tue, 19 May 2009 00:14:50 +0800£¬BCS <ao pathlink.com> дµÀ:

 Reply to Andrei,


 I'd appreciate a few pointers. Or references. Or delegates :o).
1) the tool support is first class, 2) the support is backed by MS, 3) the docs are great, and 4) the language is highly consistent and conservative, i.e. nothing is added until they've got it right.
 Andrei
.Net apps are nearly open source by default (a lot of the code can be disassembled). Reusing them while keeping the commercial secret is itself a problem: companies don't want others to disassemble the app, tweak it a little bit and rebuild it (this is always possible if you don't strip the IL but only obfuscate it; for native code, you have a much higher barrier to rebuilding others' apps). The first 3 points can be shared with MS if we keep a reasonable consistency with MS stuff. A D IDE could be built immediately by compiling SharpDevelop, which is also considered to be high quality and good to use.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
May 18 2009
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 18 May 2009 12:00:02 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:


 I'd appreciate a few pointers. Or references. Or delegates :o).
The IDE is excellent. The docs are reasonable once you figure out how they are laid out. The framework is clearly built around one language, though: write in any other .NET language and you find weird quirks and shoehorning. Still, it makes sense to support .NET. Two killer features for me: 1. .NET is usually installed, or easily installed, on any MS system. 2. The reflection/attributes are awesome. It has warts, though: generics are next to useless, especially when compared to D templates.

-Steve
May 18 2009
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Steven Schveighoffer wrote:
 The IDE is excellent.  The docs are reasonable once you figure out how 
 they are laid out.
I really don't get what's so great about VS. After using JDT (Eclipse Java), I find VS kind of lacking. IMO, MS invests a lot of resources in areas that sell VS to companies rather than programmers. Automatic checking against UML models... okay, that's a cool feature, but I'd rather it could highlight my semantic errors without my having to click "build", or have more than 5 refactorings, or have quick-fixes for errors, or...
May 18 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, May 18, 2009 at 12:09 PM, Robert Fraser
<fraserofthenight gmail.com> wrote:
 Steven Schveighoffer wrote:
 The IDE is excellent.  The docs are reasonable once you figure out how they are laid out.

 I really don't get what's so great about VS. After using JDT (Eclipse Java), I find VS kind of lacking. IMO, MS invests a lot of resources in areas that sell VS to companies rather than programmers. Automatic checking against UML models... okay, that's a cool feature, but I'd rather it could highlight my semantic errors without my having to click "build", or have more than 5 refactorings, or have quick-fixes for errors, or...
It's a lot nicer with Visual Assist or (from what I hear) Resharper. But it is sad that you have to pay even more money to make the thing you paid so much money for in the first place more usable. Considering things like Eclipse are free. --bb
May 18 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Bill,

 But it is sad that you have to pay even
 more money to make the thing you paid so much money for in the first
 place more usable.
VS/MS/etc. is a for-profit ecosystem. They assume that your system and software are paid for by your boss, and he's spending 10-20 times that much on your paycheck, so who cares. At least that's the impression I get.
May 18 2009
parent reply "Tim Matthews" <tim.matthews7 gmail.com> writes:
On Tue, 19 May 2009 08:56:59 +1200, BCS <ao pathlink.com> wrote:

 VS/MS/etc. is a for-profit ecosystem. They assume that your system and software are paid for by your boss, and he's spending 10-20 times that much on your paycheck, so who cares. At least that's the impression I get.
I think VS Express editions that can be used to make great software, sell the software and not pay MS a single cent are very generous of them, and the .NET CIL is a genius idea. The most successful compilers are the ones that recognize that there are multiple languages and multiple architectures, and that there should be something in the middle. CIL just leaves the code in that middle form until the last minute. MS may not make the best operating systems, but the whole .NET thing is very good in my opinion, and I think Sun is better for their Solaris than their Java.
May 19 2009
parent reply grauzone <none example.net> writes:
 and the .NET CIL is a genius idea. The most successful compilers are the ones that recognize that there are multiple languages and multiple architectures, and that there should be something in the middle. CIL just leaves the code in that middle form until the last minute. MS may not make the best operating systems, but the whole .NET thing is very good in my
And what exactly is good about byte code? It's portable? My D code is portable too. Sure, it requires recompilation, but it doesn't need a clusterfuck-VM just for running it.
 opinion, and I think Sun is better for their Solaris than their Java.
.Net is just Microsoft's Java clone, and Sun didn't invent byte code either.
May 19 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
grauzone wrote:
 and the .NET CIL is a genius idea. The most successful compilers are the ones that recognize that there are multiple languages and multiple architectures, and that there should be something in the middle. CIL just leaves the code in that middle form until the last minute. MS may not make the best operating systems, but the whole .NET thing is very good in my
And what exactly is good about byte code? It's portable? My D code is portable too. Sure, it requires recompilation, but it doesn't need a clusterfuck-VM just for running it.
There are a few points here:

1. Users don't like compiling software. Hell, *I* don't like having to compile software, since it invariably doesn't work first go, even when the build instructions are correct (they often aren't.)

2. A very large number of Windows developers write closed-source software. The idea of having customers obtain and compile their software scares the pants off of them. If it didn't, they wouldn't invest so much money in obfuscators.

I hate to be the one to tell you this, but... MS didn't design .NET to make you happy. *ducks*

  -- Daniel
May 19 2009
parent reply "Tim Matthews" <tim.matthews7 gmail.com> writes:
On Wed, 20 May 2009 15:08:43 +1200, Daniel Keep  
<daniel.keep.lists gmail.com> wrote:

 I hate to be the one to tell you this, but... MS didn't design .NET to
 make you happy.  *ducks*

   -- Daniel
Windows is a very user-oriented OS, and to make the user happy they try to get all devs writing code that will work on Windows easily. Also, IIRC Java bytecode is interpreted by default but .NET is JIT/AOT compiled, which is how my CPU should be used. I think .NET is only interpreted on the .NET Micro Framework.
May 19 2009
parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, May 20, 2009 at 12:26 AM, Tim Matthews <tim.matthews7 gmail.com> wrote:
 On Wed, 20 May 2009 15:08:43 +1200, Daniel Keep
 <daniel.keep.lists gmail.com> wrote:

 I hate to be the one to tell you this, but... MS didn't design .NET to
 make you happy.  *ducks*

   -- Daniel
 Windows is a very user-oriented OS, and to make the user happy they try to get all devs writing code that will work on Windows easily. Also, IIRC Java bytecode is interpreted by default but .NET is JIT/AOT compiled, which is how my CPU should be used. I think .NET is only interpreted on the .NET Micro Framework.

Just, uh, wow.  Please dude, read up on this stuff first.
May 19 2009
parent "Tim Matthews" <tim.matthews7 gmail.com> writes:
On Wed, 20 May 2009 17:31:14 +1200, Jarrett Billingsley  
<jarrett.billingsley gmail.com> wrote:

 Just, uh, wow.  Please dude, read up on this stuff first.
This thread turned into a Java vs .NET argument. I'm sorry, but I don't know the details of the JVM's just-in-time compiler. The virtual machine in the name plus the design goals ("It should be interpreted, threaded, and dynamic") led me to this misunderstanding:

http://en.wikipedia.org/wiki/Java_(programming_language)#Primary_goals
http://en.wikipedia.org/wiki/Comparison_of_the_Java_and_.NET_platforms
May 20 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Steven Schveighoffer" <schveiguy yahoo.com> wrote in message 
news:op.ut4vynx5eav7ka steves.networkengines.com...
 The docs are reasonable once you figure out how  they are laid out.
I find the docs to be so slow as to be almost unusable. F*(*^*&%*& AJAX.
May 19 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:gus0lu$1smj$2 digitalmars.com...


 I'd appreciate a few pointers. Or references. Or delegates :o).
C# has good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse (which, come to think of it, makes me wonder: if Java has gotten as fast as many people claim, why is Eclipse still such a sluggish POS?).
May 19 2009
next sibling parent BCS <none anon.com> writes:
Hello Nick,

 If Java has gotten as fast as many people claim, why is Eclipse still such a sluggish POS?
 
For the same reason that anything is slow: people more than make up for any gains in perf with more features (and shoddy code).
May 19 2009
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:gus0lu$1smj$2 digitalmars.com...
 

 I'd appreciate a few pointers. Or references. Or delegates :o).
 C# has good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse (which, come to think of it, makes me wonder: if Java has gotten as fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
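The cast-hiding point, in D terms (Stack and TypedStack are made-up examples): a container over Object forces a cast at every access, while a templated one keeps everything statically typed:

class Stack                    // pre-generics style: everything is an Object
{
    Object[] items;
    void push(Object o) { items ~= o; }
    Object pop()
    {
        Object o = items[$ - 1];
        items = items[0 .. $ - 1];
        return o;              // caller must cast back to the real type
    }
}

class TypedStack(T)            // template style: the cast disappears
{
    T[] items;
    void push(T t) { items ~= t; }
    T pop()
    {
        T t = items[$ - 1];
        items = items[0 .. $ - 1];
        return t;              // statically typed, no cast needed
    }
}

With Stack the caller writes "auto f = cast(Foo) s.pop();" and hopes; with a TypedStack!(Foo) the compiler checks it.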
May 19 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 I'd appreciate a few pointers. Or references. Or delegates :o).
 C# has good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse (which, come to think of it, makes me wonder: if Java has gotten as fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
May 19 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 appreciate a few pointers. Or references. Or delegates :o).
 C# has some really good tools, although the newer versions of VS are almost 
 as much of a bloated unresponsive mess as Eclipse (which, come to think of 
 it, makes me wonder - if Java has gotten so fast as many people claim, why 
 is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
I agree. Then again, templates aren't easy to implement and they were 
understandably already busy implementing the using statement.

Andrei
May 19 2009
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Andrei Alexandrescu wrote:

...
 What the heck do you need generics for when you have real templates?  To me,
 generics seem like just a lame excuse for templates.
I agree. Then, templates aren't easy to implement and they were understandably already busy implementing the using statement. Andrei
While I don't fully understand how generics work under the hood in .NET, 
there are some benefits to how it is done. For example, you can use runtime 
reflection on generic types. And the jit compiler instantiates them at 
runtime. They may serve a different purpose than templates:

"Anders Hejlsberg: To me the best way to understand the distinction between 
C# generics and C++ templates is this: C# generics are really just like 
classes, except they have a type parameter. C++ templates are really just 
like macros, except they look like classes."

It seems that lack of structural typing is seen as a feature:

"When you think about it, constraints are a pattern matching mechanism. You 
want to be able to say, "This type parameter must have a constructor that 
takes two arguments, implement operator+, have this static method, has these 
two instance methods, etc." The question is, how complicated do you want this 
pattern matching mechanism to be? There's a whole continuum from nothing to 
grand pattern matching. We think it's too little to say nothing, and the 
grand pattern matching becomes very complicated, so we're in-between."

From: http://www.artima.com/intv/genericsP.html
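For what it's worth, D already sits near the "grand pattern matching" end of 
that continuum. A small sketch (assuming a D2 compiler recent enough to have 
template constraints; the function and names are mine):

import std.stdio;

// The constraint requires, structurally, that T supports binary +;
// no IArithmetic-style interface is needed.
T sum(T)(T[] values) if (is(typeof(T.init + T.init) : T))
{
    T total = values[0];
    foreach (v; values[1 .. $])
        total = total + v;
    return total;
}

void main()
{
    writeln(sum([1, 2, 3]));  // 6
    writeln(sum([1.5, 2.5])); // 4
}

Whether that much pattern-matching power is a feature or a liability is 
exactly what the quote above is arguing about.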
May 19 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Lutger" <lutger.blijdestijn gmail.com> wrote in message 
news:gv090o$225$1 digitalmars.com...
 Andrei Alexandrescu wrote:

 ...
 What the heck do you need generics for when you have real templates?  To 
 me,
 generics seem like just a lame excuse for templates.
I agree. Then, templates aren't easy to implement and they were understandably already busy implementing the using statement. Andrei
 While I don't fully understand how generics work under the hood in .NET, 
 there are some benefits to how it is done. For example, you can use runtime 
 reflection on generic types. And the jit compiler instantiates them at 
 runtime. They may serve a different purpose than templates:

 "Anders Hejlsberg: To me the best way to understand the distinction between 
 C# generics and C++ templates is this: C# generics are really just like 
 classes, except they have a type parameter. C++ templates are really just 
 like macros, except they look like classes."

 It seems that lack of structural typing is seen as a feature:

 "When you think about it, constraints are a pattern matching mechanism. You 
 want to be able to say, "This type parameter must have a constructor that 
 takes two arguments, implement operator+, have this static method, has 
 these two instance methods, etc." The question is, how complicated do you 
 want this pattern matching mechanism to be? There's a whole continuum from 
 nothing to grand pattern matching. We think it's too little to say nothing, 
 and the grand pattern matching becomes very complicated, so we're 
 in-between."

 From: http://www.artima.com/intv/genericsP.html
Maybe there's something to C#'s generics, but until the old (and I do mean 
old) issue of "There's an IComparable, so why the hell won't MS give us an 
IArithmetic so we can actually use arithmetic operators on generic code?" 
gets fixed (and at this point I'm convinced they've never had any intent of 
ever fixing that), I don't care what else they can do: C#'s generics 
implementation falls squarely into the categories of "crap" and "almost 
useless".
May 20 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Nick Sabalausky wrote:

 but until the old (and I do mean old) issue of "There's an IComparable, so 
 why the hell won't MS give us an IArithmetic so we can actually use 
 arithmetic operators on generic code?" gets fixed (and at this point I'm 
 convinced they've never had any intent of ever fixing that), I don't care 


 "crap" and "almost useless". 
In C#, operator overloads are static methods, and interfaces cannot specify 
static methods.
May 20 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:gv0p4e$uvv$1 digitalmars.com...
 Nick Sabalausky wrote:

 but until the old (and I do mean old) issue of "There's an IComparable, 
 so why the hell won't MS give us an IArithmetic so we can actually use 
 arithmetic operators on generic code?" gets fixed (and at this point I'm 
 convinced they've never had any intent of ever fixing that), I don't care 


 "crap" and "almost useless".
methods, and interfaces cannot specify static methods.
Then how does IComparable work?
May 20 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Nick Sabalausky wrote:
 "Christopher Wright" <dhasenan gmail.com> wrote in message 
 news:gv0p4e$uvv$1 digitalmars.com...
 Nick Sabalausky wrote:

 but until the old (and I do mean old) issue of "There's an IComparable, 
 so why the hell won't MS give us an IArithmetic so we can actually use 
 arithmetic operators on generic code?" gets fixed (and at this point I'm 
 convinced they've never had any intent of ever fixing that), I don't care 


 "crap" and "almost useless".
methods, and interfaces cannot specify static methods.
Then how does IComparable work?
It uses a member function instead.
May 20 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:gv29vn$7a0$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Christopher Wright" <dhasenan gmail.com> wrote in message 
 news:gv0p4e$uvv$1 digitalmars.com...
 Nick Sabalausky wrote:

 generics, but until the old (and I do mean old) issue of "There's an 
 IComparable, so why the hell won't MS give us an IArithmetic so we can 
 actually use arithmetic operators on generic code?" gets fixed (and at 
 this point I'm convinced they've never had any intent of ever fixing 


 squarely into the categories of "crap" and "almost useless".
methods, and interfaces cannot specify static methods.
Then how does IComparable work?
It uses a member function instead.
And they can't do the same for arithmetic?
May 20 2009
next sibling parent Christopher Wright <dhasenan gmail.com> writes:
Nick Sabalausky wrote:
 "Christopher Wright" <dhasenan gmail.com> wrote in message 
 news:gv29vn$7a0$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Christopher Wright" <dhasenan gmail.com> wrote in message 
 news:gv0p4e$uvv$1 digitalmars.com...
 Nick Sabalausky wrote:

 generics, but until the old (and I do mean old) issue of "There's an 
 IComparable, so why the hell won't MS give us an IArithmetic so we can 
 actually use arithmetic operators on generic code?" gets fixed (and at 
 this point I'm convinced they've never had any intent of ever fixing 


 squarely into the categories of "crap" and "almost useless".
methods, and interfaces cannot specify static methods.
Then how does IComparable work?
It uses a member function instead.
And they can't do the same for arithmetic?
I believe the rationale for using static functions is so that you can add null to something. (The indexing operator, mind you, is a member property, so this doesn't always hold.) Additionally, this gets rid of opX_r. In practice, I doubt anyone uses that. But it's too late to make that change.
May 21 2009
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 20 May 2009 23:40:54 -0400, Nick Sabalausky <a a.a> wrote:

 "Christopher Wright" <dhasenan gmail.com> wrote in message
 news:gv29vn$7a0$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Christopher Wright" <dhasenan gmail.com> wrote in message
 news:gv0p4e$uvv$1 digitalmars.com...
 Nick Sabalausky wrote:

 generics, but until the old (and I do mean old) issue of "There's an
 IComparable, so why the hell won't MS give us an IArithmetic so we  
 can
 actually use arithmetic operators on generic code?" gets fixed (and  
 at
 this point I'm convinced they've never had any intent of ever fixing


 falls
 squarely into the categories of "crap" and "almost useless".
methods, and interfaces cannot specify static methods.
Then how does IComparable work?
It uses a member function instead.
And they can't do the same for arithmetic?
Keep in mind that the member method does not map to an operator. So you still have to call it directly: object.compareTo(object2); -Steve
May 21 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Lutger wrote:
 Andrei Alexandrescu wrote:
 
 ...
 What the heck do you need generics for when you have real templates?  To me,
 generics seem like just a lame excuse for templates.
I agree. Then, templates aren't easy to implement and they were understandably already busy implementing the using statement. Andrei
 While I don't fully understand how generics work under the hood in .NET, 
 there are some benefits to how it is done. For example, you can use runtime 
 reflection on generic types. And the jit compiler instantiates them at 
 runtime. They may serve a different purpose than templates:

 "Anders Hejlsberg: To me the best way to understand the distinction between 
 C# generics and C++ templates is this: C# generics are really just like 
 classes, except they have a type parameter. C++ templates are really just 
 like macros, except they look like classes."

 It seems that lack of structural typing is seen as a feature:

 "When you think about it, constraints are a pattern matching mechanism. You 
 want to be able to say, "This type parameter must have a constructor that 
 takes two arguments, implement operator+, have this static method, has 
 these two instance methods, etc." The question is, how complicated do you 
 want this pattern matching mechanism to be? There's a whole continuum from 
 nothing to grand pattern matching. We think it's too little to say nothing, 
 and the grand pattern matching becomes very complicated, so we're 
 in-between."

 From: http://www.artima.com/intv/genericsP.html
Oh, so Wal^H^H^Ha friend of mine I was talking to was right: there's some 
missing-of-the-point going on. The code generation aspect of templates is a 
blind spot the size of Canada.
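To put the code generation point in, well, code - a minimal D2 sketch (the 
field names are invented for illustration): an ordinary function builds 
source text at compile time and mixin splices it in, and the 
runtime-instantiation model has no counterpart for this.

import std.stdio;

// Builds the source of one int getter per field name, at compile time.
string makeGetters(string[] fields)
{
    string code;
    foreach (f; fields)
        code ~= "int " ~ f ~ "() { return _" ~ f ~ "; }\n";
    return code;
}

struct Point
{
    private int _x, _y;
    mixin(makeGetters(["x", "y"])); // generates x() and y()
}

void main()
{
    auto p = Point(1, 2);
    writeln(p.x(), ", ", p.y()); // 1, 2
}

Andrei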
May 20 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
Andrei Alexandrescu wrote:
 Lutger wrote:
 Andrei Alexandrescu wrote:

 ...
 What the heck do you need generics for when you have real 
 templates?  To me,
 generics seem like just a lame excuse for templates.
I agree. Then, templates aren't easy to implement and they were understandably already busy implementing the using statement. Andrei
 While I don't fully understand how generics work under the hood in .NET, 
 there are some benefits to how it is done. For example, you can use runtime 
 reflection on generic types. And the jit compiler instantiates them at 
 runtime. They may serve a different purpose than templates:

 "Anders Hejlsberg: To me the best way to understand the distinction between 
 C# generics and C++ templates is this: C# generics are really just like 
 classes, except they have a type parameter. C++ templates are really just 
 like macros, except they look like classes."

 It seems that lack of structural typing is seen as a feature:

 "When you think about it, constraints are a pattern matching mechanism. You 
 want to be able to say, "This type parameter must have a constructor that 
 takes two arguments, implement operator+, have this static method, has 
 these two instance methods, etc." The question is, how complicated do you 
 want this pattern matching mechanism to be? There's a whole continuum from 
 nothing to grand pattern matching. We think it's too little to say nothing, 
 and the grand pattern matching becomes very complicated, so we're 
 in-between."

 From: http://www.artima.com/intv/genericsP.html
Oh, so Wal^H^H^Ha friend of mine I was talking to was right: there's some missing of the point point going on. The code generation aspect of templates is a blind spot of the size of Canada. Andrei
I think you miss the point here. Generics and code generation are two 
separate and orthogonal features that were conflated together by C++.

While you can do powerful stuff with templates, it smells of trying to write 
Haskell code with the C pre-processor. If you want to see a clean solution 
to this issue, look at Nemerle. Essentially, their AST macro system provides 
multi-level compilation.

C++ templates are a horrible hack designed to wean C programmers off the 
pre-processor, and D templates provide mostly cosmetic changes to this. They 
do not solve the bigger design issue.
May 20 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Yigal Chripun wrote:
 I think you miss the point here.
 Generics and code generation are two separate and orthogonal features 
 that where conflated together by C++.
It's kind of odd, then, that for example the Generative Programming book (http://www.generative-programming.org) chose to treat the two notions in conjunction.
 while you can do powerful stuff with templates it smells of trying to 
 write Haskel code with the C pre-proceesor.
 if you want to see a clean solution to this issue look at Nemerle.
 essentially, their AST Macro system provides multi-level compilation.
 
 c++ templates are a horrible hack designed to ween off C programmers 
 from using the pre-processor and the D templates provide mostly cosmetic 
  changes to this. they do not solve the bigger design issue.
What is the bigger design issue? Andrei
May 20 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Wed, May 20, 2009 at 1:09 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Yigal Chripun wrote:
 I think you miss the point here.
 Generics and code generation are two separate and orthogonal features that
 where conflated together by C++.
It's kind of odd, then, that for example the Generative Programming book (http://www.generative-programming.org) chose to treat the two notions in conjunction.
Yeh, there's definitely an overlap. Orthogonal isn't quite the right word 
there.

I'm reading a bit on C++/CLI right now, which is C++ extended to 
inter-operate with CLR. C++/CLI has *both* classic C++ templates and CLR 
generics:

template<typename T> ...
// all code specializations generated at compile-time (if at all)

generic<typename T> ...
// code generated at compile time unconditionally, specialized at run-time.

I'm not clear on exactly what happens at runtime in the generic<> case. I 
had been thinking it was simply that the compiler does some type checking at 
compile time and the VM code then just manipulates pointers to Object from 
there. That may be what happens in Java generics, but in CLR generics at 
least you can specialize on non-Object value types and that apparently does 
not result in everything getting boxed. So it seems like there's a little 
something extra going on.

I think the main reason for having Generics is that they're the best anyone 
currently knows how to do at the IL bytecode level. Generics give you a way 
to define generic parameterized types that work across all the languages 
that target a given VM's bytecode. But that doesn't preclude any language 
that targets that VM from *also* implementing compile-time templates, or 
code generators, or AST macros at the source code level. But the problem 
with source-level code generation is that you then need the source code in 
order to use the library. I think they were aiming for a model where you 
ship just the compiled assembly, and then you have everything you need to 
use it. Period. (I think.)

At any rate, a tech that requires inclusion of source code is not very 
interesting to Microsoft, because Microsoft doesn't generally like to let 
people see their source code in the first place, and they know that many of 
their biggest customers don't like to either. They're nervous enough about 
just putting de-compileable bytecode out there.

--bb
May 20 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Bill Baxter" <wbaxter gmail.com> wrote in message 
news:mailman.151.1242855932.13405.digitalmars-d puremagic.com...
 On Wed, May 20, 2009 at 1:09 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Yigal Chripun wrote:
 I think you miss the point here.
 Generics and code generation are two separate and orthogonal features 
 that
 where conflated together by C++.
It's kind of odd, then, that for example the Generative Programming book (http://www.generative-programming.org) chose to treat the two notions in conjunction.
 Yeh, there's definitely an overlap. Orthogonal isn't quite the right word 
 there.

 I'm reading a bit on C++/CLI right now, which is C++ extended to 
 inter-operate with CLR. C++/CLI has *both* classic C++ templates and CLR 
 generics:

 template<typename T> ...
 // all code specializations generated at compile-time (if at all)

 generic<typename T> ...
 // code generated at compile time unconditionally, specialized at run-time.

 I'm not clear on exactly what happens at runtime in the generic<> case. I 
 had been thinking it was simply that the compiler does some type checking 
 at compile time and the VM code then just manipulates pointers to Object 
 from there. That may be what happens in Java generics, but in CLR generics 
 at least you can specialize on non-Object value types and that apparently 
 does not result in everything getting boxed. So it seems like there's a 
 little something extra going on.

 I think the main reason for having Generics is that they're the best anyone 
 currently knows how to do at the IL bytecode level. Generics give you a way 
 to define generic parameterized types that work across all the languages 
 that target a given VM's bytecode. But that doesn't preclude any language 
 that targets that VM from *also* implementing compile-time templates, or 
 code generators, or AST macros at the source code level. But the problem 
 with source-level code generation is that you then need the source code in 
 order to use the library. I think they were aiming for a model where you 
 ship just the compiled assembly, and then you have everything you need to 
 use it. Period. (I think.)

 At any rate, a tech that requires inclusion of source code is not very 
 interesting to Microsoft, because Microsoft doesn't generally like to let 
 people see their source code in the first place, and they know that many of 
 their biggest customers don't like to either. They're nervous enough about 
 just putting de-compileable bytecode out there.
Maybe this is naive, but what about an AST-level template/generic? Couldn't that provide for the best of both worlds? For instance, suppose (purely hypothetically) that the .NET assembly system were changed to allow the source for a D/C++ style of source-level template to be embedded into the assembly. Then they'd be able to do D/C++ style source-level template/code-generation. Right? Now obviously the big problem with that is it would only be usable in the same language it was originally written in. So, instead of getting that cross-language support by going all the way down to the IL bytecode level to implement generics (which, as you said, would somehow prevent the flexibility that the D/C++ style enjoys) suppose it only went down as far as a language-agnostic AST? I suppose that might make reverse-engineering easier which MS might not like, but I'm not suggesting this as something that MS should like or should even do, but rather suggesting it as (business issues completely aside) something that would possibly gain the benefits of both styles.
May 20 2009
next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Nick Sabalausky wrote:
 ...
 
 Maybe this is naive, but what about an AST-level template/generic? Couldn't 
 that provide for the best of both worlds?
 
 For instance, suppose (purely hypothetically) that the .NET assembly system 
 were changed to allow the source for a D/C++ style of source-level template 
 to be embedded into the assembly. Then they'd be able to do D/C++ style 
 source-level template/code-generation. Right? Now obviously the big problem 
 with that is it would only be usable in the same language it was originally 
 written in. So, instead of getting that cross-language support by going all 
 the way down to the IL bytecode level to implement generics (which, as you 
 said, would somehow prevent the flexibility that the D/C++ style enjoys) 
 suppose it only went down as far as a language-agnostic AST?
 
 ...
What I've always thought might be an interesting experiment would be to change templates in LDC so that instead of generating an AST, they generate code that generates code. So when you use A!(T), what happens is that at runtime the template is "run" with T as an argument. This generates a chunk of LLVM bitcode which LLVM then assembles to machine code and links into the program. This alleviates the problem with using source in that if you embed the template's actual source, then you suddenly ALSO have to embed the standard library's source and the source of any other libraries you happened to compile with. Oh, and the same version of the compiler. -- Daniel
May 20 2009
prev sibling next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Nick Sabalausky wrote:
 
 Maybe this is naive, but what about an AST-level template/generic? Couldn't 
 that provide for the best of both worlds?
 
 For instance, suppose (purely hypothetically) that the .NET assembly system 
 were changed to allow the source for a D/C++ style of source-level template 
 to be embedded into the assembly. Then they'd be able to do D/C++ style 
 source-level template/code-generation. Right? Now obviously the big problem 
 with that is it would only be usable in the same language it was originally 
 written in. So, instead of getting that cross-language support by going all 
 the way down to the IL bytecode level to implement generics (which, as you 
 said, would somehow prevent the flexibility that the D/C++ style enjoys) 
 suppose it only went down as far as a language-agnostic AST?
 
 I suppose that might make reverse-engineering easier which MS might not 
 like, but I'm not suggesting this as something that MS should like or should 
 even do, but rather suggesting it as (business issues completely aside) 
 something that would possibly gain the benefits of both styles. 
 
 
That's the exact opposite of a good solution. I already mentioned several 
times before the language Nemerle, which provides the correct solution. An 
important fact - Nemerle is a .NET language and it does _NOT_ need to modify 
the underlying system.

The way it works in Nemerle is pretty simple: the language has a syntax to 
compose/decompose AST. A macro in Nemerle is just a plain old function that 
uses the same syntax you'd use at run-time, and this "function" can use APIs 
to access the compiler's internal data structures (the AST) and manipulate 
it. You "connect" it to your regular code by either just calling it like a 
regular function or by using attributes.

Let's compare to see the benefits:

in D:

tango.io.Stdout("Hello World").newline; // prints at run-time
pragma(msg, "Hello World"); // prints at compile-time

in Nemerle:

macro m ()
{
    Nemerle.IO.printf ("compile-time\n");
    <[ Nemerle.IO.printf ("run-time\n") ]>;
}

// and you call it like this:
m();
Nemerle.IO.printf ("run-time\n");

Notice how both use the same code, the same printf function? The only change 
is that the second line inside the macro is enclosed inside <[ ]>, which 
means output (return) the AST for this code instead of actually running the 
code and returning the result of the call.

Macros in Nemerle need to be compiled, since they are regular Nemerle code, 
and they need to be loaded by the compiler (added to the command line) in 
order to compile the code that calls the macros. Essentially these are just 
plugins for the compiler.

Compared to the elegance of this solution, templates are just a crude 
copy-paste mechanism implemented inside the compiler.
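To be fair, D's CTFE gets partway toward the "same code at both levels" 
idea. A small sketch (my own, not from the Nemerle docs):

import std.stdio;

int square(int x) { return x * x; }

enum ct = square(7);     // the compiler runs square() via CTFE
static assert(ct == 49); // checked while compiling
pragma(msg, "square(7) evaluated at compile time");

void main()
{
    writeln("run-time: ", square(7)); // same function, run at run time
}

But CTFE can only compute values (or source strings for mixin); it cannot 
inspect or build the AST the way a Nemerle macro can, which is exactly the 
bigger design issue I mean.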
May 21 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Yigal Chripun wrote:
 Nick Sabalausky wrote:
 I suppose that might make reverse-engineering easier which MS might 
 not like, but I'm not suggesting this as something that MS should like 
 or should even do, but rather suggesting it as (business issues 
 completely aside) something that would possibly gain the benefits of 
 both styles.
 That's the exact opposite of a good solution. I already mentioned several 
 times before the language Nemerle, which provides the correct solution. An 
 important fact - Nemerle is a .NET language and it does _NOT_ need to 
 modify the underlying system.

 The way it works in Nemerle is pretty simple: the language has a syntax to 
 compose/decompose AST. A macro in Nemerle is just a plain old function that 
 uses the same syntax you'd use at run-time, and this "function" can use 
 APIs to access the compiler's internal data structures (the AST) and 
 manipulate it. You "connect" it to your regular code by either just calling 
 it like a regular function or by using attributes.

 Let's compare to see the benefits:

 in D:

 tango.io.Stdout("Hello World").newline; // prints at run-time
 pragma(msg, "Hello World"); // prints at compile-time

 in Nemerle:

 macro m ()
 {
     Nemerle.IO.printf ("compile-time\n");
     <[ Nemerle.IO.printf ("run-time\n") ]>;
 }

 // and you call it like this:
 m();
 Nemerle.IO.printf ("run-time\n");

 Notice how both use the same code, the same printf function? The only 
 change is that the second line inside the macro is enclosed inside <[ ]>, 
 which means output (return) the AST for this code instead of actually 
 running the code and returning the result of the call.

 Macros in Nemerle need to be compiled, since they are regular Nemerle code, 
 and they need to be loaded by the compiler (added to the command line) in 
 order to compile the code that calls the macros. Essentially these are just 
 plugins for the compiler.

 Compared to the elegance of this solution, templates are just a crude 
 copy-paste mechanism implemented inside the compiler.
Nemerle's interesting, but it has its own issues. The largest one is that it 
will have to beat history: languages with configurable syntax have failed in 
droves in the 1970s.

Before I got into D, I was working on Enki. Enki was my own programming 
language and of course made D look like a piece of crap. In Enki, you had 
only very few primitives related to macro expansion, and you could construct 
all language elements - if, while, for, structures, classes, exceptions, you 
name it, from those primitive elements.

There were two elements that convinced me to quit Enki. One was that I'd got 
word of a language called IMP72. IMP72 embedded the very same ideas Enki 
had, with two exceptions: (1) it was created in 1972, and (2) nobody gave a 
damn ever since. IMP72 (and there were others too around that time) started 
with essentially one primitive and then generated itself with a bootstrap 
routine, a notion that completely wowed me and I erroneously thought would 
have the world wowed too.

The second reason was that I've had many coffees and some beers with Walter 
and he convinced me that configurable syntax is an idea that people just 
don't like. Thinking a bit more, I realized that humans don't operate well 
with configurable syntax. To use the hackneyed comparison, no natural 
language or similar concoction has configurable syntax. Not even musical 
notation or whatnot. There's one syntax for every human language. I 
speculated that humans can learn one syntax for a language and then wire 
their brains to just pattern match semantics using it. Configurable syntax 
just messes with that approach, and besides makes any program hugely 
context-dependent and consequently any large program a pile of crap.

That being said, I have no idea whether or not Nemerle will be successful. I 
just speculate it has an uphill battle to win.

Andrei
May 21 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Before I got into D, I was working on Enki. Enki was my own programming
 language and of course made D look like a piece of crap. In Enki, you
 had only very few primitives related to macro expansion, and you could
 construct all language elements - if, while, for, structures, classes,
 exceptions, you name it, from those primitive elements.
 There were two elements that convinced me to quit Enki. One was that I'd
 got word of a language called IMP72. IMP72 embedded the very same ideas
 Enki had, with two exceptions: (1) it was created in 1972, and (2)
 nobody gave a damn ever since. IMP72 (and there were others too around
 that time) started with essentially one primitive and then generated
 itself with a bootstrap routine, notion that completely wowed me and I
 erroneously thought would have the world wowed too.
 The second reason was that I've had many coffees and some beers with
 Walter and he convinced me that configurable syntax is an idea that
 people just don't like. Thinking a bit more, I realized that humans
 don't operate well with configurable syntax. To use the hackneyed
 comparison, no natural language or similar concoction has configurable
 syntax. Not even musical notation or whatnot. There's one syntax for
 every human language. I speculated that humans can learn one syntax for
 a language and then wire their brains to just pattern match semantics
 using it. Configurable syntax just messes with that approach, and
 besides makes any program hugely context-dependent and consequently any
 large program a pile of crap.
 That being said, I have no idea whether or not Nemerle will be
 successful. I just speculate it has an uphill battle to win.
 Andrei
This is pretty much a special case of a more general statement about 
customizability. Customizability is good as long as there are also sane 
default/de facto standard ways of doing things and simple things are still 
made simple.

Take Emacs or vi, for example. I absolutely despise both because they have 
very annoying, idiosyncratic ways of doing basic stuff like saving a file, 
navigating through a file, etc. The backspace, delete, home, etc. keys don't 
always do what I've come to expect them to do out of the box. I know all 
this stuff is customizable, but the barrier to entry of learning how to 
configure all this stuff is much higher than the barrier to just using a 
simple GUI text editor like gedit or Notepad++ instead. I don't care how 
powerful these programs are, they're still not special enough that violating 
such basic conventions is acceptable.

Bringing this analogy back to language design, if you make a language very 
highly configurable and don't provide good defaults, the barrier to entry 
will just be too high. If people have to understand a whole bunch of 
intricacies of the macro system to do anything more complex than "Hello, 
world", the language will be confined to a highly devoted niche. On the 
other hand, if you do provide strong conventions and sane defaults, people 
will probably avoid violating them because doing so would make their code no 
longer portable from programmer to programmer, and would probably break a 
whole bunch of library code, etc. that relies on the convention. In other 
words, they would feel like they were creating a whole new language, and for 
all practical purposes they would be. Thus, the purpose of this 
customizability would be defeated.

As I've said before in various places, my favorite thing about D is that it 
takes a level-headed, pragmatic view on so many issues that other languages 
fight holy wars about. These include performance vs. simplicity, safety vs. 
flexibility, etc. This is just another one to add to the list. Having a sane 
subset that's something like a hybrid between Java and Python, but then 
putting D's template system on top of it for when the sane subset just 
doesn't cut it, is a pragmatic, level-headed solution to the holy war 
between meta-languages that let (force?) you to customize everything and 
don't have a well-defined sane, simple subset, and excessively rigid static 
languages that sometimes don't make complex things possible.

(Note: Don't get me wrong, IMHO, parts of the template system have actually 
earned a rightful place in the sane subset, just not the more advanced 
metaprogramming stuff.)
May 21 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 Bringing this analogy back to language design, if you make a language very
highly
 configurable and don't provide good defaults, the barrier to entry will just be
 too high.  If people have to understand a whole bunch of intricacies of the
macro
 system to do anything more complex than "Hello, world", the language will be
 confined to a highly devoted niche.  On the other hand, if you do provide
strong
 conventions and sane defaults, people will probably avoid violating them
because
 doing so would make their code no longer portable from programmer to
programmer,
 and would probably break a whole bunch of library code, etc. that relies on the
 convention.  In other words, they would feel like they were creating a whole
new
 language, and for all practical purposes they would be.  Thus, the purpose of
this
 customizability would be defeated.
The symptom is visible: even with good defaults and all, such languages 
invariably come with the advice "don't use our best feature". That's 
terrible. It's bad language design to put in power that can do more harm 
than good, to the extent that you openly recommend against using the 
feature. Like you have a car with a very powerful engine but not a clutch to 
match (there are many, Subaru Impreza comes to mind). Then you have this 
awesome power on paper, but invariably the mechanic tells you: don't put the 
pedal to the metal if you want to keep having a car.

I therefore much prefer D's templates, which I use and also recommend to 
non-advanced users, plus the occasional string mixin, which is a seldom-used 
feature instrumental to only a minority of idioms.

Andrei
May 21 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:gv3ubr$2ul$1 digitalmars.com...
 Yigal Chripun wrote:

 Nemerle's interesting, but it has its own issues. The largest one is that 
 it will have to beat history: languages with configurable syntax have 
 failed in droves in the 1970s.

 Before I got into D, I was working on Enki. Enki was my own programming 
 language and of course made D look like a piece of crap. In Enki, you had 
 only very few primitives related to macro expansion, and you could 
 construct all language elements - if, while, for, structures, classes, 
 exceptions, you name it, from those primitive elements.

 There were two elements that convinced me to quit Enki. One was that I'd 
 got word of a language called IMP72. IMP72 embedded the very same ideas 
 Enki had, with two exceptions: (1) it was created in 1972, and (2) nobody 
 gave a damn ever since. IMP72 (and there were others too around that time) 
 started with essentially one primitive and then generated itself with a 
 bootstrap routine, notion that completely wowed me and I erroneously 
 thought would have the world wowed too.
There are many possible reasons for a language's failure. One of the biggest 
is lack of visibility. Who has ever heard of IMP72? Sure, that lack of 
visibility could have been because people hated that particular aspect of 
the language, but it could also have been from any one of a number of other 
reasons.
 The second reason was that I've had many coffees and some beers with 
 Walter and he convinced me that configurable syntax is an idea that people 
 just don't like. Thinking a bit more, I realized that humans don't operate 
 well with configurable syntax. To use the hackneyed comparison, no natural 
 language or similar concoction has configurable syntax. Not even musical 
 notation or whatnot. There's one syntax for every human language. I 
 speculated that humans can learn one syntax for a language and then wire 
 their brains to just pattern match semantics using it. Configurable syntax 
 just messes with that approach, and besides makes any program hugely 
 context-dependent and consequently any large program a pile of crap.
So I take it AST Macros are no longer on the table for D3?
May 21 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 There are many possible reasons for a failed language's failure. One of the 
 biggest is lack of visibility. Who has ever heard of IMP72? Sure, that lack 
 of visibility could have been because people hated that particular aspect of 
 the language, but it could also have been from any one of a number of other 
 reasons.
As I said, there were many languages with configurable syntax created during that period. None was even remembered. But then, correlation is not causation :o).
 The second reason was that I've had many coffees and some beers with 
 Walter and he convinced me that configurable syntax is an idea that people 
 just don't like. Thinking a bit more, I realized that humans don't operate 
 well with configurable syntax. To use the hackneyed comparison, no natural 
 language or similar concoction has configurable syntax. Not even musical 
 notation or whatnot. There's one syntax for every human language. I 
 speculated that humans can learn one syntax for a language and then wire 
 their brains to just pattern match semantics using it. Configurable syntax 
 just messes with that approach, and besides makes any program hugely 
 context-dependent and consequently any large program a pile of crap.
So I take it AST Macros are no longer on the table for D3?
AST macros can be implemented to not allow configurable syntax. Andrei
May 21 2009
prev sibling next sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
--- Disclaimer: this is a bit long. Read it later. ---

Andrei Alexandrescu wrote:
 Yigal Chripun wrote:
 Macros in Nemerle need to be compiled since they are regular Nemerle 
 code and they need to be loaded by the compiler (added to the command 
 line) in order to compile the code the calls the macros.

 essentially these are just plugins for the compiler.

 compared to the elegance of this solution, templates are just a crude 
 copy-paste mechanism implemented inside the compiler.
Nemerle's interesting, but it has its own issues. The largest one is that it will have to beat history: languages with configurable syntax have failed in droves in the 1970s.
This actually raises a larger issue. IMHO, it's not about configurable 
syntax, OOP, or any single paradigm. What really matters for a language is 
the continuum that the newbie-to-guru programmer faces during his ascent 
through the language.

A *perfect* language should be reasonably usable by the newbie. While he 
advances, he will encounter both needs and solutions for his increasingly 
advanced/complex programming projects. Now, *any* serious discontinuity, or 
forced change of paradigm, will constitute a hurdle. And each such hurdle 
only serves as a point where this programmer (maybe momentarily, but still) 
considers other alternatives. If such a point is conspicuous or steep 
enough, he'll most probably venture to another language.

A programming language that has this in mind will work like Windows: there's 
always an obvious (who said good?? it's Windows, for crying out loud) way to 
go, and for you to feel you're advancing in your career.
 Before I got into D, I was working on Enki. Enki was my own programming 
 language and of course made D look like a piece of crap. In Enki, you 
 had only very few primitives related to macro expansion, and you could 
 construct all language elements - if, while, for, structures, classes, 
 exceptions, you name it, from those primitive elements.
Lisp, Forth, anyone???
 There were two elements that convinced me to quit Enki. One was that I'd 
 got word of a language called IMP72. IMP72 embedded the very same ideas 
 Enki had, with two exceptions: (1) it was created in 1972, and (2) 
 nobody gave a damn ever since. IMP72 (and there were others too around 
 that time) started with essentially one primitive and then generated 
 itself with a bootstrap routine, notion that completely wowed me and I 
 erroneously thought would have the world wowed too.
Yeah, the academic honey pot. Always looks seductive in TeX formatted papers!
 The second reason was that I've had many coffees and some beers with 
 Walter and he convinced me that configurable syntax is an idea that 
 people just don't like.
You must forgive him, he's almost as old as I am!!! :O [1]
 Thinking a bit more, I realized that humans don't operate well with
 configurable syntax.
Actually, it's not only about that. Configurable syntax is of course a major 
issue. But there's another one, and that's redundancy.

In any real-life language, there's some amount of redundancy. For example, 
in many languages, both the predicate and the object are (is this the word?) 
inflected, in unison. This gives the listener the ability to skip (or not 
hear, because of background noise) the ending of either word, and then 
understand the sentence as a whole. In a similar vein, verbs have a 
different ending with plural and single, even though the "number" is still 
available from the rest of the sentence, or simply even the context.

Redundancy lets people receive oral communication without undue fixation on 
single-phoneme differences in the utterance. One might consider that a form 
of error checking, like 9-bit parity memory chips. (Something you might want 
to remember when people ask for compulsory empty parentheses at the end of 
functions and methods.) ((See my next paragraph.))
 To use the hackneyed comparison, no natural language or similar
 concoction has configurable syntax. Not even musical notation or
 whatnot. There's one syntax for every human language. I speculated
 that humans can learn one syntax for a language and then wire their
 brains to just pattern match semantics using it. Configurable syntax
 just messes with that approach, and besides makes any program hugely
 context-dependent and consequently any large program a pile of crap.
We may have a continuum here. At the one end is the configurable syntax, at 
the other end the cast-in-stone regular syntax (which often comes with a 
reduced vocabulary).

I read (IIRC, Scientific American, late 1970's) that the average size of 
vocabulary that a (then) working-class London person in a "lesser" 
neighbourhood used during a week was only three hundred words! (This factoid 
was so shocking to me that I will not forget it till the day I'm on my 
death-bed. I've got more than 40000 words in each of English, Swedish, and 
Finnish. [4])

As an opposite, take this paragraph from an imaginary math book: "Now, 
substituting a^e*(2pi*sin(alpha[inverse of the second derivative of beta at 
the index])) for any first term in every polynomial in the quaternion, it 
becomes obvious that the derivation needs a temporary extra term, which we 
will call Z(fab). Now, after the substitution (which we will use through 
page 34), we will now study the short-term changes in every dimension we're 
investigating here."

Wouldn't you say this is "configurable syntax"? And the poor student doesn't 
have a choice but to try to follow this crap. (No wonder the working-class 
guy has no more a chance of ever following this than the productive, but oh, 
so challenged member of this newsgroup.)

All my musings here make me write two things:

1: A language has to *provide* an obvious (and followable) path from the 
ridiculously simple to the ultimately esoteric, with no (or as few as 
possible) discontinuities in mind set. (But see [2].)

2: Any _categorical_ statement that contains an implicit assessment of the 
{audience's|programmer's|critic's} abilities, and their relevance to the 
level that a particular construct in a language demands, is inherently 
obsolete. An example is "configurable syntax is an idea that people just 
don't like".
 That being said, I have no idea whether or not Nemerle will be 
 successful. I just speculate it has an uphill battle to win.
We better make sure D doesn't climb the same hill. And damn sure that the 
potholes in the D uphill aren't larger than the disciples' stride.

---------

[1] Configurable syntax belongs to the same group as recursion. You can 
program for years without ever needing it (or understanding that you did 
need it, actually). And then, you see it in a textbook, and all they used it 
for was Fibonacci. Gee, great, you think, and then you write your own 
Fibonacci, and then just forget about the whole thing. They simply don't (or 
didn't, at the time I read textbooks on programming) ever show its relevance 
in real life everyday programming.

(Yes, maybe at that time, hard disks with directory hierarchies were not 
common. (I used CP/M, couldn't afford UNIX. Neither did they.) Otherwise 
they'd all have had an example of summing the sizes of files in a directory 
tree. Which would at once have made the reader feel recursion actually has 
some relevance to their life.)

Configurable syntax isn't exactly what Walter makes his bread and butter on. 
Neither do I. So neither of us has any reason (today) to use it, or even 
imagine the benefits of it.

Having said that, I remember the time (years ago) when I suggested in this 
newsgroup that we should have a macro language above the regular programming 
syntax. I suggested it would be somewhat like Lisp, and it would then let us 
do some most amazing things, totally inconceivable to us at the time. I 
almost became the laughing stock of the crowd. (And those things were 
(mostly) much less amazing than what D templates do now. Except for a little 
configurable syntax stuff. :-) )

Similarly, I see configurable syntax as a way (somewhat orthogonal to 
templates, but not fully) of making it possible to *easily* write some kinds 
of programs that we really can't imagine right now.

As an example of how hard it is to get this stuff across, I'll admit to the 
following: in the old days, I used to imagine that Lisp is the all-powerful 
super language for _anything_ (except for bit twiddling and systems work), 
and that if you couldn't do something in Lisp, it was only because of your 
own limitations as a person. Then, one day, I read that they're implementing 
macros! I dropped from my chair! If there ever was a language I didn't 
expect to need macros, Lisp definitely was it. Then, during the next years I 
read articles that made me believe (I wouldn't be preposterous enough to 
state I /understood/) that they actually were needed, and that they advanced 
the applicability of the language significantly.

So, configurable syntax for D, I assume *will* be a reality. But I hope it 
won't be before 2.0 is official. :-( !!!!

And while I'm at it, I think that in D4 (not D3) we will have a total 
overhaul of the current template syntax. (!!!) Yes, it beats the G-string 
off C++, no question. But that is not enough. And, when you really look at 
it, the "ease" of D templates is an illusion. The "D like syntax", the 
"invisibility" of the template system, is only to appease newcomers, to piss 
off C++ folks, and to delude ourselves. Time will show that it will need to 
be more overtly Lisp like, and that actually a (sort of) difference in 
syntax to regular D is well motivated.

[2] This is in contrast to the "uphill potholes" I was talking about before. 
Why it's okay (IMHO) to have a slightly different syntax is that the mental 
leap from regular programming to templates is huge anyway. And it can't be 
made to go away either [3]. (Runtime versus compile time is already hard 
enough for many. Then we have "is my code producing binary code, or is it 
producing some other source code? Then what happens? Can that in turn 
produce yet more source code?? Am I forced to only use this Recursion Stuff 
all over the place?", etc.)

So, then it would only be natural to have a slightly different syntax. That 
would also give us possibilities beyond the current ones, to (first partly, 
and then more completely) implement a system of configurable syntax to D.

In the same vein that D still implements an ASM language, there should be a 
(ehh...) "Higher Level" language at the other end. Precisely like D has 
never deluded itself of being efficient enough at the Very Low Level to 
really make ASM obsolete, it should not delude itself with notions of not 
needing a fundamental VHL (Very High Level) language at the other end.

The current Template Language is excellent, beats anybody in sight, but C# 
has implemented it, too. Or at least the more marketable details of it. But 
we have to maintain our lead. And that's done by advancing the bleeding edge 
so it stays just beyond their reach.

[3] One can try to hide a difference by making something "look the same". 
But that only serves to disorient people.

Do you remember that there were *two* mazes in DD? Way past the "xyzzy" 
graffiti, you ended up in a maze. I spent a whole week around there, before 
a girlfriend pointed out that "the corridors are alike" but yesterday "the 
corridors are all different". That's when I understood there were two mazes. 
No wonder I'd had a feeling of being totally clueless there.

Microsoft has made a career of making stuff look alike when they're not, and 
non-alike when they are. It's poison to rational, intelligent users. But the 
idiots who just learn by rote memory what to do when, couldn't care less. Or 
understand the point of our grief.

[4] How would you measure how many words you have in a language? The easiest 
way (back when there were books) was to have somebody else look up a random 
word in a random page of a dictionary, and ask you to explain it. Do that 
like 80 times, and then calculate the percentage you got right. Compare that 
to how many words the dictionary brags of having, usually right on the 
cover. (You do need a proper dictionary, with more than 50000 words, of 
course.)
May 21 2009
next sibling parent reply Jason House <jason.house gmail.com> writes:
Georg Wrede Wrote:

 
 --- Disclaimer: this is a bit long. Read it later. ---
Wow, you're right. Sadly, I stopped reading about 80% through (discussion on 
D4).

At a high level, I agree with your assessment about an easy growth path from 
newbie to guru. I should also add that a language must make the code of 
gurus remain understandable for all but the newest of newbies. (Ever try to 
read through STL or a boost library?)

I also had an idea for a configurable syntax language that I almost started 
developing. The concept was dead simple:

1. Provide the ability to call generated code at compile time. D comes close 
to this, but D has some restrictions on which code can be called.
2. Expose all compiler primitives for potential use in user code: parse 
trees, types, IL, raw assembly, etc...
3. Provide hooks to replace specific compilation phases (or wedge tasks 
between phases)

With that, the language was sufficiently capable to morph into any existing 
language. Everything else, including most language design, was "merely" 
standard library work.

My inspiration was seeing various boost libraries such as spirit and lambda. 
They seemed like the coolest things ever, but all they did was add new 
syntax for a common coding need. I hadn't heard of LINQ but knew such a 
thing was feasible in my language...
 Andrei Alexandrescu wrote:
 Yigal Chripun wrote:
 Macros in Nemerle need to be compiled since they are regular Nemerle 
 code and they need to be loaded by the compiler (added to the command 
 line) in order to compile the code the calls the macros.

 essentially these are just plugins for the compiler.

 compared to the elegance of this solution, templates are just a crude 
 copy-paste mechanism implemented inside the compiler.
Nemerle's interesting, but it has its own issues. The largest one is that it will have to beat history: languages with configurable syntax have failed in droves in the 1970s.
This actually raises a larger issue. IMHO, it's not about configurable syntax, OOP, or any single paradigm. What really matters for a language is, the continuum that the newbie--to--guru programmer faces during his ascent through the language. A *perfect* language should be reasonably usable by the newbie. While he advances, he will encounter both needs and solutions for his increasingly advanced/complex programming projects. Now, *any* serious discontinuity, or forced change of paradigm, will constitute a hurdle. And each such hurdle only serves as a point where this programmer (maybe momentarily, but still), considers other alternatives. If such a point is conspicuous or steep enough, he'll most probably venture to another language. A programming language that has this in mind, will work like Windows: there's always an obvious (who said good?? it's Windows, for crying out loud) way to go, and for you to feel you're advancing on your career.
 Before I got into D, I was working on Enki. Enki was my own programming 
 language and of course made D look like a piece of crap. In Enki, you 
 had only very few primitives related to macro expansion, and you could 
 construct all language elements - if, while, for, structures, classes, 
 exceptions, you name it, from those primitive elements.
Lisp, Forth, anyone???
 There were two elements that convinced me to quit Enki. One was that I'd 
 got word of a language called IMP72. IMP72 embedded the very same ideas 
 Enki had, with two exceptions: (1) it was created in 1972, and (2) 
 nobody gave a damn ever since. IMP72 (and there were others too around 
 that time) started with essentially one primitive and then generated 
 itself with a bootstrap routine, notion that completely wowed me and I 
 erroneously thought would have the world wowed too.
Yeah, the academic honey pot. Always looks seductive in TeX formatted papers!
 The second reason was that I've had many coffees and some beers with 
 Walter and he convinced me that configurable syntax is an idea that 
 people just don't like.
You must forgive him, he's almost as old as I am!!! :O [1]
 Thinking a bit more, I realized that humans don't operate well with
 configurable syntax.
Actually, it's not only about that. Configurable syntax is of course a major issue. But there's an other one. And that's redundancy. In any Real-Life language, there's some amount of redundancy. For example, in many languages, both the predicate and the object are (is this the word) inflected (?), in unison. This gives the ability for the listener to skip (or not hear because of background noise) the ending of either word, and then understand the sentence as a whole. In a similar vein, verbs have a different ending with plural and single, even though the "number" is still available from the rest of the sentence, or simply even the context. Redundancy lets people receive oral communication without undue fixation on single-phoneme differences in the utterance. One might consider that as a form of error checking, like 9-bit parity memory chips. (Something you might want to remember when people ask for compulsory empty parenthesis at the end of functions and methods.) ((See my next paragraph.))
 To use the hackneyed comparison, no natural language or similar
 concoction has configurable syntax. Not even musical notation or
 whatnot. There's one syntax for every human language. I speculated
 that humans can learn one syntax for a language and then wire their
 brains to just pattern match semantics using it. Configurable syntax
 just messes with that approach, and besides makes any program hugely
 context-dependent and consequently any large program a pile of crap.
We may have a continuum here. At the one end is the configurable syntax, at the other end the cast-in-stone regular syntax (which often comes with a reduced vocabulary). I read (IIRC, Sicentific American, late 1970's) that the average size of vocabulary that a (then) working-class London person in a "lesser" neighbourhood used during a week, was only three hundred words! (This factoid was so shocking to me that I will not forget it till the day I'm on my death-bed. I've got more than 40000 words in each of English, Swedish, and Finnish. [4]) As an opposite, take this paragraph from an imaginary math book: "Now, substituting a^e*(2pi*sin(alpha[inverse of the second derivative of beta at the index])) for any first term in every polynomial in the quaternion, it becomes obvious that the derivation needs a temporary extra term, which we will call Z(fab). Now, after the substitution (which we will use thorugh page 34), we now will study the short-term changes in every dimension we're investigating here." Wouldn't you say, this is "configurable syntax". And the poor student doesn't have a choice but to try to follow this crap. (No wonder the working-class guy has no more a chance of ever following this, than the productive, but oh, so challenged member of this newsgroup.) All my musings here, make me write two things: 1: A language has to *provide* an obvious (and followable) path from the ridiculously simple, to the ultimately esoteric, with no (or as few as possible) discontinuities in mind set. (But see [2].) 2: Any _categorical_ statement that contains an implicit assessment of the {audience's|programmer's|critic's} abilities, and their relevance to the level that a particular construct in a language demands, is inherently obsolete. An example is "configurable syntax is an idea that people just don't like".
 That being said, I have no idea whether or not Nemerle will be 
 successful. I just speculate it has an uphill battle to win.
We better make sure D doesn't climb the same hill. And damn sure that 
the potholes in the D uphill aren't larger than the disciples' stride.

---------

[1] Configurable syntax belongs to the same group as recursion. You can 
program for years without ever needing it (or understanding that you did 
need it, actually). And then you see it in a textbook, and all they used 
it for was Fibonacci. Gee, great, you think, and then you write your own 
Fibonacci, and then just forget about the whole thing. They simply don't 
(or didn't, at the time I read textbooks on programming) ever show its 
relevance in real-life everyday programming. (Yes, maybe at that time, 
hard disks with directory hierarchies were not common. (I used CP/M, 
couldn't afford UNIX. Neither did they.) Otherwise they'd all have had 
an example of summing the sizes of files in a directory tree. Which 
would at once have made the reader feel recursion actually has some 
relevance to their life.)

Configurable syntax isn't exactly what Walter makes his bread and butter 
on. Neither do I. So neither of us has any reason (today) to use it, or 
even imagine the benefits of it.

Having said that, I remember the time (years ago) when I suggested in 
this newsgroup that we should have a macro language above the regular 
programming syntax. I suggested it would be somewhat like Lisp, and it 
would then let us do some most amazing things, totally inconceivable to 
us at the time. I almost became the laughing stock of the crowd. (And 
those things were (mostly) much less amazing than what D templates do 
now. Except for a little configurable syntax stuff. :-) )

Similarly, I see configurable syntax as a way (somewhat orthogonal to 
templates, but not fully) of making it possible to *easily* write some 
kinds of programs that we really can't imagine right now.

As an example of how hard it is to get this stuff across, I'll admit to 
the following: in the old days, I used to imagine that Lisp is the 
all-powerful super language for _anything_ (except for bit twiddling and 
systems work), and that if you couldn't do something in Lisp, it was 
only because of your own limitations as a person. Then, one day, I read 
that they're implementing macros! I dropped from my chair! If there ever 
was a language I didn't expect to need macros, Lisp definitely was it. 
Then, during the next years I read articles that made me believe (I 
wouldn't be preposterous enough to state I /understood/) that they 
actually were needed, and that they advanced the applicability of the 
language significantly.

So, configurable syntax for D, I assume *will* be a reality. But I hope 
it won't be before 2.0 is official. :-( !!!!

And while I'm at it, I think that in D4 (not D3) we will have a total 
overhaul of the current template syntax. (!!!) Yes, it beats the 
G-string off C++, no question. But that is not enough. And, when you 
really look at it, the "ease" of D templates is an illusion. The "D like 
syntax", the "invisibility" of the template system, is only to appease 
newcomers, to piss off C++ folks, and to delude ourselves. Time will 
show that it will need to be more overtly Lisp like, and that actually a 
(sort of) difference in syntax to regular D is well motivated.

[2] This is in contrast to the "uphill potholes" I was talking about 
before. Why it's okay (IMHO) to have a slightly different syntax is, the 
mental leap from regular programming to templates is huge anyway. And it 
can't be made to go away either[3]. (Runtime versus compile time is 
already hard enough for many. 
Then we have "is my code producing binary code, or is it producing some other source code? Then what happens? Can that in turn produce yet more source code?? Am I forced to only use this Recursion Stuff all over the place?", etc.) So, then it would only be natural to have a slightly different syntax. That would also give us possiblilities beyond the current ones, to (first partly, and then more completely) implement a system of configurable syntax to D. In the same vein that D still implements an ASM language, there should be a (ehh...) "Higher Level" language at the other end. Precisely like D has never deluded itself of being efficient enough at the Very Low Level to really make ASM obsolete, it should not delude itself with notions of not needing a fundamental VHL (Very High Level) language at the other end. The current Template Language is excellent, beats anybody in sight, but implemented it, too. Or at least the more marketable details of it. But we have to maintain our lead. And that's done by advancing the bleeding edge so it stays just beyond their reach. [3] One can try to hide a difference by making something "look the same". But that only serves to disorient people. Do you remember that there were *two* mazes in DD. Way past the "xyzzy" graffiti, you ended up in a maze. I spent a whole week around there, before a girlfriend pointed out that "the corridors are alike" but yesterday "the corridors are all different". That's when I understood there were two mazes. No wonder I'd had a feeling of being totally clueless there. Microsoft has made a career of making stuff look alike when they're not, and non-alike when they are. It's poison to rational, intelligent users. But the idiots who just learn by rote memory what to do when, couldn't care less. Or understand the point of our grief. [4] How would you measure how many words you have in a language? The easiest way (back when there were books), was to have somebody else look up a random word in a random page of a dictionary, and ask you to explain it. Do that like 80 times, and then calculate the percentage you got right. Compare that to how many words the dictionary brags of having, usually right on the cover. (You do need a proper dictionary, with more than 50000 words, of course.)
May 22 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Jason House wrote:
 Georg Wrede Wrote:
 
 --- Disclaimer: this is a bit long. Read it later. ---
 Wow, you're right. Sadly, I stopped reading about 80% through
 (discussion on D4)
Yeah, one should not write opinion pieces when inspired...
 At a high level, I agree with your assessment about an easy growth
 path for newbies to gurus. I should also add that a language must
 make the code of gurus remain understandable for all but the newest
 of newbies. (Ever try to read through STL or a boost library?)
(Yeah, reading that kind of code makes one feel inferior to the gurus. 
And so very sad that the language makes even mid-level stuff entirely 
incomprehensible. Not to speak of the amounts of time it has to take 
coding it.)

Strictly speaking, there's no way for a language to make guru code 
understandable to "anti-gurus". However (and what I guess you were 
thinking of), when somebody uses the most advanced features in a 
language, then his stuff should not become incomprehensible to 
"anti-gurus" just because of the language itself.

What I mean is, if somebody does tensor algebra, it's pretty sure a high 
schooler won't get it, no matter what the language is. *But*, if someone 
calculates the area of a rectangle or a triangle using the most advanced 
language features, then the *language* should not make it 
incomprehensible to non-gurus, or even "anti-gurus".

(I have to admit, no language can guarantee this, but it definitely has 
to be the aim when designing advanced language features.)
 I also had an idea for a configurable syntax language that I almost
 started developing.
...
 My inspiration was seeing various boost libraries such as spirit and
 lambda. They seemed like the coolest things ever, but all they did
 was add new syntax for a common coding need.  I hadn't heard of LINQ
 but knew such a thing was feasible in my language...
Spirit and lambda have inspired lots of people. Me too. And LINQ is a cool thing too. When I originally wrote about configurable syntax, LINQ would have made an excellent example of its usefulness. Sadly, I didn't know about it at the time.
May 22 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Georg Wrede wrote:
 Jason House wrote:
 Georg Wrede Wrote:

 --- Disclaimer: this is a bit long. Read it later. ---
 Wow, you're right. Sadly, I stopped reading about 80% through
 (discussion on D4)
Yeah, one should not write opinion pieces when inspired...
 At a high level, I agree with your assessment about an easy growth
 path for newbies to gurus. I should also add that a language must
 make the code of gurus remain understandable for all but the newest
 of newbies. (Ever try to read through STL or a boost library?)
(Yeah, reading that kind of code makes one feel inferior to the gurus. 
And so very sad that the language makes even mid-level stuff entirely 
incomprehensible. Not to speak of the amounts of time it has to take 
coding it.)

Strictly speaking, there's no way for a language to make guru code 
understandable to "anti-gurus". However (and what I guess you were 
thinking of), when somebody uses the most advanced features in a 
language, then his stuff should not become incomprehensible to 
"anti-gurus" just because of the language itself.
Two possible language features should make complicated template code 
significantly simpler:

1. Allow inner name promotion even if the template defines other 
members, as long as they are all private:

now:

template WidgetImpl(A, B) {
     ...
     alias ... Result;
}

template Widget(A, B) {
     alias WidgetImpl!(A, B) Widget;
}

proposed:

template Widget(A, B) {
private:
     ...
     alias ... Result;
public:
     alias Widget Result;
}

It's needed very frequently, puts sand in the eye, and almost sure to 
throw off the casual reader.

2. Handle qualifiers and ref properly. Right now I use an abomination:

enum bool byRef = is(typeof(&(R.init.front()))); /// UGLY

/** Forwards to $(D _input.back). */
mixin(
     (byRef ? "ref " : "")~
     q{ElementType!(R) front() { return _input.back; } });

Andrei
May 22 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 1. Allow inner name promotion even if the template defines other
 members, as long as they are all private:

 now:

 template WidgetImpl(A, B) {
      ...
      alias ... Result;
 }

 template Widget(A, B) {
      alias WidgetImpl!(A, B) Widget;
 }

 proposed:

 template Widget(A, B) {
 private:
      ...
      alias ... Result;
 public:
      alias Widget Result;
 }

 It's needed very frequently, puts sand in the eye, and almost sure to
 throw off the casual reader.
The D1 documentation says:
Implicit Template Properties: If a template has exactly one member in it, and
the name of that member is the same as the template name, that member is
assumed to be referred to in a template instantiation:<
Two possible ideas:

1) If it contains a member with the name of the template (and other 
names), it can be referred to anyway with the name of the template:

template Foo(T) {
    T Foo;
    T[] Bar;
}

void test() {
    Foo!(int) = 6; // equal to Foo!(int).Foo anyway
}

This may lead to troubles I am not seeing yet.

----------------

2) Having private and public members as in your example solves the 
problem, but it may be overkill, because in many situations you have 
only one public name, plus some private ones. So a 'return' may be used:

template WidgetImpl(T) {
    T[] Bar;
    T Foo;
    return Foo;
}

template WidgetImpl(A, B) {
    ...
    return Result;
}

This doesn't allow you to hide names from the outside, but helps you 
avoid having a second template. Having a return also helps to see 
templates as closer to compile-time functions (changing few things, 
there may be ways to merge the syntax of templates with the syntax of 
compile-time functions, reducing the complexity of D).

Bye,
bearophile
May 22 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
bearophile:
 Having a return also helps to see templates as closer to compile-time
functions (changing few things there may be ways to merge the syntax of
templates with the syntax of compile time functions, reducing the complexity of
D).<
To write some templates as compile-time functions you may need to add a 
new type to D, named "type":

type foo(type T) { return T[]; }

That is the same as:

template Foo(T) {
    alias T[] Foo;
}

Bye,
bearophile
May 22 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
bearophile:
To write some templates as compile-time functions you may need to add a new
type to D, named "type":<
To see if the idea of compile-time functions that process compile-time 
variables of type "type" isn't fully crazy, I have translated some of 
the D1 templates of my dlibs into such compile-time functions. The 
following code shows a template and its possible translation. The code 
is of course untested and there can be many mistakes of mine.

// -------------------------------

private template AA_impl(KeyType, ValueType) {
    ValueType[KeyType] result;
    const ValueType[KeyType] res = result;
}

template AA(KeyType, ValueType) {
    const ValueType[KeyType] AA = AA_impl!(KeyType, ValueType).res;
}

TyVal[TyKey] newaa(type TyKey, type TyVal) {
    TyVal[TyKey] aa;
    return aa;
}

// -------------------------------

template AAKeyType(T) { alias typeof(T.keys[0]) AAKeyType; }
template AAValType(T) { alias typeof(T.values[0]) AAValType; }

type aaKeyType(TV, TK)(TV[TK] aa) { return TK; }
type aaValType(TV, TK)(TV[TK] aa) { return TV; }

type aaKeyType(TV, TK)(type TV[TK]) { return TK; }
type aaValType(TV, TK)(type TV[TK]) { return TV; }

// -------------------------------

template ALL(alias Predicate, Types...) {
    static if (Types.length == 0)
        const bool ALL = true;
    else
        const bool ALL = Predicate!(Types[0]) &&
                         ALL!(Predicate, Types[1 .. $]);
}

bool All(alias pred, type[] types...) {
    static foreach (T; types)
        static if (!pred(T))
            return false;
    return true;
}

ANY/Any are similar.

// -------------------------------

template BaseTypedef(T) {
    static if( is( T BaseType1 == typedef ) )
        alias BaseTypedef!(BaseType1) BaseTypedef;
    else
        alias T BaseTypedef;
}

type baseTypedef(type T) {
    // the syntax of is() may be improved in many ways
    static while ( is( T BaseType1 == typedef ) )
        T = BaseType1;
    return T;
}

A problem of the "static foreach" (that Walter will face) is that 
templates are functional, so their values are immutable. And Scheme 
shows that in a language with no mutability it's not much useful to have 
for/foreach loops. In that "static while" loop the variable T, of type 
type, is mutable.

// -------------------------------

template CastableTypes(Types...) {
    static if (Types.length <= 1)
        const CastableTypes = true;
    else
        const CastableTypes = is(typeof(Types[0] == Types[1])) &&
                              CastableTypes!(Types[1 .. $]);
}

bool areCastableTypes(type[] Types)(type... types) {
    static foreach (T; types[1 .. $])
        static if (!is(typeof(Types[0] == Types[1])))
            return false;
    return true;
}

// -------------------------------

template DeconstArrayType(T) {
    static if (IsStaticArray!(T))
        alias typeof(T[0])[] DeconstArrayType;
    else
        alias T DeconstArrayType;
}

(DeconstArrayType works if T isn't a static array, this is very useful 
in my code)

type deconstArrayType(type T) {
    static if (isStaticArray!(T))
        return typeof(T[0])[]; //return (T[0]).typeof[];
    else
        return T;
}

// -------------------------------

template IsType(T, Types...) {
    // Original idea by Burton Radons, modified
    static if (Types.length == 0)
        const bool IsType = false;
    else
        const bool IsType = is(T == Types[0]) ||
                            IsType!(T, Types[1 .. $]);
}

bool isType(T, type[] types...) {
    static foreach (TX; types)
        // now == works among run-time variables of type "type".
        static if (T == TX)
            return true;
    return false;
}

Or even simpler, if the compiler understands that isType is a 
compile-time-function-only:

bool isType(T, type[] types...) {
    foreach (TX; types)
        if (T == TX)
            return true;
    return false;
}

With Any() and a compile-time lambda you can write:

bool isType(T, type[] types...) {
    return Any((type TX){ return T == TX; }, types);
}

:-)

// -------------------------------

Bye,
bearophile
May 22 2009
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:gv6e4p$1m3f$1 digitalmars.com...
 bearophile:
 Having a return also helps to see templates as closer to compile-time 
 functions (changing few things there may be ways to merge the syntax of 
 templates with the syntax of compile time functions, reducing the 
 complexity of D).<
To write some templates as compile-time functions you may need to add a 
new type to D, named "type":

type foo(type T) { return T[]; }
I've been really wanting to see that in D.
May 22 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
bearophile Wrote:

 bearophile:
 Having a return also helps to see templates as closer to compile-time
functions (changing few things there may be ways to merge the syntax of
templates with the syntax of compile time functions, reducing the complexity of
D).<
To write some templates as compile-time functions you may need to add a 
new type to D, named "type":

type foo(type T) { return T[]; }

That is the same as:

template Foo(T) {
    alias T[] Foo;
}
I'd love to see compile-time functions returning types. IMHO, it's far 
superior to the template-based approach. There are several reasons for 
this:

1. The template scheme violates the DRY principle. It's like type 
constructors in C++, Java, etc... D was designed to be better than that!

2. Function calls make it far clearer what the intent is. Once templates 
are allowed to have more members, the true purpose will be obfuscated.

3. It opens up possibilities for better (compile-time) reflection.

That being said, it's possible for templates to define an instance with 
unknown type and value. A non-templated function returning either a type 
or value can't do that.
May 22 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Jason House:
 I'd love to see compile-time functions returning types.<
Glad to see my post appreciated :-)
IMHO, it's far superior to the template-based approach.<
Even if such compile-time functions can't fully replace templates (so 
templates can't be removed from the language), such functions may be a 
significant improvement anyway. I have shown some examples in another 
post. But for some of those functions you may enjoy having a "static 
while" too :-)

Probably you can even write things like:

auto foo(type T) {
    static if (T == int)
        return 1;
    else
        return T[];
}
it's possible for templates to define an instance with unknown type and value.
A non-templated function returning either a type or value can't do that.<
Can you explain better or show an example? Bye, bearophile
May 22 2009
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
A configurable syntax has some advantages:
- It makes the language more flexible. I like the idea of this a lot.
- It allows you to do things that are otherwise hard to do.
- It can be used to write shorter programs.
- It can be used to define a sub-language better fitted for a specific
job. (Ruby shows some of this, and CLisp shows a lot of this).

A configurable syntax has some disadvantages too (you need some experience to
understand the following points):
- Every program can look different. So you may need a lot of time to understand
code written by other people. This is bad if you have 50+ programmers working
on a project.
- The language gets more complex.
- Meta-programming is generally not easy to understand. So newbie programmers
will need more time to learn the language; the entry barrier to the language is
higher.
- It fragments the language community, creating more differences among
programs. This probably leads to a language that has far fewer
ready-made modules (example: in Python you can find a module for almost
everything. This sharing of code comes from the syntax being standard,
people using the same single CPython interpreter, and so on. In Python
even the coding style is codified; there is only one normal way of
writing code. This allows people to share code in a simpler way. And
having a rich community of re-usable modules is probably the most
important quality a modern language can have).
- Creating macros is sometimes like extending the language. But the discussions
on the D newsgroups and Python newsgroups have shown me that adding new good
syntax/semantics to a language is a very hard job. So most people that write
macros are going to add bad designs to the language. This is bad.

A silly solution is to have macros only in the standard library and in a
few authorized libraries :o] But this is impossible to enforce.

So in the end it may be better to keep finding special and localized
situations where the D language may enjoy becoming a bit more flexible
(generic infix operators? Merging the syntax of compile-time functions
with some templates? Etc.) than to add a generic configurable syntax
(macros) to D...

Bye,
bearophile
May 22 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Andrei Alexandrescu wrote:
 Yigal Chripun wrote:
 Nick Sabalausky wrote:
 I suppose that might make reverse-engineering easier which MS might 
 not like, but I'm not suggesting this as something that MS should 
 like or should even do, but rather suggesting it as (business issues 
 completely aside) something that would possibly gain the benefits of 
 both styles.
that's the exact opposite of a good solution. I already mentioned 
several times before the language Nemerle which provides the correct 
solution. important fact - Nemerle is a .NET language and it does _NOT_ 
need to modify the underlying system.

The way it works in Nemerle is pretty simple: the language has a syntax 
to compose/decompose AST. a Macro in Nemerle is just a plain old 
function that uses the same syntax you'd use at run-time, and this 
"function" can use APIs to access the compiler's internal data 
structures (the AST) and manipulate it. you "connect" it to your regular 
code by either just calling it like a regular function or by using 
attributes.

let's compare to see the benefits:

in D:
tango.io.Stdout("Hello World").newline; // prints at run-time
pragma(msg, "Hello World"); // prints at compile-time

in Nemerle:
macro m () {
  Nemerle.IO.printf ("compile-time\n");
  <[ Nemerle.IO.printf ("run-time\n") ]>;
}
// and you call it like this:
m();
Nemerle.IO.printf ("run-time\n");

notice how both use the same code, the same printf function? the only 
change is that the second line inside the macro is enclosed inside 
<[ ]>, which means output (return) the AST for this code instead of 
actually running the code and returning the result of the call.

Macros in Nemerle need to be compiled since they are regular Nemerle 
code, and they need to be loaded by the compiler (added to the command 
line) in order to compile the code that calls the macros. essentially 
these are just plugins for the compiler.

compared to the elegance of this solution, templates are just a crude 
copy-paste mechanism implemented inside the compiler.
Nemerle's interesting, but it has its own issues. The largest one is 
that it will have to beat history: languages with configurable syntax 
have failed in droves in the 1970s.

Before I got into D, I was working on Enki. Enki was my own programming 
language and of course made D look like a piece of crap. In Enki, you 
had only very few primitives related to macro expansion, and you could 
construct all language elements - if, while, for, structures, classes, 
exceptions, you name it - from those primitive elements.

There were two elements that convinced me to quit Enki. One was that I'd 
got word of a language called IMP72. IMP72 embedded the very same ideas 
Enki had, with two exceptions: (1) it was created in 1972, and (2) 
nobody gave a damn ever since. IMP72 (and there were others too around 
that time) started with essentially one primitive and then generated 
itself with a bootstrap routine, notion that completely wowed me and I 
erroneously thought would have the world wowed too.

The second reason was that I've had many coffees and some beers with 
Walter and he convinced me that configurable syntax is an idea that 
people just don't like. Thinking a bit more, I realized that humans 
don't operate well with configurable syntax. To use the hackneyed 
comparison, no natural language or similar concoction has configurable 
syntax. Not even musical notation or whatnot. There's one syntax for 
every human language. I speculated that humans can learn one syntax for 
a language and then wire their brains to just pattern match semantics 
using it. Configurable syntax just messes with that approach, and 
besides makes any program hugely context-dependent and consequently any 
large program a pile of crap.

That being said, I have no idea whether or not Nemerle will be 
successful. I just speculate it has an uphill battle to win.

Andrei
I didn't talk about configurable syntax at all above. Yes, Nemerle has 
this feature as part of their Macro system, but that's just one rather 
small aspect of the system that can be removed. I also don't see the 
bigger problem with it as you describe, since this is limited in Nemerle 
to specific things.

I'm also unsure as to what you define as syntax and what you define as 
semantics. for example, Smalltalk has only 5 keywords and it's 
implemented entirely as a collection of libraries. "if" "else" "switch" 
"while" are all implemented as methods of objects and therefore are 
configurable. Is that also wrong in your opinion?

I agree with dsimcha - the language needs to provide simple intuitive 
defaults. that's why I think LISP didn't succeed: it is very powerful, 
but the need to write your own macro just so you can say "4 + 5" instead 
of (+ 4 5) shows this point.

I think Nemerle provides this - the constructs in Nemerle for the Macro 
system are very simple and intuitive. you only have one extra syntax 
feature, the <[ ]>. think of D's CTFE only much more extended in scope - 
you write a CTFE function and compile it. (that's the Nemerle Macro 
equivalent) then you just call it in a different file and the compiler 
will execute this function at compile time. Nemerle does not need an 
interpreter for this since these functions are compiled just like the 
rest of the code. Nemerle also provides compiler APIs so these functions 
could work on AST to add/alter/remove types and other constructs.

Last thing, basing your arguments on history is flawed. the Micro-Kernel 
idea got the same treatment after the failures in the 80's (Mach and 
co.), but nowadays this idea was revived and there are already several 
million cellphones that run an OS built on the L4 micro-kernel, so it's 
even a commercial success.
May 21 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Yigal Chripun wrote:
 Last thing, basing your arguments on history is flawed. the Micro-Kernel 
 idea got the same treatment after the failures in the 80's (Mach and 
 co.) but nowadays this idea was revived and there are already several 
 million cellphones that run an OS built on the L4 micro-kernel so it's 
 even a commercial success.
Our industry goes in cycles, and I've been around to see a few (multitasking, microkernels, memory paging, virtual machine monitors, client-server...) That is because various tradeoffs in hardware have subtly changed with time. The human factor, however, has stayed the same. Andrei
May 21 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Yigal Chripun" <yigal100 gmail.com> wrote in message 
news:gv5dpn$2oe9$1 digitalmars.com...
 I think Nemerle provides this - the constructs in Nemerle for the Macro 
 system are very simple and intuitive. you only have one extra syntax 
 feature, the <[ ]>. think of D's CTFE only much more extended in scope - 
 you write a CTFE function and compile it. (that's the Nemerle Macro 
 equivalent) then you just call it in a different file and the compiler 
 will execute this function at compile time.
 Nemerle does not need an interpreter for this since these functions are 
 compiled just like the rest of the code. Nemerle also provides compiler 
 APIs so these functions could work on AST to add/alter/remove types and 
 other constructs.
As I recall, we got onto this subject from discussing ways to combine 
the power of D/C++-style templates with the cross-[assembly/object] 
benefits of .NET generics. It sounds like Nemerle's macro system is like 
a better form of D's CTFE. But I'm unclear on how Nemerle's macro system 
relates to the problem of achieving the best of both worlds.
May 21 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 Suppose (purely hypothetically) that the .NET assembly system 
 were changed to allow the source for a D/C++ style of source-level template 
 to be embedded into the assembly. Then they'd be able to do D/C++ style 
 source-level template/code-generation. Right?
I assume, actually presume, that would take the better part of a decade.
 Now obviously the big problem with that is it would only be usable in
 the same language it was originally written in.
That actually depends. Done the M$ way, it would. Done properly, it would work in any language. But then, that would mean a rewrite of the entire CLR, wouldn't it?
 I suppose that might make reverse-engineering easier  [...]
I don't think that's got anything to do with it. At least not if they'd 
really do it below the language/CLR boundary.
May 21 2009
parent "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gv4t8t$1r49$1 digitalmars.com...
 Nick Sabalausky wrote:
 Suppose (purely hypothetically) that the .NET assembly system were 
 changed to allow the source for a D/C++ style of source-level template to 
 be embedded into the assembly. Then they'd be able to do D/C++ style 
 source-level template/code-generation. Right?
I assume, actually presume, that would take the better part of a decade.
 Now obviously the big problem with that is it would only be usable in
 the same language it was originally written in.
That actually depends. Done the M$ way, it would. Done properly, it would work in any language. But then, that would mean a rewrite of the entire CLR, wouldn't it?
Actually, I only intended that part of it as a lead-in to my idea about templates/generics working on an AST-level instead of source-level (D/C++
May 21 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Yigal Chripun (yigal100 gmail.com)'s article
 Andrei Alexandrescu wrote:
 Lutger wrote:
 Andrei Alexandrescu wrote:

 ...
 What the heck do you need generics for when you have real
 templates?  To me,
 generics seem like just a lame excuse for templates.
I agree. Then, templates aren't easy to implement and they were understandably already busy implementing the using statement. Andrei
While I don't fully understand how generics work under the hood in .NET, 
there are some benefits to how it is done. For example, you can use 
runtime reflection on generic types. And the jit compiler instantiates 
them at runtime. They may serve a different purpose than templates:

"Anders Hejlsberg: To me the best way to understand the distinction 
between C# generics and C++ templates is this: C# generics are really 
just like classes, except they have a type parameter. C++ templates are 
really just like macros, except they look like classes."

It seems that lack of structural typing is seen as a feature:

"When you think about it, constraints are a pattern matching mechanism. 
You want to be able to say, "This type parameter must have a constructor 
that takes two arguments, implement operator+, have this static method, 
has these two instance methods, etc." The question is, how complicated 
do you want this pattern matching mechanism to be? There's a whole 
continuum from nothing to grand pattern matching. We think it's too 
little to say nothing, and the grand pattern matching becomes very 
complicated, so we're in-between."

From: http://www.artima.com/intv/genericsP.html
Oh, so Wal^H^H^Ha friend of mine I was talking to was right: there's 
some missing of the point going on. The code generation aspect of 
templates is a blind spot of the size of Canada.

Andrei
I think you miss the point here. Generics and code generation are two 
separate and orthogonal features that were conflated together by C++. 
while you can do powerful stuff with templates, it smells of trying to 
write Haskell code with the C pre-processor. if you want to see a clean 
solution to this issue look at Nemerle. essentially, their AST Macro 
system provides multi-level compilation.

c++ templates are a horrible hack designed to wean C programmers off 
using the pre-processor, and the D templates provide mostly cosmetic 
changes to this. they do not solve the bigger design issue.
Not sure I agree. C++ templates were probably intended to be something like generics initially and became Turing-complete almost by accident. To get Turing completeness in C++ templates requires severe abuse of features and spaghetti code writing. D extends templates so that they're actually *designed* for metaprogramming, not just an implementation of generics, thus solving C++'s design problem. Mixins (to really allow code generation), CTFE (to make it easier to generate code), static if (to avoid kludges like using specialization just to get branching) and tuples (to handle variadics) make D templates useful for metaprogramming without massive kludges.
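To make that concrete, here's a throwaway sketch (D2 syntax, made-up 
names, untested):

// CTFE: an ordinary function, evaluated below at compile time.
string makeGetter(string type, string field) {
    return type ~ " get_" ~ field ~ "() { return this." ~ field ~ "; }";
}

struct Point {
    int x;
    mixin(makeGetter("int", "x")); // mixin: paste the generated code
}

// static if + tuples: compile-time branching in a variadic template.
auto sum(T...)(T args) {
    static if (T.length == 0)
        return 0;
    else static if (T.length == 1)
        return args[0];
    else
        return args[0] + sum(args[1 .. $]);
}

static assert(sum(1, 2, 3) == 6); // the whole thing also runs in CTFE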
May 20 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 Not sure I agree.  C++ templates were probably intended to be something like
 generics initially and became Turing-complete almost by accident.
That is factually correct. It was quite a hubbub on the C++ standardization committee when Erwin Unruh wrote a C++ program that wrote prime numbers in error messages. See http://tinyurl.com/oqk6nl Andrei
May 20 2009
prev sibling parent BCS <ao pathlink.com> writes:
Reply to Yigal,

 D templates provide mostly cosmetic changes to this.
 
If you think D's templates are C++'s templates with a few "cosmetic 
changes" then you aren't paying attention. A few cosmetic changes aren't 
going to allow 1.4MB of C++ header files to be anywhere near duplicated 
in 2000 LOC (Boost spirit vs dparse)
May 20 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello dsimcha,

 What the heck do you need generics for when you have real templates?
 To me, generics seem like just a lame excuse for templates.
 
smaller object code? OTOH a good implementation will notice when it can fold 
together several template expansions
May 19 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from BCS (none anon.com)'s article
 Hello dsimcha,
 What the heck do you need generics for when you have real templates?
 To me, generics seem like just a lame excuse for templates.
 smaller object code? OTOH a good implementation will notice when it can fold together several template expansions
I understand that object file bloat with templates is at least a 
theoretical concern, but come on.  For most programs, at least most of the 
ones I write, most of the memory consumption is data, and code is only a 
tiny fraction.  Does anyone have a real world use case where object file 
bloat due to templates was a significant problem *and* you weren't working 
w/ an embedded system where you wouldn't want to use them anyhow?
May 19 2009
parent BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from BCS (none anon.com)'s article
 
 Hello dsimcha,
 
 What the heck do you need generics for when you have real templates?
 To me, generics seem like just a lame excuse for templates.
 
smaller object code? OTOH a good implementation will notice when it can fold together several template expansions
I understand that object file bloat with templates is at least a theoretical concern, but come on. For most programs, at least most of the ones I write, most of the memory consumption is data, and code is only a tiny fraction. Does anyone have a real world use case where object file bloat due to templates was a significant problem *and* you weren't working w/ an embedded system where you wouldn't want to use them anyhow?
It's not just file size, it can also cause memory and cache pressure. But to answer your question, I don't remember exactly how big the object files were but I've had DMD use up 700+ MB of ram. I /think/ it was kicking out 10+MB object files. That was running dparse with a 200 rule grammar. OTOH generics won't help with that.
May 19 2009
prev sibling parent reply Kagamin <spam here.lot> writes:
BCS Wrote:

 smaller object code? OTOH a good implementation will notice when it can fold 
 together several template expansions
That's the difference. You can't fold templates because they're binary incompatible as opposed to generics.
May 20 2009
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Kagamin wrote:
 BCS Wrote:
 
 smaller object code? OTOH a good implementation will notice when it can fold 
 together several template expansions
That's the difference. You can't fold templates because they're binary incompatible as opposed to generics.
They're not always binary-incompatible. For instance, if a template only works with pointers or references (this includes object references) to parameter types it might well contain the exact same machine code for some of the instantiations. A compiler backend or linker could recognize these cases and use a single instantiation's machine code for them. (Essentially, these are pretty much the same cases where generics would have been sufficient)
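A trivial hand-made illustration (no promise that any current compiler 
actually folds it):

// identity!(int) and identity!(short) just pass a pointer through;
// their machine code is identical, so a backend/linker could keep one
// copy and point both instantiations at it.
T* identity(T)(T* p) { return p; }

void main() {
    int i;
    short s;
    auto pi = identity(&i); // identity!(int)
    auto ps = identity(&s); // identity!(short)
}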
May 20 2009
parent reply Kagamin <spam here.lot> writes:
Frits van Bommel Wrote:

 That's the difference. You can't fold templates because they're binary
incompatible as opposed to generics.
They're not always binary-incompatible. For instance, if a template only works with pointers or references (this includes object references) to parameter types it might well contain the exact same machine code for some of the instantiations.
If you require that the class inherit some interface and you call that interface's methods, they'll be incompatible. I dare say this is the most useful variant of generic code.
May 20 2009
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Kagamin wrote:
 Frits van Bommel Wrote:
 
 That's the difference. You can't fold templates because they're binary
 incompatible as opposed to generics.
They're not always binary-incompatible. For instance, if a template only works with pointers or references (this includes object references) to parameter types it might well contain the exact same machine code for some of the instantiations.
 If you require that the class inherit some interface and you call that interface's methods, they'll be incompatible. I dare say this is the most useful variant of generic code.
Okay, so it doesn't (usually) work for interfaces, but it'll work if the requirement is for a common base class. Or perhaps even if they happen to have a common base class that implements the interface in question.
May 20 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 20 May 2009 13:09:37 +0400, Kagamin <spam here.lot> wrote:

 BCS Wrote:

 smaller object code? OTOH a good implementation will notice when it can  
 fold
 together several template expansions
 That's the difference. You can't fold templates because they're binary incompatible as opposed to generics.
You can fold /some/ templates. I believe LLVM already does merging of identical functions (including templates, virtual functions etc) as a part of optimization process. Not sure about LDC, though.
May 20 2009
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Denis Koroskin wrote:
 On Wed, 20 May 2009 13:09:37 +0400, Kagamin <spam here.lot> wrote:
 
 BCS Wrote:

 smaller object code? OTOH a good implementation will notice when it can  
 fold
 together several template expansions
 That's the difference. You can't fold templates because they're binary incompatible as opposed to generics.
You can fold /some/ templates. I believe LLVM already does merging of identical functions (including templates, virtual functions etc) as a part of optimization process. Not sure about LDC, though.
LLVM has a function merging pass, but LDC doesn't run it by default at any optimization level. (You can pass -mergefunc to run it explicitly, as with any LLVM pass) It has some limitations though. Since it runs on IR, it matters what LLVM type values have. That means it might merge Templ!(int) and Templ!(uint) since int and uint are both an i32 to LLVM, but it normally wouldn't merge Templ!(int*) and Templ(short*) even if the template compiles down to "return cast(T) somefunc(cast(void*) arg);" because the types are still different (i32* vs i16*). To do the latter transformation, the pass would need to be reimplemented to run when the code is closer to machine code.
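A sketch of the distinction, with hypothetical templates:

void* somefunc(void* p) { return p; }

// wrap!(int) is i32* -> i32* and wrap!(short) is i16* -> i16* at the
// IR level, so the IR pass sees different types and won't fold them,
// even though the machine code ends up the same.
T* wrap(T)(T* p) { return cast(T*) somefunc(cast(void*) p); }

// add2!(int) and add2!(uint) are both i32 -> i32 in IR, so these two
// are candidates for -mergefunc.
T add2(T)(T x) { return cast(T)(x + x); }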
May 20 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Frits van Bommel:
 To do the latter transformation, the pass would need to be reimplemented to
run 
 when the code is closer to machine code.
Can't this feature be requested from the LLVM developers?

Bye,
bearophile
May 20 2009
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
bearophile wrote:
 Frits van Bommel:
 To do the latter transformation, the pass would need to be reimplemented to
run 
 when the code is closer to machine code.
 Can't this feature be requested from the LLVM developers?
Sure, feel free to file a feature request: http://llvm.org/bugs/enter_bug.cgi?product=new-bugs
May 20 2009
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
dsimcha escribió:
 == Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 appreciate a few pointers. Or references. Or delegates :o).
good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse - Which come to think of it, makes me wonder - If Java has gotten so fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
Yesterday doob reported a bug in Descent saying "when you compile your 
project and it references a user library that has errors, when you click 
on the console to jump to the error, it doesn't work". I said to him: I 
never thought a user library could have errors! How did this happen to 
you? He replied: "I found a bug in a template in Tango".

That's why generics don't suck: if there's something wrong in them, the 
compiler tells you at compile time. In D, you get the errors only when 
instantiating that template.

Generics might not be as powerful as templates, but once you write one 
that compiles, you know you will always be able to instantiate it.
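A tiny made-up example of the difference (D2, just a sketch):

// This declaration compiles on its own: the body is only fully
// checked when the template is instantiated.
T half(T)(T x) {
    return x / 2; // silently assumes T supports division
}

void main() {
    auto a = half(10);     // fine, T = int
    auto b = half("oops"); // the error shows up only here, at the call
}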
May 20 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Ary Borenszweig:
 That's why generics don't suck: if there's something wrong in them, 
 the compiler tells you at compile time. In D, you get the errors only 
 when instantiating that template.
It's just like in dynamic languages, you need to unittest them a lot :-) 
So having a "static throws()" to assert that a template isn't working is 
very useful.
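In D2 something close can already be faked with __traits(compiles) (a 
sketch, untested):

template Half(T) {
    alias typeof(T.init / 2) Half; // only meaningful for numeric T
}

// A poor man's "static throws": assert that an instantiation
// must NOT compile.
static assert(!__traits(compiles, Half!(string)));
static assert( __traits(compiles, Half!(int)));

Bye,
bearophile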
May 20 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Ary Borenszweig (ary esperanto.org.ar)'s article
 dsimcha escribió:
 == Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 appreciate a few pointers. Or references. Or delegates :o).
good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse - Which come to think of it, makes me wonder - If Java has gotten so fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
Yesterday doob reported a bug in Descent saying "when you compile your project and it references a user library that has errors, when you click on the console to jump to the error, it doesn't work". I said to him: I never thought a user library could have errors! How did this happen to you? He replied: "I found a bug in a template in Tango". That's why generics doesn't suck: if there's something wrong in them, the compiler tells you in compile-time. In D, you get the errors only when instantiating that template. Generics might not be as powerful as templates, but once you write one that compiles, you know you will always be able to instantiate it.
Yes, but there are a few flaws in this argument:

1.  If you are only using templates like generics, you simply use a unit 
test to see if it compiles.  If you're not doing anything fancy and it 
compiles for one or two types, it will probably compile for everything 
that you would reasonably expect it to.

2.  If you're doing something fancier, like metaprogramming, you have to 
just face the fact that this is non-trivial, and couldn't be done with 
generics anyhow.

3.  As Bearophile alluded to, templates are really a clever hack to give 
you the flexibility of a dynamic language with the performance and 
compile time checking of a static language.  This is done by moving the 
dynamism to instantiation time.  Therefore, whereas in a dynamic 
language you pay at runtime in terms of the "here be monsters, this code 
may not be being used as the author intended and tested it", with 
templates you pay at instantiation time.  However, IMHO this is orders 
of magnitude better than not having that flexibility at all.  I 
personally can't figure out how people accomplish anything in static 
languages w/o templates.  It's just too inflexible.
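For point 1, even something as dumb as this catches most breakage (a 
sketch):

T min2(T)(T a, T b) { return b < a ? b : a; }

unittest {
    // Smoke test: if these instantiate, the template is probably fine
    // for any type supporting '<'.
    assert(min2(2, 1) == 1);
    assert(min2(2.5, 1.5) == 1.5);
    assert(min2("b", "a") == "a");
}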
May 20 2009
parent Jacob Carlborg <doob me.com> writes:
dsimcha wrote:
 == Quote from Ary Borenszweig (ary esperanto.org.ar)'s article
 dsimcha escribió:
 == Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 appreciate a few pointers. Or references. Or delegates :o).
good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse - Which come to think of it, makes me wonder - If Java has gotten so fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
Yesterday doob reported a bug in Descent saying "when you compile your project and it references a user library that has errors, when you click on the console to jump to the error, it doesn't work". I said to him: I never thought a user library could have errors! How did this happen to you? He replied: "I found a bug in a template in Tango". That's why generics doesn't suck: if there's something wrong in them, the compiler tells you in compile-time. In D, you get the errors only when instantiating that template. Generics might not be as powerful as templates, but once you write one that compiles, you know you will always be able to instantiate it.
 Yes, but there are a few flaws in this argument: 1. If you are only using templates like generics, you simply use a unit test to see if it compiles. If you're not doing anything fancy and it compiles for one or two types, it will probably compile for everything that you would reasonably expect it to.
I used tango.text.xml.Document with wchar and dchar as the template type and in tango.text.xml.PullParser there were some functions that took char[] instead of T[] as the argument. http://www.dsource.org/projects/tango/ticket/1663
 2.  If you're doing something fancier, like metaprogramming, you have to just
face
 the fact that this is non-trivial, and couldn't be done with generics anyhow.
 
 3.  As Bearophile alluded to, templates are really a clever hack to give you
the
 flexibility of a dynamic language with the performance and compile time
checking
 of a static language.  This is done by moving the dynamism to instantiation
time.
  Therefore, whereas in a dynamic language you pay at runtime in terms of the
"here
 be monsters, this code may not be being used as the author intended and tested
 it", with templates you pay at instantiation time.  However, IMHO this is
orders
 of magnitude better than not having that flexibility at all.  I personally
can't
 figure out how people accomplish anything in static languages w/o templates. 
It's
 just too inflexible.
May 20 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
dsimcha wrote:
 == Quote from Christopher Wright (dhasenan gmail.com)'s article
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:gus0lu$1smj$2 digitalmars.com...


 appreciate a few pointers. Or references. Or delegates :o).
good tools, although the newer versions of VS are almost as much of a bloated unresponsive mess as Eclipse - Which come to think of it, makes me wonder - If Java has gotten so fast as many people claim, why is Eclipse still such a sluggish POS?).
Generics and reflection. Generics just hide a lot of casts, usually, but that's still quite useful. And autoboxing is convenient, though not appropriate for D.
What the heck do you need generics for when you have real templates? To me, generics seem like just a lame excuse for templates.
Put a template in an interface. Use reflection to instantiate a template.
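In D terms, a rough, untested sketch of why the first one doesn't fly:

import std.conv : to;

interface Printer {
    void print(string s); // a normal, virtual method

    // A template member is allowed, but it is compile-time only and
    // non-virtual: one shared body for all implementors, no per-class
    // override. Generics + reflection can dispatch this at run time.
    final void printAll(T)(T[] items) {
        foreach (item; items)
            print(to!string(item));
    }
}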
May 20 2009
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Andrei Alexandrescu wrote:

...

 appreciate a few pointers. Or references. Or delegates :o).
It's not in the language. Why it succeeds: besides many smaller 
improvements, it provides delegates / events, lambdas and true generics 
compared to Java. Compared to C++ it provides, well, lots of the same 
stuff that makes people prefer Java over C++. As for a cooler language: 
I suspect that is too alien for most people. You don't program in a 
language, but in an ecosystem where the language is just a part of it 
alongside the framework, toolchain and documentation.
May 19 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
davidl wrote:


Mono has AOT compilation. On the other hand, I've experienced more ICE per minute with Mono than with dmd. This is partially due to dmd's near monoculture and .NET's huge majority market share -- people code around dmdfe bugs, but not around Mono bugs.
May 18 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 In the Good Old Days (when it was usual for an average programmer to 
 write parts of the code in ASM (that was the time before the late 
 eighties -- be it Basic, Pascal, or even C, some parts had to be done in 
 ASM to help a bearable user experience when the mainframes had less 
 power than today's MP3 players), the ASM programing was very different 
 on, say, Zilog, MOS, or Motorola processors. The rumor was that the 6502 
 was made for hand coded ASM, whereas the 8088 was more geared towards 
 automatic code generation (as in C compilers, etc.). My experiences of 
 both certainly seemed to support this.
The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit processors were a terrible fit for C, which was designed for 16 bit CPUs. Everyone who coded professional apps for the 6502, 6800, 8080 and Z80 (all 8 bit CPUs) wrote in assembler. (Including myself.)
 If we were smart with D, we'd find out a way of leapfrogging this 

 or C++, more practical than Haskell, Scheme, Ruby, &co, and more 
 maintainable than C or Perl, but which *still* is Human Writable. All we 
 need is some outside-of-the-box thinking, and we might reap some 
 overwhelming advantages when we combine *this* language with the IDEs 
 and the horsepower that the modern drone takes for granted.
 
 Easier parsing, CTFE, actually usable templates, practical mixins, pure 
 functions, safe code, you name it! We have all the bits and pieces to 
 really make writing + IDE assisted program authoring, a superior reality.
Right, but I can't think of any IDE feature that would be a bad fit for using the filesystem to store the D source modules.
May 19 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit 
 processors were a terrible fit for C, which was designed for 16 bit 
 CPUs. Everyone who coded professional apps for the 6502, 6800, 8080 and 
 Z80 (all 8 bit CPUs) wrote in assembler. (Including myself.)
Forth interpreters can be very small, it's a very flexible language, you 
can metaprogram it almost as Lisp, and if implemented well it can be 
efficient (surely more than interpreter Basic, but less than handwritten 
asm. You can have an optimizing Forth in probably less than 4-5 KB).

But people were waiting/asking for the Basic language, most people 
didn't know Forth, Basic was common in schools, so Basic was the 
language shipped inside the machine, instead of Forth:
http://www.npsnet.com/danf/cbm/languages.html#FORTH

The Commodore 64 with built-in Forth instead of Basic may have driven 
computer science in a quite different direction.

Do you agree?

Bye,
bearophile
May 19 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Forth interpreters can be very small, it's a very flexible language,
 you can metaprogram it almost as Lisp, and if implemented well it can
 be efficient (surely more than interpreter Basic, but less than
 handwritten asm. You can have an optimizing Forth in probably less
 than 4-5 KB).
 
 But people were waiting/asking for the Basic language, most people
 didn't know Forth, Basic was common in schools, so Basic was the
 language shipped inside the machine, instead of Forth: 
 http://www.npsnet.com/danf/cbm/languages.html#FORTH
 
 The Commodore 64 with built-in Forth instead of Basic may have driven
 computer science in a quite different direction.
 
 Do you agree?
I remember lots of talk about Forth, and nobody using it.
May 19 2009
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 19 May 2009 16:09:54 -0700, Walter Bright wrote:

 bearophile wrote:
 Forth interpreters can be very small, it's a very flexible language,
 you can metaprogram it almost as Lisp, and if implemented well it can
 be efficient (surely more than interpreter Basic, but less than
 handwritten asm. You can have an optimizing Forth in probably less
 than 4-5 KB).
 
 But people were waiting/asking for the Basic language, most people
 didn't know Forth, Basic was common in schools, so Basic was the
 language shipped inside the machine, instead of Forth: 
 http://www.npsnet.com/danf/cbm/languages.html#FORTH
 
 The Commodore 64 with built-in Forth instead of Basic may have driven
 computer science in a quite different direction.
 
 Do you agree?
I remember lots of talk about Forth, and nobody using it.
It can quickly degenerate into a write-only language because it 
encourages one to extend the syntax, and even semantics, of the 
language. It takes extreme discipline to make a Forth program 
maintainable by anyone other than the original author.

The other difficulty with it is that most people don't use Reverse 
Polish Notation often enough for it to become second nature, thus making 
it hard for people to read a Forth program and 'see' what it's trying to 
do.

However, it has its own elegance and simplicity that can be very 
alluring. I see it as the Circe of programming languages.
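For those who never tried RPN: "(2 + 3) * 4" becomes "2 3 + 4 *". A toy 
evaluator (in D, a sketch handling only + and *) shows the stack 
juggling a Forth reader must run in his head:

import std.array : split;
import std.conv : to;

int rpn(string expr) {
    int[] stack;
    foreach (tok; expr.split()) {
        switch (tok) {
            case "+": stack[$-2] += stack[$-1]; stack.length -= 1; break;
            case "*": stack[$-2] *= stack[$-1]; stack.length -= 1; break;
            default:  stack ~= to!int(tok); // operands are pushed
        }
    }
    return stack[$-1]; // rpn("2 3 + 4 *") == 20
}

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell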
May 19 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Not even this book cover could save Forth!

http://www.globalnerdy.com/2007/09/14/reimagining-programming-book-covers/
May 19 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Not even this book cover could save Forth!
 
 http://www.globalnerdy.com/2007/09/14/reimagining-programming-book-covers/
Hehe... And of course the Ruby book has the obligatory distasteful 
sexual reference. Only today I was reading another book on Rails, and 
within the third page I got the notion that good website development is 
like good porn: you know it when you see it. Yeah, you've apparently 
seen too much of it. Get a date. :o/

I'm all for sexual jokes, but give me a break with "the lucky stiff". 
The subtler the better. I made one such joke in a talk at ACCU, and it 
took people 30 seconds to even suspect it. (Walter of course got it in a 
femtosecond.)

Andrei
May 19 2009
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 20/05/2009 02:12, Walter Bright wrote:
 Not even this book cover could save Forth!

 http://www.globalnerdy.com/2007/09/14/reimagining-programming-book-covers/
Ah, Julie Bell and Boris Vallejo, one (well, two) of my favorite fantasy artists; they're pretty awesome.

-- 
Bruno Medeiros - Software Engineer
Jul 27 2010
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
bearophile wrote:
 Walter Bright:
 The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit 
 processors were a terrible fit for C, which was designed for 16 bit
  CPUs. Everyone who coded professional apps for the 6502, 6800,
 8080 and Z80 (all 8 bit CPUs) wrote in assembler. (Including
 myself.)
Forth interpreters can be very small; it's a very flexible language, you can metaprogram it almost like Lisp, and if implemented well it can be efficient (surely more than interpreted Basic, though less than handwritten asm; you can have an optimizing Forth in probably less than 4-5 KB).

But people were waiting/asking for Basic: most people didn't know Forth, Basic was common in schools, so Basic was the language shipped inside the machine instead of Forth:
http://www.npsnet.com/danf/cbm/languages.html#FORTH

A Commodore 64 with built-in Forth instead of Basic might have driven computer science in quite a different direction.

Do you agree?
Forth isn't exactly user friendly, whereas any housewife could at least pretend to almost understand some of Daddy's Basic code. :-) A C-64 with Forth: no market.

An example is HP. They made a super cool calculator (almost a computer) that was Forth programmable. Few bought it, and even fewer bothered to program it in Forth. Even the 28S that I've got has a Forth dialect, but most people only did algebra stuff on it.

Bootstrapping a new system, that's where Forth really shines. But today's programmers are so set in the collective mindset of the popular programming languages that the change of perspective Forth requires will feel too tedious. People become lazy.

But had the C-64 come with Simon's Basic (which was an expensive add-on that very few bought, mostly because they didn't understand its vast difference from "regular" Basic), fewer people would have quit programming at the 1000-line mark. They just thought it was hard, or that they weren't smart enough.

---------

Heck, I just remembered, I've got a Forth cartridge for the VIC-20 somewhere! Oh, maybe some rainy day I'll do some time travel!! :-)
May 19 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 In the Good Old Days, when it was usual for an average programmer to 
 write parts of the code in ASM (that was the time before the late 
 eighties -- be it Basic, Pascal, or even C, some parts had to be done 
 in ASM to allow a bearable user experience when the mainframes had less 
 power than today's MP3 players), the ASM programming was very different 
 on, say, Zilog, MOS, or Motorola processors. The rumor was that the 
 6502 was made for hand-coded ASM, whereas the 8088 was more geared 
 towards automatic code generation (as in C compilers, etc.). My 
 experiences of both certainly seemed to support this.
The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit processors were a terrible fit for C, which was designed for 16 bit CPUs. Everyone who coded professional apps for the 6502, 6800, 8080 and Z80 (all 8 bit CPUs) wrote in assembler. (Including myself.)
Sloppy me, 8080 was what I meant, instead of the 8088. My bad. And you're right about ASM coding. But over here, with smaller software companies, stuff was done with S-Basic (does anyone even know that one anymore???), C-Basic, and Turbo Pascal. Ron Cain's SmallC wasn't really up to anything serious, and C wasn't all that well known around here then. But Turbo Pascal was already at 3.0 in 1985, and a good investment, because using it was the same on the pre-PC computers and the then-new IBM-PC.
 If we were smart with D, we'd find out a way of leapfrogging this 

 or C++, more practical than Haskell, Scheme, Ruby, &co, and more 
 maintainable than C or Perl, but which *still* is Human Writable. All 
 we need is some outside-of-the-box thinking, and we might reap some 
 overwhelming advantages when we combine *this* language with the IDEs 
 and the horsepower that the modern drone takes for granted.

 Easier parsing, CTFE, actually usable templates, practical mixins, 
 pure functions, safe code, you name it! We have all the bits and 
 pieces to really make writing + IDE-assisted program authoring a 
 superior reality.
Right, but I can't think of any IDE feature that would be a bad fit for using the filesystem to store the D source modules.
I remember writing something about it here, like 7 years ago. But today there are others who have newer opinions about it. I haven't thought about it since then. I wonder how a seasoned template author would describe what the most welcome help would be when writing serious templates?
May 19 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Georg,

 I wonder how a seasoned template author would describe what the most
 welcome help would be when writing serious templates?
 
"Breakpoint debugging" of template explanation. Pick a template, feed it values and see (as in syntax highlighting and foreach unrolling) what happens. Pick an invoked template and dive in. Real breakpoint debugging of CTFE where it will stop on the line that is not CTFEable. Oh and auto complete that works with meta but doesn't fall over on it's side twiching with larger systems.
May 19 2009
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
BCS wrote:
 Oh, and autocomplete that works with metaprogramming but doesn't fall 
 over on its side twitching with larger systems.
:-) It's getting better, slowly.
May 19 2009
parent BCS <none anon.com> writes:
Hello Robert,

 BCS wrote:
 
 Oh, and autocomplete that works with metaprogramming but doesn't fall
 over on its side twitching with larger systems.
 
:-) It's getting better, slowly.
I can get you some test cases if you want... :-)
May 19 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
davidl wrote:
 The module package system still stays in the state of the C age. It's 

 The namespace and distributed packaging is a must nowadays and the 
 compiler should be project oriented and take project information as the 
 compiling base. Also an IDE is quite useful for providing project 
 templates.
The file system is a wonderful hierarchical database that fits in neatly with the package/module organization of D projects. I don't see a compelling advantage in trying to layer another database on top of it.
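Concretely, the mapping needs no metadata at all: the import path is the directory path. A minimal sketch (the myproj layout is invented for illustration):

// file: myproj/util/math.d
module myproj.util.math;

int twice(int x) { return x + x; }

// file: main.d
import myproj.util.math; // resolved to myproj/util/math.d on disk,
                         // no project metadata file required

void main()
{
    assert(twice(21) == 42);
}

Build with something like: dmd main.d myproj/util/math.d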
May 19 2009