
digitalmars.D - Re: What Makes A Programming Language Good

Jim <bitcirkel yahoo.com> writes:
Jesse Phillips Wrote:
 It makes everything much clearer and creates a bunch of opportunities for
further development.

I don't see such benefit.

It's easier for the programmer to find the module if it shares the name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name.

It lets the compiler traverse dependencies by itself. This is good for the following reasons:

1) You don't need build tools, makefiles. Just "dmd myApp.d". Do you know how many build tools there are, each trying to do the same thing? They are at a disadvantage to the compiler because the compiler can do conditional compiling and generally understands the code better than other programs. There's also extra work involved in keeping makefiles current. They are just like header files are for C/C++ -- an old solution.

2) The compiler can do more optimisation, inlining, reduction and refactoring. The compiler also knows which code interacts with other code and can use that information for cache-specific optimisations. Vladimir suggested it would open the door to new language features (like virtual templated methods). Generally I think it would be good for templates, mixins and the like. In the TDPL book Andrei makes hints about future AST-introspection functionality. Surely access to the source would benefit from this.

It would simplify error messages now caused by the linker. Names within a program wouldn't need to be mangled. More information about the caller / callee would also be available at the point of error.

It would also be of great help to third-party developers. Static code analysers (for performance, correctness, bugs, documentation etc), package managers... They could all benefit from the simpler structure. They wouldn't have to guess what code is used or built (by matching names themselves or trying to interpret makefiles).

It would be easier for novices. The simpler it is to build a program the better. It could be good for the community of D programmers. Download some code and it would fit right in. Naming is a little bit of a Wild West now. Standardised naming makes it easier to sort, structure and reuse code.
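For illustration, a minimal sketch of the naming convention I mean (the file and module names are made up):

    // file geometry/circle.d
    module geometry.circle;      // module name mirrors the file path

    import std.math : PI;

    double area(double r) { return PI * r * r; }

    // file myApp.d
    module myApp;

    import std.stdio;
    import geometry.circle;      // the convention says this lives in geometry/circle.d

    void main() { writeln(area(2.0)); }

With that convention the compiler could resolve everything from "dmd myApp.d" alone; today you either list the files yourself ("dmd myApp.d geometry/circle.d") or let rdmd do the traversal for you.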
 I'd create a branch (in git or Mercurial) for that task, it's quick and dirt
cheap, very easy to switch to and from, and you get the diff for free.

Right, using such tools is great. But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or throw the file in a folder std/ and just do some work on it?

You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline.
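For example, in git the whole cycle is just a few commands (the branch and commit names here are made up):

    git checkout -b fix-std-module   # create and switch to a feature branch
    # ...edit, build, test...
    git commit -a -m "clean up std module"
    git checkout master              # back to the mainline
    git merge fix-std-module         # bring the finished work in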
Jan 18 2011
Jesse Phillips <jessekphillips+D gmail.com> writes:
Jim Wrote:

 Jesse Phillips Wrote:
 It makes everything much clearer and creates a bunch of opportunities for
further development.

I don't see such benefit.

It's easier for the programmer to find the module if it shares the name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name.

Like I said, I haven't seen this as an issue. People don't go around naming their files completely differently from the module name. There are just too many benefits to matching them to do otherwise, and I believe the import path already makes use of this.
 It lets the compiler traverse dependencies by itself. This is good for the
following reasons:
 1) You don't need build tools, makefiles. Just "dmd myApp.d". Do you know how
many build tools there are, each trying to do the same thing? They are at a
disadvantage to the compiler because the compiler can do conditional compiling
and generally understands the code better than other programs. There's also
extra work involved in keeping makefiles current. They are just like header
files are for C/C++ -- an old solution.

This is what the "Open Scalable Language Toolchains" talk is about: http://vimeo.com/16069687

The idea is that the compiler has the job of compiling the program and providing information about the program to allow other tools to make use of the information without their own lex/parser/analysis work. Meaning the compiler should not have an advantage.

Lastly, Walter has completely different reasons for not wanting to have "auto find" in the compiler. It will become yet another magic black box that will still confuse people when it fails.
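dmd can already hand some of that information to other tools; for instance (exact flag spellings depend on your dmd version):

    dmd -deps=myApp.deps -o- myApp.d   # dump the import/dependency list without generating code
    dmd -X -Xfmyapp.json -o- myApp.d   # emit a JSON description of the compiled modules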
 2) The compiler can do more optimisation, inlining, reduction and refactoring.
The compiler also knows which code interacts with other code and can use that
information for cache-specific optimisations. Vladimir suggested it would open
the door to new language features (like virtual templated methods). Generally I
think it would be good for templates, mixins and the like. In the TDPL book
Andrei makes hints about future AST-introspection functionality. Surely access
to the source would benefit from this.

No, you do not get optimization benefits from how the files are stored on the disk. What Vladimir was talking about was the restriction that the compilation unit is the module. DMD already provides many of these benefits if you just list all the files you want compiled on the command line.
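For example (the file names are hypothetical):

    # one invocation, one compilation unit: dmd can inline across these modules
    dmd -O -release -inline myApp.d geometry/circle.d util/strings.d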
 It would simplify error messages now caused by the linker. Names within a
program wouldn't need to be mangled. More information about the caller / callee
would also be available at the point of error.

Nope, because the module you are looking for could be in a library somewhere, and if you forget to point the linker to it, you'll still get linker errors.
 It would also be of great help to third-party developers. Static code
analysers (for performance, correctness, bugs, documentation etc), package
managers... They could all benefit from the simpler structure. They wouldn't
have to guess what code is used or built (by matching names themselves or
trying to interpret makefiles).

As I said, have all these tools assume such a structure. If people aren't already using the layout, they will if they want to use these tools. I believe that is how the import path already works in dmd.
 It would be easier for novices. The simpler it is to build a program the
better. It could be good for the community of D programmers. Download some code
and it would fit right in. Naming is a little bit of a Wild West now.
Standardised naming makes it easier to sort, structure and reuse code.

rdmd is distributed with the compiler... do you have examples of poorly chosen module names which have caused issues?
 Right, using such tools is great. But what if you are like me and don't have a
dev environment set up for Phobos, but I want to fix some module? Do I have to
set up such an environment, or throw the file in a folder std/ and just do some
work on it?

You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline.

No no no, having git installed on the system is completely different from having a dev environment for Phobos. You'd have to download all the Phobos files and Druntime into their proper locations, and deal with any other dependencies/issues you run into when you try to build it. Then you would need a dmd installation which used your custom test build of Phobos.
Jan 18 2011
Jim <bitcirkel yahoo.com> writes:
Jesse Phillips Wrote:
 This is what the "Open Scalable Language Toolchains" talk is about
 http://vimeo.com/16069687
 
 The idea is that the compiler has the job of compiling the program and
providing information about the program to allow other tools to make use of the
information without their own lex/parser/analysis work. Meaning the compiler
should not have an advantage.

Yes, I like that idea very much. I wouldn't mind having a D toolchain like that. Seems modular and nice. The point is not needing to manually write makefiles, or having different and conflicting ways to build source code. The D language itself is all that is needed for declaring dependencies by using import statements, and the compiler could very well traverse these files along the way.
 Lastly Walter has completely different reasons for not wanting to have "auto
find" in the compiler. It will become yet another magic black box that will
still confuse people when it fails.

I'm not talking about any magic at all. Just plain D semantics. Make use of it.
 2) The compiler can do more optimisation, inlining, reduction and refactoring.
The compiler also knows which code interacts with other code and can use that
information for cache-specific optimisations. Vladimir suggested it would open
the door to new language features (like virtual templated methods). Generally I
think it would be good for templates, mixins and the like. In the TDPL book
Andrei makes hints about future AST-introspection functionality. Surely access
to the source would benefit from this.

No, you do not get optimization benefits from how the files are stored on the disk. What Vladimir was talking about was the restriction that the compilation unit is the module. DMD already provides many of these benefits if you just list all the files you want compiled on the command line.

I never claimed that file storage was an optimisation. The compiler can optimise better by seeing more source code (or a greater AST if you will) at compile time. Inlining, for example, can only occur within a compilation unit. I'm arguing that a file is not the optimal compilation unit. Computers today have enough memory to hold the entire program in memory while doing the compilation. It should be up to the compiler to make the best of it. If you need to manually list the files then, well, you do unnecessary labour.
 It would simplify error messages now caused by the linker. Names within a
program wouldn't need to be mangled. More information about the caller / callee
would also be available at the point of error.

Nope, because the module you are looking for could be in a library somewhere, and if you forget to point the linker to it, you'll still get linker errors.

I didn't say "no linking errors". I said simpler error messages, as in easier to understand. It could, for example, say where you tried to access a particular function: file and line number. A linker alone cannot say that. Also, you wouldn't have to tell the linker anything other than where your libraries reside. It would find the correct ones based on their modules' names.
 It would also be of great help to third-party developers. Static code
analysers (for performance, correctness, bugs, documentation etc), package
managers... They could all benefit from the simpler structure. They wouldn't
have to guess what code is used or built (by matching names themselves or
trying to interpret makefiles).

As I said, have all these tools assume such a structure. If people aren't already using the layout, they will if they want to use these tools. I believe that is how the import path already works in dmd.

Standards are better than assumptions.
 No no no, having git installed on the system is completely different from having
a dev environment for Phobos. You'd have to download all the Phobos files and
Druntime into their proper locations, and deal with any other dependencies/issues
you run into when you try to build it. Then you would need a dmd installation
which used your custom test build of Phobos.

It seems I misunderstood you. Of course you have to download all dependencies before you build something. Otherwise it wouldn't be a dependency, would it? How many megabytes are these, 15? Frankly, I don't see the problem. What is it really that you don't like?

I'm trying to argue for less manual dependency juggling by using the specification that is already there: your source code. The second thing, I guess, is not being overly restrictive about files as compilation units. It made sense long ago, but today it is arbitrary. Remember, C/C++ even compels you to declare your symbols in a particular order -- probably because of how the parsing algorithm was conceived at the time. It's unfortunate when that becomes part of the language specification.
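(A small illustration of the difference: in D, declaration order within a module doesn't matter, so no forward declarations are needed.)

    import std.stdio;

    void main()
    {
        writeln(twice(21));   // fine: twice is declared further down the module
    }

    int twice(int x) { return x * 2; }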
Jan 18 2011
Adam Ruppe <destructionator gmail.com> writes:
Jim wrote:
 I never claimed that file storage was an optimisation. The compiler
 can optimise better by seeing more source code (or a greater AST if
 you will) at compile time. Inlining, for example, can only occur
 within a compilation unit. I'm arguing that a file is not the optimal
 compilation unit. Computers today have enough memory to hold the
 entire program in memory while doing the compilation. It should be up
 to the compiler to make the best of it.

Note that dmd already does this, if you pass all the files on the command line at once. My new build.d program fetches the dependency list from dmd, then compiles by passing them all at once - it's a really simple program, just adding the dependencies onto the end of the command line (and trying to download them if they don't exist). So then you wouldn't have to do it manually either.
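The idea, very roughly (this is not the actual build.d; the import scanning here is deliberately naive, the names are made up, and it assumes module names match file paths):

    // fetchdeps.d -- rough sketch only, not the real build.d.
    // Naively scan import statements, assume module names map to file paths,
    // and hand every local .d file found to dmd in a single invocation.
    import std.file    : exists, readText;
    import std.regex   : regex, matchAll;
    import std.array   : replace, join;
    import std.process : spawnShell, wait;
    import std.stdio   : writeln;

    void collect(string file, ref bool[string] seen)
    {
        if (file in seen || !exists(file))
            return;                          // unknown/library modules are simply skipped
        seen[file] = true;
        foreach (m; matchAll(readText(file), regex(`import\s+([\w.]+)`)))
            collect(m[1].replace(".", "/") ~ ".d", seen);
    }

    void main(string[] args)
    {
        bool[string] seen;
        collect(args[1], seen);              // e.g.: ./fetchdeps myApp.d
        auto cmd = "dmd " ~ seen.keys.join(" ");
        writeln(cmd);
        wait(spawnShell(cmd));
    }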
Jan 19 2011
spir <denis.spir gmail.com> writes:
On 01/19/2011 05:16 AM, Jesse Phillips wrote:

 This is what the "Open Scalable Language Toolchains" talk is about
 http://vimeo.com/16069687

 The idea is that the compiler has the job of compiling the program and
 providing information about the program to allow other tools to make use of the
 information without their own lex/parser/analysis work. Meaning the compiler
 should not have an advantage.

Let us call "decoder" the part of a compiler that scans, parses and "semanticises" source code, and (syntactic/semantic) tree the resulting representation of the code. What I dream of is a decoder that (on demand) spits out a data-description module of this tree. I mean a source code module -- ideally in the source language itself, here D -- that can be imported by any other tool needing the said tree as input.

[D is not that bad as a data-description language, thanks to its nice literal notations (not comparable to Lua, indeed, but Lua was designed for that). It's also easy in D, I guess, to define proper types for the various kinds of nodes the tree would hold. D's main obstacle AFAIK is that the data description must all be put in the module's "static this" clause (for some reason I haven't yet understood); but we can survive that.]

Denis
_________________
vita es estrany
spir.wikidot.com
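(To make Denis's idea concrete, a minimal sketch of what such a generated data-description module might look like -- all names are invented:)

    // ast_dump.d -- hypothetical output of the "decoder", importable by any tool
    module ast_dump;

    /// One node of the (syntactic/semantic) tree.
    struct Node
    {
        string kind;      // e.g. "Module", "FuncDeclaration", "CallExp"
        string name;      // symbol name, if any
        string file;      // source location
        int    line;
        int[]  children;  // indices into `nodes`
    }

    Node[] nodes;         // the whole tree, filled at module initialisation

    static this()         // the "static this" clause Denis mentions
    {
        nodes = [
            Node("Module",          "myApp",   "myApp.d", 1, [1]),
            Node("FuncDeclaration", "main",    "myApp.d", 5, [2]),
            Node("CallExp",         "writeln", "myApp.d", 6, []),
        ];
    }

Any analyser or documentation tool could then just import ast_dump; and walk nodes without doing any parsing of its own.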
Jan 19 2011