
digitalmars.D - DIP11

reply jdrewsen <jdrewsen nospam.com> writes:
What is the status of DIP11

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Has anyone started implementing it? Has it been rejected?

/Jonas
Aug 10 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D -- /Jacob Carlborg
Aug 11 2011
parent reply Jonas Drewsen <jdrewsen nospam.com> writes:
On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes, I've noticed that. Seems very promising. What I do like about DIP11 is how seamlessly it would work. You just have to compile and stuff works. /Jonas
Aug 11 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 09:41, Jonas Drewsen wrote:
 On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works. /Jonas
I think that DIP11 is too limited; for example, it doesn't deal with versions. Orbit combined with a build tool will be seamless as well. RDMD is a great tool, but as soon as you need to add compiler flags or compile a library you need either some kind of script or a build tool. And in that case you can just go with the build tool and have it work on all platforms. -- /Jacob Carlborg
Aug 11 2011
next sibling parent Jonas Drewsen <jdrewsen nospam.com> writes:
On 11/08/11 09.49, Jacob Carlborg wrote:
 On 2011-08-11 09:41, Jonas Drewsen wrote:
 On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works. /Jonas
I think that DIP11 is too limited, for example, it doesn't deal with versions. Orbit combined with a build tool will be seamless as well. RDMD is a great tool but as soon as you need to add compiler flags or compile a library you need either some kind of script or a build tool. And in that case you can just go with the built tool and have it work on all platforms.
Some refinements need to be made to the DIP, yes. One of them is version handling, IMHO. /Jonas
Aug 11 2011
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 03:49:56 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 09:41, Jonas Drewsen wrote:
 On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works. /Jonas
I think that DIP11 is too limited, for example, it doesn't deal with versions. Orbit combined with a build tool will be seamless as well. RDMD is a great tool but as soon as you need to add compiler flags or compile a library you need either some kind of script or a build tool. And in that case you can just go with the built tool and have it work on all platforms.
Given that the implementation would be a compiler-used tool, and the tool can implement any protocol it wants, I think it has very few limitations. I envision the tool being able to handle any network protocol or packaging system we want it to. I think the benefit of this approach over a build tool which wraps the compiler is, the compiler already has the information needed for dependencies, etc. To a certain extent, the wrapping build tool has to re-implement some of the compiler pieces. -Steve
Aug 11 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 14:52, Steven Schveighoffer wrote:
 Given that the implementation would be a compiler-used tool, and the
 tool can implement any protocol it wants, I think it has very few
 limitations. I envision the tool being able to handle any network
 protocol or packaging system we want it to.
That might be the case. Since it's arbitrary URLs that represent D modules and packages, it seems to me that a lot of conventions are needed:

* Where to put the packages
* How to name them
* How to indicate a specific version

and so on.
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc. To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.

 -Steve
Until the compiler can automatically compile dependencies we need build tools. What about linking with pre-compiled libraries, how would that work? Currently the linking paths need to be known before the compiler is invoked. You would first need to compile without linking and then link, or something like that, assuming the compiler isn't changed to be able to receive linker options from the external tool. Note that I'm basing all this on what's written in the DIP (and what you've said); as far as I know that's the current suggestion. But the DIP can of course be enhanced and updated. -- /Jacob Carlborg
Aug 11 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I don't trust DIP11 to produce anything stable. We already have a
half-broken compiler, and now we're going to have a half-broken build
system to go along with it? No way.

I'd rather it be an external tool (built with D so we can actually
hack on it) that can be *downloaded from anywhere*, without having to
rely on dmars.com (which is painfully slow) and without being encumbered by
licensing issues (you can't distribute DMD).
Aug 11 2011
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 10:47:36 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 14:52, Steven Schveighoffer wrote:
 Given that the implementation would be a compiler-used tool, and the
 tool can implement any protocol it wants, I think it has very few
 limitations. I envision the tool being able to handle any network
 protocol or packaging system we want it to.
That might be the case. Since it's arbitrary URLs that represents D modules and packages, to me it seems that there needs to be a lot of conventions: * Where to put the packages * How to name them * How to indicate a specific version
These are implementation details. The compiler just knows "hm.. there's some external file, I don't know how to get it, tool, can you help me out?" The tool could potentially download the files and build them, or download a pre-compiled package. Or it could add files to the compiler's todo list somehow (having recursive compiler invocations might be bad...)
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc. To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.

 -Steve
Until the compiler can automatically compile dependencies we need build tools.
I agree it would be advantageous to have the build tool add files to the compiler's todo list. As of now, just downloading source for import does not help because you still need to compile the file, not just import it. But you have to admit, just having the source file or include path include parts of the internet would be pretty cool. I really like that approach. It's sort of like how dsss used to work, except dsss contained a huge portion of the compiler in it. At least with DIP11, the compiler is the driver, no need to maintain a separate "compiler". It could be that DIP11 needs more work to become a technically useful and implementable feature.
 What about linking with pre-compiled libraries, how would that work?  
 Currently the linking paths needs to be known before the compiler is  
 invoked. You would first need to compile without linking and then link,  
 or something like that. Assuming the compiler isn't changed to be able  
 to receive linker options form the external tool.
I'm assuming the linker path in dmd.conf would include some global or user-specific cache directory where pre-compiled libraries are downloaded. Then the download tool puts the libraries in the right spot, so the path does not need adjusting.
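As a rough illustration only (this is not in the DIP): a minimal Posix dmd.conf sketch where the download tool's cache directory is already on the default linker search path. The [Environment]/DFLAGS/-L-L mechanics are standard dmd.conf; the cache path itself is made up.

[Environment]
; hypothetical cache directory maintained by the download tool
DFLAGS=-I%@P%/../../src/phobos -L-L%@P%/../lib -L-L/home/user/.d-package-cache/lib

With something like that in place, a downloaded libfoo.a dropped into /home/user/.d-package-cache/lib would be found at link time without any per-project path adjustments.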
 Note that I'm basing all this on what's written in the DIP (and what  
 you've said), as far as I know that's the current suggestion. But the  
 DIP can of course be enhanced and updated.
Yes, and much better than a DIP is a working solution, so it might very well be that Orbit wins :) I certainly don't have the time or expertise to implement it... -Steve
Aug 11 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 17:45, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 10:47:36 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 14:52, Steven Schveighoffer wrote:
 Given that the implementation would be a compiler-used tool, and the
 tool can implement any protocol it wants, I think it has very few
 limitations. I envision the tool being able to handle any network
 protocol or packaging system we want it to.
That might be the case. Since it's arbitrary URLs that represents D modules and packages, to me it seems that there needs to be a lot of conventions: * Where to put the packages * How to name them * How to indicate a specific version
These are implementation details. The compiler just knows "hm.. there's some external file, I don't know how to get it, tool, can you help me out?" The tool could potentially download the files and build them, or download a pre-compiled package. Or it could add files to the compiler's todo list somehow (having recursive compiler invocations might be bad...)
In that case that needs to be said in the DIP.
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc. To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.

 -Steve
Until the compiler can automatically compile dependencies we need build tools.
I agree it would be advantageous to have the build tool add files to the compiler's todo list. As of now, just downloading source for import does not help because you still need to compile the file, not just import it.
Yes, exactly.
 But you have to admit, just having the source file or include path
 include parts of the internet would be pretty cool. I really like that
 approach. Its sort of like how dsss used to work, except dsss contained
 a huge portion of the compiler in it. At least with DIP11, the compiler
 is the driver, no need to maintain a separate "compiler". It could be
 that DIP11 needs more work to get a technically useful and implementable
 feature.
It sounds pretty cool, but the question is how well it would work and whether it's the right thing to do.
 What about linking with pre-compiled libraries, how would that work?
 Currently the linking paths needs to be known before the compiler is
 invoked. You would first need to compile without linking and then
 link, or something like that. Assuming the compiler isn't changed to
 be able to receive linker options form the external tool.
I'm assuming the linker path in dmd.conf would include some global or user-specific cache directory where pre-compiled libraries are downloaded. Then the download tool puts the libraries in the right spot, so the path does not need adjusting.
Assuming that, you would still need to link with the libraries. I don't know if pragma(lib, ""); could work but I don't like that pragma in general. It's platform dependent and I'm not sure if it works for dynamic libraries. I don't think GDC implements it.
 Note that I'm basing all this on what's written in the DIP (and what
 you've said), as far as I know that's the current suggestion. But the
 DIP can of course be enhanced and updated.
Yes, and much better than a DIP is a working solution, so it might very well be that Orbit wins :) I certainly don't have the time or expertise to implement it... -Steve
I guess we just have to see what happens. I will at least continue working on Orbit. -- /Jacob Carlborg
Aug 11 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:j21190$ls8$1 digitalmars.com...
 Assuming that, you would still need to link with the libraries. I don't 
 know if pragma(lib, ""); could work but I don't like that pragma in 
 general. It's platform dependent
This works cross-platform: pragma(lib, "LibNameWithoutExt"); And if you do need something platform-specific, there's always version().
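For what it's worth, a small sketch of that (the library names "zlib"/"z" are placeholders, not anything from this thread, and whether this really covers every platform and library type is exactly the question below):

module libexample;

// platform-specific library names via version(), as suggested above
version (Windows)
    pragma(lib, "zlib");   // asks the linker for zlib.lib
else version (Posix)
    pragma(lib, "z");      // asks the linker for libz

// code that actually uses the library would go here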
 and I'm not sure if it works for dynamic libraries. I don't think GDC 
 implements it.
A major downside of GDC, IMO.
Aug 11 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-08-11 23:30, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:j21190$ls8$1 digitalmars.com...
 Assuming that, you would still need to link with the libraries. I don't
 know if pragma(lib, ""); could work but I don't like that pragma in
 general. It's platform dependent
This works cross-platform: pragma(lib, "LibNameWithoutExt");
Will that work with all available library types (static, dynamic) on all platforms?
 And if you do need platform-specific, there's always version().
You do, since on Posix most library names are prefixed with "lib". DSSS/Rebuild did this very well with the pragma "link". It prefixed the library names depending on the platform.
 and I'm not sure if it works for dynamic libraries. I don't think GDC
 implements it.
A major downside of GDC, IMO.
Yes. -- /Jacob Carlborg
Aug 12 2011
prev sibling parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
<schveiguy yahoo.com>wrote:

 On Thu, 11 Aug 2011 03:49:56 -0400, Jacob Carlborg <doob me.com> wrote:

  On 2011-08-11 09:41, Jonas Drewsen wrote:
 On 11/08/11 09.07, Jacob Carlborg wrote:

 On 2011-08-10 21:55, jdrewsen wrote:

 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works. /Jonas
I think that DIP11 is too limited, for example, it doesn't deal with versions. Orbit combined with a build tool will be seamless as well. RDMD is a great tool but as soon as you need to add compiler flags or compile a library you need either some kind of script or a build tool. And in that case you can just go with the built tool and have it work on all platforms.
Given that the implementation would be a compiler-used tool, and the tool can implement any protocol it wants, I think it has very few limitations. I envision the tool being able to handle any network protocol or packaging system we want it to. I think the benefit of this approach over a build tool which wraps the compiler is, the compiler already has the information needed for dependencies, etc. To a certain extent, the wrapping build tool has to re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information and easily use it in a separate program. That much is already done.
Aug 11 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley  
<wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc.  To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile. And there is no parsing of the output data; the problem boils down to a simple get tool. Running a simple get tool over and over doesn't consume as much time/resources as running the compiler over and over.

There are still problems with the DIP -- there is no way yet to say "oh yeah, compiler, you have to build this file that I downloaded too". But if nothing else, I like the approach of having the compiler drive everything. It reduces the problem space to a smaller, more focused task -- get a file based on a url. We also already have many tools in existence that can parse a url and download a file/package.

-Steve
Aug 11 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 19:07, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley
 <wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc. To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
So how would that be different if the compiler drives everything? Say you begin with a few local files. The compiler then scans through them looking for URL imports, then asks a tool to download the dependencies it found, and starts all over again.

This is how my package manager will work. You have a local file containing all the direct dependencies needed to build your project. When invoked, the package manager tool fetches a file containing all packages and all their dependencies from the repository. It then figures out all dependencies, both direct and indirect. Then it downloads all dependencies. It does all this before the compiler is even invoked once.

Then, preferably but optionally, it hands over to a build tool that builds everything. The build tool would need to invoke the compiler twice: first to get all the dependencies of all the local files in the project that is being built, then finally to build everything.

Well, actually, if you're using a build tool it would drive everything. You have the package dependencies in the build script file. The build tool starts by invoking the package manager (see above), then it can query the package manager for include and library paths and libraries to link with. As the final step it invokes the compiler to build everything (see above).
 And there is no parsing of the output data, the problem boils down to a
 simple get tool. Running a simple get tool over and over doesn't consume
 as much time/resources as running the compiler over and over.

 There are still problems with the DIP -- there is no way yet to say "oh
 yeah, compiler, you have to build this file that I downloaded too". But
 if nothing else, I like the approach of having the compiler drive
 everything. It reduces the problem space to a smaller more focused task
 -- get a file based on a url. We also already have many tools in
 existence that can parse a url and download a file/package.

 -Steve
The best would be if the compiler could be a library. Then the build tool could drive everything and ask other tools, like the a package manager and compiler about information it needs to build everything. -- /Jacob Carlborg
Aug 11 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 14:19:35 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 19:07, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley
 <wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc. To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
So how would that be different if the compiler drives everything? Say you begin with a few local files. The compiler then scans through them looking for URL imports. Then asks a tool to download the dependencies it found and starts all over again.
Forgive my compiler ignorance (not a compiler writer), but why does the compiler have to start over? It's no different than importing a file, is it?
 This is how my package manager will work. You have a local file  
 containing all the direct dependencies needed to build your project.  
 When invoked, the package manager tool fetches a file containing all  
 packages and all their dependencies, from the repository. It then  
 figures out all dependencies, both direct and indirect. Then it  
 downloads all dependencies. It does all this before the compiler is even  
 invoked once.

 Then, preferably, but optionally, it hands over to a build tool that  
 builds everything. The build tool would need to invoke the compiler  
 twice, first to get all the dependencies of all the local files in the  
 project that is being built. Then it finally runs the compiler to build  
 everything.
The benefit of using source is that the source code is already written with an import statement; there is no need to write an external build file (all you need is a command line that configures the compiler). Essentially, the import statements become your "build file". I think dsss worked like this, but I don't remember completely.

My ideal solution, no matter how it's implemented, is: I get a file blah.d, and I do:

xyz blah.d

and xyz handles all the dirty work of figuring out what to build along with blah.d as well as where to get those resources. Whether xyz == dmd, I don't know. It sure sounds like it could be...

-Steve
Aug 11 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 20:31, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 14:19:35 -0400, Jacob Carlborg <doob me.com> wrote:
 So how would that be different if the compiler drives everything? Say
 you begin with a few local files. The compiler then scans through them
 looking for URL imports. Then asks a tool to download the dependencies
 it found and starts all over again.
Forgive my compiler ignorance (not a compiler writer), but why does the compiler have to start over? It's no different than importing a file, is it?
Probably it's no different. Well, what's different is that it will first parse a couple of files, then download a few files, then parse the downloaded files and fetch some more files, and so on. To me this seems inefficient, but since it's not implemented I don't know. It feels more efficient if it could download all needed files in one step and then compile all files in the next step.

I don't know what's possible with this DIP, but to me it seems that the current suggestion will download individual files. This also seems inefficient; my solution deals with packages, i.e. zip files.
 This is how my package manager will work. You have a local file
 containing all the direct dependencies needed to build your project.
 When invoked, the package manager tool fetches a file containing all
 packages and all their dependencies, from the repository. It then
 figures out all dependencies, both direct and indirect. Then it
 downloads all dependencies. It does all this before the compiler is
 even invoked once.

 Then, preferably, but optionally, it hands over to a build tool that
 builds everything. The build tool would need to invoke the compiler
 twice, first to get all the dependencies of all the local files in the
 project that is being built. Then it finally runs the compiler to
 build everything.
The benefit of using source is the source code is already written with an import statement, there is no need to write an external build file (all you need is command line that configures the compiler).
I don't see the big difference. I think most projects (not the smallest ones) will end up with a special file anyway, containing the pragmas declaring these imports. In addition to that, as soon as you need to pass flags to the compiler you will most likely put them in a file of some kind. In that case you can just as easily put them in a build script and use a build tool.
 Essentially, the import statements become your "build file". I think
 dsss worked like this, but I don't remember completely.
Yes, this is similar to how DSSS worked. The difference is that you didn't need a pragma to link a package to a URL; you just wrote the import declarations as you do now.

One problem I think DSSS has is that, as far as I know, it can't handle top-level packages with the same name. Or at least not in any good way. If you go with the Java package naming scheme and name your top-level package after your domain, for example:

module com.foo.bar;
module com.foo.foobar;

And another project does the same:

module com.abc.defg;

Then these two projects will both end up in the "com" folder. Not very good in my opinion. In my solution every package has a name independent of the packages it contains, and all packages are placed in a folder named after the package, including the version number.
 My ideal solution, no matter how it's implemented is, I get a file
 blah.d, and I do:

 xyz blah.d

 and xyz handles all the dirty work of figuring out what to build along
 with blah.d as well as where to get those resources. Whether xyz == dmd,
 I don't know. It sure sounds like it could be...


 -Steve
Yeah, I would like that too. But as I said above, as soon as you need compiler flags you need an additional file. With a build tool it can then be just: "$ build" -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 03:23:30 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 20:31, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 14:19:35 -0400, Jacob Carlborg <doob me.com> wrote:
 So how would that be different if the compiler drives everything? Say
 you begin with a few local files. The compiler then scans through them
 looking for URL imports. Then asks a tool to download the dependencies
 it found and starts all over again.
Forgive my compiler ignorance (not a compiler writer), but why does the compiler have to start over? It's no different than importing a file, is it?
Probably it's no different. Well what's different is it will first parse a couple of files, then downloads a few files, then parse the downloaded files and the fetch some more files and so on. To me this seems inefficient but since it's not implemented I don't know. It feels more efficient if it could download all needed files in one step. And then compile all files in the next step. I don't know what's possible with this DIP but to me it seems that the current suggestion will download individual files. This also seems inefficient, my solution deals with packages, i.e. zip files.
The extendability is in the URL. For example, yes, http://server/file.d downloads a single file, and would be slow if every import needed to download an individual file. But given some other protocol, or some other cue to the download tool (like a path component that ends in .tgz or something), you download all the files at once, and on subsequent imports the cached file is used (no download necessary). I think there's even a suggestion in there for passing directives back to the compiler like "file already downloaded, open it here..."

I think unless a single file *is* the package, it's going to be foolish to download individual files. I also think a protocol which defines a central repository would be beneficial, so you only need one -I parameter to include a whole community of D code (like dsource).
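For what it's worth, here is a rough sketch (plain D with std.net.curl, nothing specified by the DIP; the cache directory and URL are made up) of the kind of get tool described above: reuse the cache when possible, otherwise download, and treat archive-looking URLs as whole packages.

import std.file : exists, mkdirRecurse;
import std.net.curl : download;
import std.path : baseName, buildPath, extension;
import std.stdio : writeln;

// Fetch a URL on behalf of the compiler, reusing the cache when possible.
string fetch(string url, string cacheDir = "/tmp/d-import-cache")
{
    mkdirRecurse(cacheDir);                 // no-op if the directory already exists
    auto local = buildPath(cacheDir, baseName(url));

    if (local.exists)                       // subsequent imports: no download necessary
        return local;

    download(url, local);                   // single .d file or whole archive, same call

    if (url.extension == ".tgz" || url.extension == ".zip")
    {
        // a real tool would unpack the archive here so later imports
        // resolve against the extracted package instead of the network
    }
    return local;
}

void main()
{
    writeln(fetch("http://server/file.d")); // made-up URL from the example above
}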
 This is how my package manager will work. You have a local file
 containing all the direct dependencies needed to build your project.
 When invoked, the package manager tool fetches a file containing all
 packages and all their dependencies, from the repository. It then
 figures out all dependencies, both direct and indirect. Then it
 downloads all dependencies. It does all this before the compiler is
 even invoked once.

 Then, preferably, but optionally, it hands over to a build tool that
 builds everything. The build tool would need to invoke the compiler
 twice, first to get all the dependencies of all the local files in the
 project that is being built. Then it finally runs the compiler to
 build everything.
The benefit of using source is the source code is already written with an import statement, there is no need to write an external build file (all you need is command line that configures the compiler).
I don't see the big difference. I think the most project (not the smallest ones) will end up with a special file anyway, containing the pragmas declaring these imports.
Note that the pragmas are specific to that file only. So you don't have an import file which defines pragmas. This is to prevent conflicts between two files that declare the same package override.
 In addition to that as soon as you need to pass flags to the compiler  
 you will most likely to put that in a file of some kind. In that case  
 you can just as easily put them in a build script and use a build tool.
A batch file/shell script should suffice, no need for a "special" tool.
 Essentially, the import statements become your "build file". I think
 dsss worked like this, but I don't remember completely.
Yes, this is similar how DSSS worked. The difference is that you didn't need a pragma to link a package to an URL you just wrote the import declarations as you do now.
IIRC, dsss still had a global config file that defined where to import things from. The DIP defines that -I switches can also define internet resources along with pragmas, so sticking those in the dmd.conf would probably be the equivalent.
 One problem I think DSSS has, is, as far as I know, it can't handle top  
 level packages with same name. Or at least not in any good way. If you  
 go with the Java package naming scheme and name your top level package  
 after your domain, example:

 module com.foo.bar;
 module com.foo.foobar;

 And another project does the same:

 module com.abc.defg;

 Then these two projects will both end up in the "com" folder. Not very  
 good in my opinion. In my solution every package has a name independent  
 of the packages it contains an all packages are placed in a folder named  
 after the package, including the version number.
These all seem like implementation details. I don't care how the tool caches the files.
 My ideal solution, no matter how it's implemented is, I get a file
 blah.d, and I do:

 xyz blah.d

 and xyz handles all the dirty work of figuring out what to build along
 with blah.d as well as where to get those resources. Whether xyz == dmd,
 I don't know. It sure sounds like it could be...


 -Steve
Yeah, I would like that too. But as I said above, as soon as you need compiler flags you need an additional file. With a built tool it can then be just: "$ build"
Or instructions on the web site: "use 'dmd -O -inline -release -version=SpecificVersion project.d' to compile".

Or build.sh (build.bat). Note that dcollections has no makefile, everything is built from shell scripts. I almost never have to edit the build file, because the line is like:

dmd -lib -O -release -inline dcollections/*.d dcollections/model/*.d

Any new files get included automatically. And it takes a second to build, so who cares if you rebuild every file every time?

Interestingly, libraries would still need to specify all the files since they may not import each other :) I don't know if there's a "good" solution that isn't too coarse for that.

All of this discussion is good to determine the viability, and clarify some misinterpretations of DIP11, but I think unless someone steps up and tries to implement it, it's a moot conversation. I certainly don't have the time or knowledge to implement it. So is there anyone who is interested, or has tried (to re-ask the original question)?

-Steve
Aug 12 2011
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
I did write build2.d, which tries to simulate dip11 outside
the compiler.

http://arsdnet.net/dcode/build2.d

Since it's not actually in the compiler, it can't be perfect
but it sorta tries.
Aug 12 2011
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 15:49, Steven Schveighoffer wrote:
 Note that the pragmas are specific to that file only. So you don't have
 an import file which defines pragmas. This is to prevent conflicts
 between two files that declare the same package override.
Now I'm not quite sure I understand. Are you saying that every file needs to have these pragma imports?
 In addition to that as soon as you need to pass flags to the compiler
 you will most likely to put that in a file of some kind. In that case
 you can just as easily put them in a build script and use a build tool.
a batch files/shell script should suffice, no need for a "special" tool.
Yeah, but you would need to duplicate the shell script, one for Posix and one for Windows. As far as I know there is no script-like file format that both Windows and Posix can read out of the box.
 Essentially, the import statements become your "build file". I think
 dsss worked like this, but I don't remember completely.
Yes, this is similar how DSSS worked. The difference is that you didn't need a pragma to link a package to an URL you just wrote the import declarations as you do now.
IIRC, dsss still had a global config file that defined where to import things from. The DIP defines that -I switches can also define internet resources along with pragmas, so sticking those in the dmd.conf would probably be the equivalent.
Ok, I see.
 One problem I think DSSS has, is, as far as I know, it can't handle
 top level packages with same name. Or at least not in any good way. If
 you go with the Java package naming scheme and name your top level
 package after your domain, example:

 module com.foo.bar;
 module com.foo.foobar;

 And another project does the same:

 module com.abc.defg;

 Then these two projects will both end up in the "com" folder. Not very
 good in my opinion. In my solution every package has a name
 independent of the packages it contains an all packages are placed in
 a folder named after the package, including the version number.
These all seem like implementation details. I don't care how the tool caches the files.
Well yes, but that is how DSSS works and that's what I'm explaining.
 Or instructions on the web site "use 'dmd -O -inline -release
 -version=SpecificVersion project.d' to compile"

 Or build.sh (build.bat) Note that dcollections has no makefile,
 everything is built from shell scripts. I almost never have to edit the
 build file, because the line's like:
You would need two shell script files, one for Windows and one for Posix, see above.
 dmd -lib -O -release -inline dcollections/*.d dcollections/model/*.d

 Any new files get included automatically. And it takes a second to
 build, so who cares if you rebuild every file every time?

 Interestingly, libraries would still need to specify all the files since
 they may not import eachother :) I don't know if there's a "good"
 solution that isn't too coarse for that.
That's what a build tool can handle. I think you should read this: http://dsource.org/projects/dsss/wiki/DSSSByExample and hopefully you'll understand why a build tool is a good thing.
 All of this discussion is good to determine the viability, and clarify
 some misinterpretations of DIP11, but I think unless someone steps up
 and tries to implement it, it's a moot conversation. I certainly don't
 have the time or knowledge to implement it. So is there anyone who is
 interested, or has tried (to re-ask the original question)?

 -Steve
I guess that's right. -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 14:24:46 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:49, Steven Schveighoffer wrote:
 Note that the pragmas are specific to that file only. So you don't have
 an import file which defines pragmas. This is to prevent conflicts
 between two files that declare the same package override.
Now I'm not quite sure I understand. Are you saying that every file needs to have these pragma imports ?
Let's say file a.d pragmas that module foo means http://foo.com/projectx, and module b.d from another project pragmas that module foo means http://bar.com/projecty. If I import both a and b, what happens? It only makes sense for a pragma to affect the current file. This is similar to how version=x statements only affect the current file.
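Just to make the analogy concrete, here is a small, compilable illustration (nothing DIP-specific; the file names and URL are made up) of how a version identifier set in one module does not leak into modules that import it -- the same file-local scoping being proposed for the URL pragmas:

// a.d
module a;
version = UseProjectX;          // in effect only within a.d, like the proposed pragma
version (UseProjectX)
    enum origin = "http://foo.com/projectx";

// main.d
module main;
import a;
import std.stdio;

void main()
{
    version (UseProjectX)
        writeln("leaked");      // never compiled in: a.d's version didn't escape
    else
        writeln("a.d's setting stayed local; main sees only ", a.origin);
}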
 In addition to that as soon as you need to pass flags to the compiler
 you will most likely to put that in a file of some kind. In that case
 you can just as easily put them in a build script and use a build tool.
a batch files/shell script should suffice, no need for a "special" tool.
Yeah, but you would need to duplicate the shell script, one for Posix and one for Windows. As far as I know there is no scripting like file that both Windows and Posix can read out of the box.
But I don't have to have the user install yet another build tool. There already are script interpreters on both Windows and Posix systems, especially for one-liners.
 One problem I think DSSS has, is, as far as I know, it can't handle
 top level packages with same name. Or at least not in any good way. If
 you go with the Java package naming scheme and name your top level
 package after your domain, example:

 module com.foo.bar;
 module com.foo.foobar;

 And another project does the same:

 module com.abc.defg;

 Then these two projects will both end up in the "com" folder. Not very
 good in my opinion. In my solution every package has a name
 independent of the packages it contains an all packages are placed in
 a folder named after the package, including the version number.
These all seem like implementation details. I don't care how the tool caches the files.
Well yes, but that is how DSSS works and that's what I'm explaining.
OK.
 Or instructions on the web site "use 'dmd -O -inline -release
 -version=SpecificVersion project.d' to compile"

 Or build.sh (build.bat) Note that dcollections has no makefile,
 everything is built from shell scripts. I almost never have to edit the
 build file, because the line's like:
You would need to shell script files, one for Windows and one for Posix, see above.
Yes.
 That's what a build tool can handle. I think you should read this:  
 http://dsource.org/projects/dsss/wiki/DSSSByExample and hopefully you'll  
 understand why a built tool is a good thing.
I'll take a look when I get a moment. -Steve
Aug 12 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 20:36, Steven Schveighoffer wrote:
 On Fri, 12 Aug 2011 14:24:46 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:49, Steven Schveighoffer wrote:
 Note that the pragmas are specific to that file only. So you don't have
 an import file which defines pragmas. This is to prevent conflicts
 between two files that declare the same package override.
Now I'm not quite sure I understand. Are you saying that every file needs to have these pragma imports ?
Let's say file a.d pragmas that module foo means http://foo.com/projectx, and module b.d from another project pragmas that module foo means http://bar.com/projecty. If I import both a and b, what happens? It only makes sense for a pragma to affect the current file. This is similar to how version=x statements only affect the current file.
Again, will that mean you have to specify a pragma for each file? Just for the record, you cannot always solve all dependencies; it can happen that two packages conflict with each other. -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 15:12:07 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 20:36, Steven Schveighoffer wrote:
 On Fri, 12 Aug 2011 14:24:46 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:49, Steven Schveighoffer wrote:
 Note that the pragmas are specific to that file only. So you don't  
 have
 an import file which defines pragmas. This is to prevent conflicts
 between two files that declare the same package override.
Now I'm not quite sure I understand. Are you saying that every file needs to have these pragma imports ?
Let's say file a.d pragmas that module foo means http://foo.com/projectx, and module b.d from another project pragmas that module foo means http://bar.com/projecty. If I import both a and b, what happens? It only makes sense for a pragma to affect the current file. This is similar to how version=x statements only affect the current file.
Again, will that mean you have to specify a pragma for each file?
Yes, or specify it on the command line/config file. I think the risk of inadvertently importing incorrect files is too great.

Looking back at it, however, we probably would need some mechanism for files in the same package to inherit the source location.

For example, if you pragma that a module foo = http://foo.com/projectx, and import foo.xyz, and foo.xyz imports foo.abc, we don't want foo.xyz to have to pragma the same url just to include another file in its own package.

Clearly we need some more thought around this.
 Just for the record, you cannot always solve all dependencies, it can  
 happen that two packages conflict with each other.
As long as the dependencies are contained, there should be no conflict. If I can compile projects x and y separately, and both have a conflicting dependency, then I should still be able to compile a project that depends on both x and y, as long as they don't import each other. -Steve
Aug 12 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 21:22, Steven Schveighoffer wrote:

 Yes, or specify it on the command line/config file. I think the risk of
 inadvertently importing incorrect files is too great.

 Looking back at it, however, we probably would need some mechanism for
 files in the same package to inherit the source location.

 For example, if you pragma that a module foo = http://foo.com/projectx,
 and import foo.xyz, and foo.xyz imports foo.abc, we don't want foo.xyz
 to have to pragma the same url just to include another file in its own
 package.

 Clearly we need some more thought around this.
Yeah, that's what I was afraid of.
 Just for the record, you cannot always solve all dependencies, it can
 happen that two packages conflict with each other.
As long as the dependencies are contained, there should be no conflict. If I can compile project x and y separately, and both have a conflicting dependency, then I should still be able to compile a project that depends on both x and y, as long as they don't import eachother. -Steve
If x and y depend on two different versions of z, how would that be solved? As far as I know you cannot link two different versions of the same library, you will get conflicting symbols. -- /Jacob Carlborg
Aug 13 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 13 Aug 2011 08:24:53 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 21:22, Steven Schveighoffer wrote:

 Yes, or specify it on the command line/config file. I think the risk of
 inadvertently importing incorrect files is too great.

 Looking back at it, however, we probably would need some mechanism for
 files in the same package to inherit the source location.

 For example, if you pragma that a module foo = http://foo.com/projectx,
 and import foo.xyz, and foo.xyz imports foo.abc, we don't want foo.xyz
 to have to pragma the same url just to include another file in its own
 package.

 Clearly we need some more thought around this.
Yeah, that's what I was afraid of.
 Just for the record, you cannot always solve all dependencies, it can
 happen that two packages conflict with each other.
As long as the dependencies are contained, there should be no conflict. If I can compile project x and y separately, and both have a conflicting dependency, then I should still be able to compile a project that depends on both x and y, as long as they don't import eachother. -Steve
If x and y depends on two different versions of z, how would that be solved. As far as I know you cannot link the same library of two different version twice, you will get conflicting symbols.
It wouldn't be an actual conflict, it would be a naming issue. In other words, I'm not talking about two different versions of the same library. That has to be a compiler error.

What could happen, though, is this situation:

a.d pragma's foo.bar as being http://somedrepository.com/foo/barv1.0
b.d includes the repository path http://somedrepository.com, whose default foo.bar goes to foo/barv2.0, but b does not depend on foo.bar

I don't think a.d's pragma should include b's pragma, or vice versa, especially if it involves some sort of weird import-order precedence. It's just much simpler to say "b's pragma only affects b, and a's pragma only affects a." However, I think we need to add that any files imported via a pragma implicitly include that path. I should probably add that to the DIP.

Obviously if a.d depends on foo.bar, and b.d depends on foo.bar, but the locations are different, it should be a compiler error.

-Steve
Aug 15 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-08-15 15:11, Steven Schveighoffer wrote:
 What could happen though, is this situation:

 a.d pragma's foo.bar as being http://somedreposiotry.com/foo/barv1.0
 b.d includes the repository path http://somedrepository.com, whose
 default foo.bar goes to foo/barv2.0, but b does not depend on foo.bar
That would be a problem.
 I don't think a.d's pragma should include b's pragma, or vice versa.
 Especially if it is some sort of weird import order precedent.

 It's just much simpler to say "b's pragma only affects b, and a's pragma
 only affects a." However, I think we need to add that any files imported
 via a pragma implicitly include that path. I should probably add that to
 the DIP.
I guess so.
 Obviously if a.d depends on foo.bar, and b.d depends on foo.bar, but the
 locations are different, it should be a compiler error.

 -Steve
Ok. My original concern was having to add pragmas to every D file. That is not a solution that will last long. But if you can add a compiler flag instead, I guess that's ok. -- /Jacob Carlborg
Aug 15 2011
prev sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Fri, 12 Aug 2011 15:49:30 +0200, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Fri, 12 Aug 2011 03:23:30 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 20:31, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 14:19:35 -0400, Jacob Carlborg <doob me.com> wrote:
 So how would that be different if the compiler drives everything? Say
 you begin with a few local files. The compiler then scans through them
 looking for URL imports. Then asks a tool to download the dependencies
 it found and starts all over again.
Forgive my compiler ignorance (not a compiler writer), but why does the compiler have to start over? It's no different than importing a file, is it?
Probably it's no different. Well what's different is it will first parse a couple of files, then downloads a few files, then parse the downloaded files and the fetch some more files and so on. To me this seems inefficient but since it's not implemented I don't know. It feels more efficient if it could download all needed files in one step. And then compile all files in the next step. I don't know what's possible with this DIP but to me it seems that the current suggestion will download individual files. This also seems inefficient, my solution deals with packages, i.e. zip files.
The extendability is in the url. For example, yes, http://server/file.d downloads a single file, and would be slow if every import needed to download an individual file. But some other protocol, or some other cue to the download tool (like if the download has a path component that ends in .tgz or something), then you download all the files at once, and on subsequent imports, the cached file is used (no download necessary). I think there's even a suggestion in there for passing directives back to the compiler like "file already downloaded, open it here..." I think unless a single file *is* the package, it's going to be foolish to download individual files. I also think a protocol which defines a central repository would be beneficial. So you only need one -I parameter to include a whole community of D code (like dsource).
 This is how my package manager will work. You have a local file
 containing all the direct dependencies needed to build your project.
 When invoked, the package manager tool fetches a file containing all
 packages and all their dependencies, from the repository. It then
 figures out all dependencies, both direct and indirect. Then it
 downloads all dependencies. It does all this before the compiler is
 even invoked once.

 Then, preferably, but optionally, it hands over to a build tool that
 builds everything. The build tool would need to invoke the compiler
 twice, first to get all the dependencies of all the local files in the
 project that is being built. Then it finally runs the compiler to
 build everything.
The benefit of using source is the source code is already written with an import statement, there is no need to write an external build file (all you need is command line that configures the compiler).
I don't see the big difference. I think the most project (not the smallest ones) will end up with a special file anyway, containing the pragmas declaring these imports.
Note that the pragmas are specific to that file only. So you don't have an import file which defines pragmas. This is to prevent conflicts between two files that declare the same package override.
 In addition to that as soon as you need to pass flags to the compiler  
 you will most likely to put that in a file of some kind. In that case  
 you can just as easily put them in a build script and use a build tool.
a batch files/shell script should suffice, no need for a "special" tool.
 Essentially, the import statements become your "build file". I think
 dsss worked like this, but I don't remember completely.
Yes, this is similar how DSSS worked. The difference is that you didn't need a pragma to link a package to an URL you just wrote the import declarations as you do now.
IIRC, dsss still had a global config file that defined where to import things from. The DIP defines that -I switches can also define internet resources along with pragmas, so sticking those in the dmd.conf would probably be the equivalent.
 One problem I think DSSS has, is, as far as I know, it can't handle top  
 level packages with same name. Or at least not in any good way. If you  
 go with the Java package naming scheme and name your top level package  
 after your domain, example:

 module com.foo.bar;
 module com.foo.foobar;

 And another project does the same:

 module com.abc.defg;

 Then these two projects will both end up in the "com" folder. Not very  
 good in my opinion. In my solution every package has a name independent  
 of the packages it contains an all packages are placed in a folder  
 named after the package, including the version number.
These all seem like implementation details. I don't care how the tool caches the files.
 My ideal solution, no matter how it's implemented is, I get a file
 blah.d, and I do:

 xyz blah.d

 and xyz handles all the dirty work of figuring out what to build along
 with blah.d as well as where to get those resources. Whether xyz ==  
 dmd,
 I don't know. It sure sounds like it could be...


 -Steve
Yeah, I would like that too. But as I said above, as soon as you need compiler flags you need an additional file. With a built tool it can then be just: "$ build"
Or instructions on the web site "use 'dmd -O -inline -release -version=SpecificVersion project.d' to compile" Or build.sh (build.bat) Note that dcollections has no makefile, everything is built from shell scripts. I almost never have to edit the build file, because the line's like: dmd -lib -O -release -inline dcollections/*.d dcollections/model/*.d Any new files get included automatically. And it takes a second to build, so who cares if you rebuild every file every time? Interestingly, libraries would still need to specify all the files since they may not import eachother :) I don't know if there's a "good" solution that isn't too coarse for that. All of this discussion is good to determine the viability, and clarify some misinterpretations of DIP11, but I think unless someone steps up and tries to implement it, it's a moot conversation. I certainly don't have the time or knowledge to implement it. So is there anyone who is interested, or has tried (to re-ask the original question)? -Steve
Well, I would give it a try and implement a prototype, given that we can sort out how to handle the actual building of remote sources. As long as we are only talking about imports this remains not so useful. Some approaches I could think of:

I. Make 'dmd mysource.d acme.a=http://acme.org/a -Iacme.b=http://acme.org/b' mean: build every source that you download from acme.a.

II. For every import that is actually a source file (*.d), let the compiler decide if linking will be needed; if so, build an object for that module.

III. Specify a good naming scheme for distributing binary libraries.

Actually I only like the second solution. So instead of 'dmd a.d b.d' one would write 'dmd -build-deps a.d' where a imports b.

martin
Aug 12 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Steven Schveighoffer" <schveiguy yahoo.com> wrote in message 
news:op.vz166yaieav7ka localhost.localdomain...
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley 
 <wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc.  To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
That's *only* true if you go along with DIP11's misguided file-oriented approach. With a real package manager, none of that is needed. Your app just says "I need packages X, Y and Z." And X, Y and Z do the same for their requirements. This is all trivial metadata. Emphasis on *trivial*.

So, before DMD is ever invoked at all, before one line of the source is ever even read, the package manager has made sure that *everything* is *already* right there. No need to go off on some goofy half-cocked "compile/download/compile/download" dance.

So DMD *never* needs to be invoked more than twice. Once to get the deps, once to compile. Better yet, if DMD gets the switch --compile-everything-dammit-not-just-the-explicit-files-from-the-command-line, then DMD never needs to be invoked more than *once*: Once to figure out the deps *while* being intelligent enough to actually compile all of them.
 And there is no parsing of the output data,
Parsing the .deps file is extremely simple. RDMD does it with one regex. Personally, I think even that is overkill. Better yet, with a switch to make DMD incorporate RDMD's --build-only functionality, there is *still* no parsing of output data. So all in all, there is *nothing* that DIP11 does that can't be done *better* by other (more typical) means.
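For the curious, a rough sketch of that kind of one-pass parse (this is not RDMD's actual code; it assumes -deps output lines shaped roughly like 'app (app.d) : private : std.stdio (/path/to/std/stdio.d)'):

---
// Collect every file path mentioned in a dmd-generated .deps file.
import std.regex, std.stdio;

void main()
{
    bool[string] files;
    auto re = regex(`\(([^)]+)\)`);          // grab every "(path)" group
    foreach (line; File("app.deps").byLine)
        foreach (m; matchAll(line, re))
            files[m[1].idup] = true;
    foreach (f; files.byKey)
        writeln(f);
}
---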
Aug 11 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:j219ck$3i0$1 digitalmars.com...
 "Steven Schveighoffer" <schveiguy yahoo.com> wrote in message 
 news:op.vz166yaieav7ka localhost.localdomain...
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley 
 <wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc.  To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
That's *only* true if you go along with DIP11's misguided file-oriented approach. With a real package manager, none of that is needed. Your app just says "I need packages X, Y and Z." And X, Y and Z do the same for their requirements. This is all trivial metadata. Emphasis on *trivial*. So, before DMD is ever invoked at all, before one line of the source is ever even read, the package manager has made sure that *everything* is *already* right there. No need to go off on some goofy half-cocked "compile/download/compile/download" dance. So DMD *never* needs to be invoked more than twice. Once to get the deps, once to compile. Better yet, if DMD gets the switch --compile-everything-dammit-not-just-the-explicit-files-from-the-command-line, then DMD never needs to be invoked more than *once*: Once to figure out the deps *while* being intelligent enough to actually compile all of them.
 And there is no parsing of the output data,
Parsing the .deps file is extremely simple. RDMD does it with one regex. Personally, I think even that is overkill. Better yet, with a switch to make DMD incorporate RDMD's --build-only functionality, there is *still* no parsing of output data. So all in all, there is *nothing* that DIP11 does that can't be done *better* by other (more typical) means.
In other words, DIP11 just reinvents the wheel, poorly.
Aug 11 2011
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 15:09:15 -0400, Nick Sabalausky <a a.a> wrote:

 "Steven Schveighoffer" <schveiguy yahoo.com> wrote in message
 news:op.vz166yaieav7ka localhost.localdomain...
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley
 <wiley.andrew.j gmail.com> wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc.  To a certain extent, the wrapping build tool has  
 to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
That's *only* true if you go along with DIP11's misguided file-oriented approach. With a real package manager, none of that is needed. Your app just says "I need packages X, Y and Z."
It already does this:

import std.stdio;

says "I need the package std". The purpose of the DIP is to try and reuse this information without having to have an extraneous file that says "I depend on package std."
 And X, Y and Z do the same for their requirements. This is all trivial
 metadata. Emphasis on *trivial*. So, before DMD is ever invoked at all,
 before one line of the source is ever even read, the package manager has
 made sure that *everything* is *already* right there. No need to go off
 on
 some goofy half-cocked "compile/download/compile/download" dance.
With the DIP, I envision encoding the package in the URL. For example, you do:

-Ispiffy=dpm://spiffy.com/latest.dpm

And then blah.d imports spiffy.neat. The download tool is given the url dpm://spiffy.com/latest.dpm/neat.d and, understanding the d package module (dpm) protocol, downloads the package that contains neat.d (along with a compiled lib) and then pipes the contents of neat.d to the compiler, which treats it just like a file it just read from the filesystem. Then any other file needed from that package is simply piped from the already-downloaded package.

It doesn't have to be one-file based, you just need to have a way to map physical packages to module packages.

The DIP doesn't explain all this except for the sections titled "Packaging" and "protocols"
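Spelled out as a tiny example, using the names from the scenario above (spiffy.neat is of course hypothetical):

---
// blah.d, compiled with:  dmd blah.d -Ispiffy=dpm://spiffy.com/latest.dpm
module blah;
import spiffy.neat;   // no local spiffy/neat.d, so the compiler hands
                      // dpm://spiffy.com/latest.dpm/neat.d to the download tool,
                      // which fetches the package once and pipes neat.d
                      // (and later any sibling module) back to the compiler

void main()
{
    // use whatever spiffy.neat provides
}
---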
 So DMD *never* needs to be invoked more than twice. Once to get the deps,
 once to compile. Better yet, if DMD gets the
 switch  
 --compile-everything-dammit-not-just-the-explicit-files-from-the-command-line,
 then DMD never needs to be invoked more than *once*: Once to figure out  
 the
 deps *while* being intelligent enough to actually compile all of them.
This would be beneficial to DIP11 as well, since downloading and importing the file is only half the battle. -Steve
Aug 11 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 21:36, Steven Schveighoffer wrote:
be one-file based, you just need to have a way to map
 physical packages to module packages.

 The DIP doesn't explain all this except for the sections titled
 "Packaging" and "protocols"
The DIP needs to explain this, isn't that the whole point? -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 05:04:13 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 21:36, Steven Schveighoffer wrote:
 be one-file based, you just need to have a way to map
 physical packages to module packages.

 The DIP doesn't explain all this except for the sections titled
 "Packaging" and "protocols"
The DIP needs to explain this, is that the whole point?
The DIP focuses on the compiler changes needed + gives rudimentary functionality based on a simple protocol (http). I think the scope of work for creating the tool which does full-fledged packaging is way too much for the DIP, and probably would be counterproductive to create a spec for it right now.

The point of those two sections is to remind you not to think in terms of "this only downloads individual files via http", but to think about the future functionality that such a tool could provide once the compiler has been hooked. The details of the tool are purposely left open for design.

-Steve
Aug 12 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-08-12 15:56, Steven Schveighoffer wrote:
 On Fri, 12 Aug 2011 05:04:13 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 21:36, Steven Schveighoffer wrote:
 be one-file based, you just need to have a way to map
 physical packages to module packages.

 The DIP doesn't explain all this except for the sections titled
 "Packaging" and "protocols"
The DIP needs to explain this, is that the whole point?
The DIP focuses on the compiler changes needed + gives rudimentary functionality based on a simple protocol (http). I think the scope of work for creating the tool which does full-fledged packaging is way too much for the DIP, and probably would be counterproductive to create a spec for it right now. The point of those two sections is to remind you not to think in terms of "this only downloads individual files via http", but to think about the future functionality that such a tool could provide once the compiler has been hooked. The details of the tool are purposely left open for design. -Steve
Ok, I see. -- /Jacob Carlborg
Aug 12 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/11/11 1:09 PM, Nick Sabalausky wrote:
 "Steven Schveighoffer"<schveiguy yahoo.com>  wrote in message
 news:op.vz166yaieav7ka localhost.localdomain...
 On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley
 <wiley.andrew.j gmail.com>  wrote:

 On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:
 I think the benefit of this approach over a build tool which wraps the
 compiler is, the compiler already has the information needed for
 dependencies, etc.  To a certain extent, the wrapping build tool has to
 re-implement some of the compiler pieces.
This last bit doesn't really come into play here because you can already ask the compiler to output all that information. and easily use it in a separate program. That much is already done.
Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler 3 times just to make sure you have all the files, then run it a fourth time to compile.
That's *only* true if you go along with DIP11's misguided file-oriented approach. With a real package manager, none of that is needed. Your app just says "I need packages X, Y and Z." And X, Y and Z do the same for their requirements. This is all trivial metadata. Emphasis on *trivial*. So, before DMD is ever invoked at all, before one line of the source is ever even read, the package manager has made sure that *everything* is *already* right there. No need to go off on some goofy half-cocked "compile/download/compile/download" dance. So DMD *never* needs to be invoked more than twice. Once to get the deps, once to compile. Better yet, if DMD gets the switch --compile-everything-dammit-not-just-the-explicit-files-from-the-command-line, then DMD never needs to be invoked more than *once*: Once to figure out the deps *while* being intelligent enough to actually compile all of them.
It's difficult to get all dependencies when not all sources have been downloaded yet.

Andrei
Aug 11 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have been yet 
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
Aug 11 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 11 Aug 2011 17:20:04 -0400, Nick Sabalausky <a a.a> wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have been  
 yet
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
With either style, you need to download a package in order to determine if you need to download other packages (package a may depend on package b even though your project does not depend on package b). The DIP11 version does this JIT, whereas your version does it before compilation. It's not really any different. -Steve
Aug 11 2011
next sibling parent reply kennytm <kennytm gmail.com> writes:
"Steven Schveighoffer" <schveiguy yahoo.com> wrote:
 On Thu, 11 Aug 2011 17:20:04 -0400, Nick Sabalausky <a a.a> wrote:
 
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in
 message
 news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have
 been yet
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
With either style, you need to download a package in order to determine if you need to download other packages (package a may depend on package b even though your project does not depend on package b). The DIP11 version does this JIT, whereas your version does it before compilation. It's not really any different. -Steve
In Debian's apt, there will be a central index that records all dependencies for packages in the repository. So the client only needs to synchronize that index file regularly. The system will know package A depends on B which depends on C and D and download all 4 packages.

That said, since you need to download the packages anyway, having a central index doesn't reduce the bytes you need to transfer and parse if DIP11 doesn't support updating.
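A toy illustration of why the index alone is enough to work out the full download set (package names and graph made up):

---
// Resolve the transitive dependency closure from a central index.
import std.stdio;

string[][string] index;   // package -> direct dependencies

string[] resolve(string pkg, ref bool[string] seen)
{
    string[] result;
    if (pkg in seen) return result;
    seen[pkg] = true;
    result ~= pkg;
    foreach (dep; index.get(pkg, null))
        result ~= resolve(dep, seen);
    return result;
}

void main()
{
    index = ["A": ["B"], "B": ["C", "D"]];
    bool[string] seen;
    writeln(resolve("A", seen));   // ["A", "B", "C", "D"] -- all 4 packages
}
---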
Aug 11 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-08-12 08:17, kennytm wrote:
 "Steven Schveighoffer"<schveiguy yahoo.com>  wrote:
 On Thu, 11 Aug 2011 17:20:04 -0400, Nick Sabalausky<a a.a>  wrote:

 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in
 message
 news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have
 been yet
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
With either style, you need to download a package in order to determine if you need to download other packages (package a may depend on package b even though your project does not depend on package b). The DIP11 version does this JIT, whereas your version does it before compilation. It's not really any different. -Steve
In Debian's apt, there will be a central index that records all dependencies for packages in the repository. So the client only needs to synchronize that index file regularly. The system will know package A depends on B which depends on C and D and download all 4 packages.
Exactly
 That said, since you need to download the packages anyway, having a
 central index doesn't reduce the bytes you need to transfer and parse if
 DIP11 doesn't support updating.
No, but I'm guessing it's more efficient to download zip files instead of individual D files. -- /Jacob Carlborg
Aug 12 2011
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 02:17:36 -0400, kennytm <kennytm gmail.com> wrote:

 "Steven Schveighoffer" <schveiguy yahoo.com> wrote:
 On Thu, 11 Aug 2011 17:20:04 -0400, Nick Sabalausky <a a.a> wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in
 message
 news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have
 been yet
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
With either style, you need to download a package in order to determine if you need to download other packages (package a may depend on package b even though your project does not depend on package b). The DIP11 version does this JIT, whereas your version does it before compilation. It's not really any different. -Steve
In Debian's apt, there will be a central index that records all dependencies for packages in the repository. So the client only needs to synchronize that index file regularly. The system will know package A depends on B which depends on C and D and download all 4 packages.
This could be done (in fact, the tool could have a url system that uses apt). But it's beyond the scope of the DIP, which is to define a way to hook the compiler for a downloading tool when a file is on the Internet.
 That said, since you need to download the packages anyway, having a
 central index doesn't reduce the bytes you need to transfer and parse if
 DIP11 doesn't support updating.
All the DIP does is provide a hook so the compiler can ask an external tool to download files/packages/whatever. It leaves open the protocol used by the tool, except that it should be in url form. The tool can implement the download any way it wants (individual files, packages, full-blown metadata-based centralized-indexed kitchen-sinked github or apt or whatever). -Steve
Aug 12 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-08-12 00:02, Steven Schveighoffer wrote:
 On Thu, 11 Aug 2011 17:20:04 -0400, Nick Sabalausky <a a.a> wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:j21g1a$ea4$1 digitalmars.com...
 It's difficult to get all dependencies when not all sources have been
 yet
 downloaded.
With DIP11, yes. With a traditional-style package manager, no.
With either style, you need to download a package in order to determine if you need to download other packages (package a may depend on package b even though your project does not depend on package b). The DIP11 version does this JIT, whereas your version does it before compilation. It's not really any different. -Steve
No, not if you have a single meta file with all the dependencies. -- /Jacob Carlborg
Aug 12 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-11 23:02, Andrei Alexandrescu wrote:
 It's difficult to get all dependencies when not all sources have been
 yet downloaded.

 Andrei
No, not when you have a single meta file containing all the dependencies of all packages. -- /Jacob Carlborg
Aug 12 2011
next sibling parent David Nadlinger <see klickverbot.at> writes:
On 8/12/11 11:05 AM, Jacob Carlborg wrote:
 On 2011-08-11 23:02, Andrei Alexandrescu wrote:
 It's difficult to get all dependencies when not all sources have been
 yet downloaded.

 Andrei
No, not when you have a single meta file containing all the dependencies of all packages.
Or a »spec« file along with each package containing the metadata, e.g. name, description, requirements, conflicts, …

David
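Sketched as data, just to make the idea concrete (the field names are invented, not Orbit's or anyone else's actual format):

---
// One possible shape for per-package metadata.
struct PackageSpec
{
    string name;             // e.g. "spiffy"
    string version_;         // e.g. "0.45"
    string description;
    string[] requirements;   // packages (and version ranges) this one needs
    string[] conflicts;      // packages it cannot be installed alongside
}
---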
Aug 12 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/12/11 3:05 AM, Jacob Carlborg wrote:
 On 2011-08-11 23:02, Andrei Alexandrescu wrote:
 It's difficult to get all dependencies when not all sources have been
 yet downloaded.

 Andrei
No, not when you have a single meta file containing all the dependencies of all packages.
I understand. I believe there is value in avoiding meta files. Andrei
Aug 12 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-08-12 16:11, Andrei Alexandrescu wrote:
 On 8/12/11 3:05 AM, Jacob Carlborg wrote:
 On 2011-08-11 23:02, Andrei Alexandrescu wrote:
 It's difficult to get all dependencies when not all sources have been
 yet downloaded.

 Andrei
No, not when you have a single meta file containing all the dependencies of all packages.
I understand. I believe there is value in avoiding meta files. Andrei
This meta file lives in the repository and is an implementation detail that the user never needs to know about. Whether this meta file can be created without meta files for the individual packages, I don't know. -- /Jacob Carlborg
Aug 12 2011
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
It really looks sound to me.

---
module myfile;
pragma(imppath, "dep=www.dpan.org/dep");
import dep.a;
---

remote file
---
module dep.a;

// link directives
pragma(libpath, "dep=www.dpan.org/dep");
pragma(lib, "dep");

// or alternatively some new pragma like this to cause linking the  
imported package
pragma(build, "dep")

// additional dependency
pragma(imppath, "dep2=www.dpan.org/dep2");
---

Versioning can be easily resolved by using urls like  
www.dpan.org/dep?version=0.45 or www.dpan.org/dep-greaterthan-0.45.
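For instance, reusing the (hypothetical) pragma from above:

---
module myfile;
pragma(imppath, "dep=www.dpan.org/dep?version=0.45");
import dep.a;
---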

Pro:
  - scales from local dependencies to VPN sharing to package websites to  
even dynamically generated source code
  - simple implementation compilerwise
Con:
  - doesn't help with mixed source builds except for prebuilt libraries


Some thoughts:
  - allowing packages/libs to be packed in zip files would be nice as it's  
getting closer to single file packages
  - remote locations would need an accompanying hash file per source or a  
checksum index to allow local caching
  - sorting out some security issues would be nice but I've never seen  
anything in other package managers

On Thu, 11 Aug 2011 09:49:56 +0200, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-11 09:41, Jonas Drewsen wrote:
 On 11/08/11 09.07, Jacob Carlborg wrote:
<snip>
 Yes I've noticed that. Seems very promising.

 What I do like about DIP11 is how seamless it would work. You just have
 to compile and stuff works.

 /Jonas
I think that DIP11 is too limited, for example, it doesn't deal with versions. Orbit combined with a build tool will be seamless as well. RDMD is a great tool but as soon as you need to add compiler flags or compile a library you need either some kind of script or a build tool. And in that case you can just go with the built tool and have it work on all platforms.
Aug 11 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 00:42, Martin Nowak wrote:
 It really looks sound to me.

 ---
 module myfile;
 pragma(imppath, "dep=www.dpan.org/dep");
 import dep.a;
 ---

 remote file
 ---
 module dep.a;

 // link directives
 pragma(libpath, "dep=www.dpan.org/dep");
 pragma(lib, "dep");

 // or alternatively some new pragma like this to cause linking the
 imported package
 pragma(build, "dep")

 // additional dependency
 pragma(imppath, "dep2=www.dpan.org/dep2");
 ---

 Versioning can be easily resolved by using urls like
 www.dpan.org/dep?version=0.45 or www.dpan.org/dep-greaterthan-0.45.

 Pro:
 - scales from local dependencies to VPN sharing to package websites to
 even dynamically generated source code
 - simple implementation compilerwise
We will be very dependent on Walter or anyone else who knows the compiler, and as far as I know there are quite few of those. I'm not sure if anything _is_ simple in the compiler. -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 05:12:47 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 00:42, Martin Nowak wrote:
 It really looks sound to me.

 ---
 module myfile;
 pragma(imppath, "dep=www.dpan.org/dep");
 import dep.a;
 ---

 remote file
 ---
 module dep.a;

 // link directives
 pragma(libpath, "dep=www.dpan.org/dep");
 pragma(lib, "dep");

 // or alternatively some new pragma like this to cause linking the
 imported package
 pragma(build, "dep")

 // additional dependency
 pragma(imppath, "dep2=www.dpan.org/dep2");
 ---

 Versioning can be easily resolved by using urls like
 www.dpan.org/dep?version=0.45 or www.dpan.org/dep-greaterthan-0.45.

 Pro:
 - scales from local dependencies to VPN sharing to package websites to
 even dynamically generated source code
 - simple implementation compilerwise
We will be very dependent on Walter or anyone else that knows the compiler, which, as far as I know, are quite few. I'm not sure if anything _is_ simple in the compiler.
This is not true. The compiler implements *hooks* for a download tool. The download tool will be a separate process which turns urls (generated by the compiler) into source files. Once the hooks are implemented, the tool is independent, and we would be idiotic not to implement it in D.

I think you may not have read the DIP fully, or it is not clear enough.

-Steve
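To make "turns urls into source files" concrete, here is a toy sketch of such a tool; the hand-off protocol (url as a command-line argument in, source text on stdout) is just one possibility, not something the DIP pins down:

---
// Toy download tool the compiler could shell out to, e.g.:  dget <url>
import std.net.curl : get;
import std.stdio : write;

void main(string[] args)
{
    auto url = args[1];    // e.g. http://acme.org/acme/b.d
    write(get(url));       // pipe the fetched source back to the compiler
}
---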
Aug 12 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 15:53, Steven Schveighoffer wrote:
 This is not true. The compiler implements *hooks* for a download tool.
 The download tool will be a separate process which turns urls (generated
 by the compiler) into source files. Once the hooks are implemented, the
 tool is independent, and we would be idiotic not to implement it in D.

 I think you may not have read the DIP fully, or it is not clear enough.

 -Steve
I've read the whole DIP and I know there is an external tool that downloads the files. I also know that DMD doesn't have these hooks; I rest my case. -- /Jacob Carlborg
Aug 12 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 12:30:42 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:53, Steven Schveighoffer wrote:
 This is not true. The compiler implements *hooks* for a download tool.
 The download tool will be a separate process which turns urls (generated
 by the compiler) into source files. Once the hooks are implemented, the
 tool is independent, and we would be idiotic not to implement it in D.

 I think you may not have read the DIP fully, or it is not clear enough.

 -Steve
I've read the whole DIP and I know there is an external tool that downloads the files. I also know that DMD doesn't have these hooks, I rest my case.
I thought you meant we would be dependent on Walter to write the *download part* of the DIP, not the compiler hooks. The hooks should be pretty simple I would think. But in any case, you should have a look at github and all the new people who are working on pull requests for the compiler. The group of dmd code contributors has significantly grown. -Steve
Aug 12 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-12 19:08, Steven Schveighoffer wrote:
 On Fri, 12 Aug 2011 12:30:42 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:53, Steven Schveighoffer wrote:
 This is not true. The compiler implements *hooks* for a download tool.
 The download tool will be a separate process which turns urls (generated
 by the compiler) into source files. Once the hooks are implemented, the
 tool is independent, and we would be idiotic not to implement it in D.

 I think you may not have read the DIP fully, or it is not clear enough.

 -Steve
I've read the whole DIP and I know there is an external tool that downloads the files. I also know that DMD doesn't have these hooks, I rest my case.
I thought you meant we would be dependent on Walter to write the *download part* of the DIP, not the compiler hooks. The hooks should be pretty simple I would think. But in any case, you should have a look at github and all the new people who are working on pull requests for the compiler. The group of dmd code contributors has significantly grown. -Steve
Yeah, that's true and it's a good thing. I can tell you this: I've looked at the DMD source code and tried to make modifications, but I couldn't understand much of the code at all and failed with even the simplest things. -- /Jacob Carlborg
Aug 12 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 12 Aug 2011 14:31:28 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 19:08, Steven Schveighoffer wrote:
 On Fri, 12 Aug 2011 12:30:42 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2011-08-12 15:53, Steven Schveighoffer wrote:
 This is not true. The compiler implements *hooks* for a download tool.
 The download tool will be a separate process which turns urls  
 (generated
 by the compiler) into source files. Once the hooks are implemented,  
 the
 tool is independent, and we would be idiotic not to implement it in D.

 I think you may not have read the DIP fully, or it is not clear  
 enough.

 -Steve
I've read the whole DIP and I know there is an external tool that downloads the files. I also know that DMD doesn't have these hooks, I rest my case.
I thought you meant we would be dependent on Walter to write the *download part* of the DIP, not the compiler hooks. The hooks should be pretty simple I would think. But in any case, you should have a look at github and all the new people who are working on pull requests for the compiler. The group of dmd code contributors has significantly grown. -Steve
Yeah, that's true and it's a good thing. I can tell you this, I've looked at the DMD source code and tried to do modifications but I couldn't understand much of the code at all and failed with the most simple things.
/me in same boat as you -Steve
Aug 12 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jonas Drewsen" <jdrewsen nospam.com> wrote in message 
news:j20139$2sev$1 digitalmars.com...
 On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works.
I really see it as solving the wrong problem the wrong way. DIP11 tries to solve two main things:

1. Automatic downloading of external dependencies.

2. Eliminate the need for a separate invocation of DMD to find all needed D files (ie, efficiency).

#1 is better solved by a proper package manager (and by that I'm referring to the "package manager" concept of "packages", not D's module system). DIP11 handles this at the individual source-file level, which I believe is wrong and causes plenty of problems. Plus, DIP11 is very, very limited compared to a traditional-style package manager (and pretty much has to be since it works at the individual source-file level) and thus encourages very bad things like hardcoding exactly one possible location from which to retrieve a given file.

#2 is better solved by incorporating RDMD's "--build-only" functionality into the compiler (as an optional flag). This has two benefits over DIP11: A, It just works, nobody ever has to deal with DIP11's weird "DMD wants X to be retrieved", umm, "callback" mechanism. B, It's not tied to DIP11's broken package management design.

Heck, DIP11 doesn't even solve #2 at all: It may download the needed D files, but it won't compile them. A separate tool *still* has to pass all the D files to DMD. And to do that it needs to know what D files are needed. And to do that, it needs to either ask DMD or be told by DMD, and *then* send all the D files to DMD as a separate invocation.

The motivation for DIP11 was clearly to get a D package-retrieving system up and running with minimal effort. And that would be great if it were a good proposal. But DIP11 just smacks of corner-cutting.
Aug 11 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-08-11 17:41, Nick Sabalausky wrote:
 "Jonas Drewsen"<jdrewsen nospam.com>  wrote in message
 news:j20139$2sev$1 digitalmars.com...
 On 11/08/11 09.07, Jacob Carlborg wrote:
 On 2011-08-10 21:55, jdrewsen wrote:
 What is the status of DIP11

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Has anyone started implementing it? Has it been rejected?

 /Jonas
Not sure, personally I don't like it. Instead I'm working on a more traditional package manager called Orbit: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Yes I've noticed that. Seems very promising. What I do like about DIP11 is how seamless it would work. You just have to compile and stuff works.
I really see it as solving the wrong problem the wrong way. DIP11 tries to solve two main things: 1. Automatic downloading of external dependencies. 2. Eliminate the need for a separate invocation of DMD to find all needed D files (ie, efficiency). #1 is better solved by a proper package manager (and by that I'm referring to the "package manager" concept of "packages", not D's module system). DIP11 handles this at the individual source-file level, which I believe is wrong and causes plenty of problems. Plus, DIP11 is very, very limited compared to a traditional-style package manager (and pretty much has to be since it works at the individual source-file level) and thus encourages very bad things like hardcoding exactly one possible location from which to retrieve a given file. #2 is better solved by incorporating RDMD's "--build-only" functionality into the compiler (as an optional flag). This has two benefits over DIP11: A, It just works, nobody ever has to deal with DIP11's weird "DMD wants X to be retrieved", umm, "callback" mechanism. B, It's not tied to DIP11's broken package management design. Heck, DIP11 doesn't even solve #2 at all: It may download the needed D files, but it won't compile them. A separate tool *still* has to pass all the D files to DMD. And to do that it needs to know what D files are needed. And to do that, it needs to either ask DMD or be told by DMD, and *then* send all the D files to DMD as a separate invocation. The motivation for DIP11 was clearly to get a D package-retrieving system up and running with minimal effort. And that would be great if it were a good proposal. But DIP11 just smacks of corner-cutting.
I completely agree with this. -- /Jacob Carlborg
Aug 11 2011