
digitalmars.D - DIP11: Automatic downloading of libraries

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei
Jun 14 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea:

1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python and Perl (not sure about Ruby) all have a central website which stores package metadata.

2) It requires that the raw source code be available via HTTP. Not all code hosting services allow this. GitHub will redirect all HTTP requests to HTTPS.

3) It only solves the problem for D modules, but not any other possible dependencies.

I understand that this is a very urgent problem, but my opinion is that this half-arsed solution will only delay implementing and cause migration problems to a real solution, which should be able to handle svn/hg/git checkout, proper packages with custom build scripts, versioning, miscellaneous dependencies, publishing, etc.

--
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jun 14 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 1) It hard-codes URLs in source code.
This one is fairly easy to handle - use an http "Moved Permanently" redirect to the new location at the old url.
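For example, if the old host runs Apache, a single mod_alias rule in .htaccess is enough (the paths and hostname here are made up):

  # old location answers with 301 Moved Permanently
  Redirect permanent /d/mylib.d http://new.example.org/d/mylib.d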
Jun 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 16:39, schrieb Adam D. Ruppe:
 1) It hard-codes URLs in source code.
This one is fairly easy to handle - use an http "Moved Permanently" redirect to the new location at the old url.
Is this possible with plain webspace, i.e. when you don't control the server? And it's certainly not possible when you're using sourceforge, github etc and they decide to change their URLs without setting redirects.
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Daniel Gibson:
 Is this possible with plain webspace, i.e. when you don't control
 the server?
If you have your own domain name, you can certainly set this up. But if your host is fully managed, probably not. A fix is to set up a redirection server people can share. This is kinda like a central repository, but you wouldn't have to upload your files directly. You might just put up a url (and other metadata?) to point people to the final location. However, I'd prefer to have simple files. Downloading from git is imo a mistake - those files are probably in development... meaning they are mutable. If the files are mutable, it includes a lot of pain for versioning and caching.
Jun 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 17:31, schrieb Adam D. Ruppe:
 Daniel Gibson:
 Is this possible with plain webspace, i.e. when you don't control
 the server?
If you have your own domain name, you can certainly set this up. But if your host is fully managed, probably not. A fix is to set up a redirection server people can share. This is kinda like a central repository, but you wouldn't have to upload your files directly. You might just put up a url (and other metadata?) to point people to the final location. However, I'd prefer to have simple files. Downloading from git is imo a mistake - those files are probably in development... meaning they are mutable. If the files are mutable, it includes a lot of pain for versioning and caching.
There could be stable branches in the git repositories for exactly this purpose. Furthermore it'd be nice as an additional feature - some people may always want to use the latest bleeding edge version from git/svn/whatever.
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 10:31 AM, Daniel Gibson wrote:
 Am 14.06.2011 17:31, schrieb Adam D. Ruppe:
 Daniel Gibson:
 Is this possible with plain webspace, i.e. when you don't control
 the server?
If you have your own domain name, you can certainly set this up. But if your host is fully managed, probably not. A fix is to set up a redirection server people can share. This is kinda like a central repository, but you wouldn't have to upload your files directly. You might just put up a url (and other metadata?) to point people to the final location. However, I'd prefer to have simple files. Downloading from git is imo a mistake - those files are probably in development... meaning they are mutable. If the files are mutable, it includes a lot of pain for versioning and caching.
There could be stable branches in the git repositories for exactly this purpose. Furthermore it's be nice as an additional feature - some people may always want to use the latest bleeding edge version from git/svn/whatever.
I noticed that many online git/svn repos have bridges that serve raw files via http directly. No need for using the git/svn tool on the client. Andrei
Jun 14 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 17:33, schrieb Andrei Alexandrescu:
 On 6/14/11 10:31 AM, Daniel Gibson wrote:
 Am 14.06.2011 17:31, schrieb Adam D. Ruppe:
 Daniel Gibson:
 Is this possible with plain webspace, i.e. when you don't control
 the server?
If you have your own domain name, you can certainly set this up. But if your host is fully managed, probably not. A fix is to set up a redirection server people can share. This is kinda like a central repository, but you wouldn't have to upload your files directly. You might just put up a url (and other metadata?) to point people to the final location. However, I'd prefer to have simple files. Downloading from git is imo a mistake - those files are probably in development... meaning they are mutable. If the files are mutable, it includes a lot of pain for versioning and caching.
There could be stable branches in the git repositories for exactly this purpose. Furthermore it's be nice as an additional feature - some people may always want to use the latest bleeding edge version from git/svn/whatever.
I noticed that many online git/svn repos have bridges that serve raw files via http directly. No need for using the git/svn tool on the client. Andrei
Maybe "many" do this, but i.e. custom SVN repos often don't to it. I like using SVN with trac - trac does have an interface for displaying the code, you can even download the raw files, but it's rather slow (due to trac overhead) and usually shouldn't be used to download the source. Cheers, - Daniel
Jun 14 2011
prev sibling next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 16:09, schrieb Vladimir Panteleev:
 On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea: 1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python, Perl, not sure about Ruby all have a central website which stores package metadata. 2) It requires that the raw source code be available via HTTP. Not all code hosting services allow this. GitHub will redirect all HTTP requests to HTTPS.
It should support HTTPS anyway, to prevent DNS spoofing attacks and such (i.e. most attacks that don't need your own machine to be compromised). But maybe additional support for signing the code would be even better, to be able to detect compromised code on the server.
 3) It only solves the problem for D modules, but not any other possible
 dependencies.
 
 I understand that this is a very urgent problem, but my opinion is that
 this half-arsed solution will only delay implementing and cause
 migration problems to a real solution, which should be able to handle
 svn/hg/git checkout, proper packages with custom build scripts,
 versioning, miscellaneous dependencies, publishing, etc.
 
I personally think that a standard build tool that does this (and possibly also ships with DMD) would be better than support directly in the language. In particular, the case where the project's website changes could be handled more easily by adjusting the URL in a config file instead of changing your code. Cheers, - Daniel
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 9:35 AM, Daniel Gibson wrote:
 Am 14.06.2011 16:09, schrieb Vladimir Panteleev:
 On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org>  wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea: 1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python, Perl, not sure about Ruby all have a central website which stores package metadata. 2) It requires that the raw source code be available via HTTP. Not all code hosting services allow this. GitHub will redirect all HTTP requests to HTTPS.
It should support HTTPS anyway, to prevent DNS spoofing attacks and such (i.e. most attacks that don't need your own machine to be compromised). But maybe additional support for signing the code would be even better, to be able to detect compromised code on the server.
Yah, I thought of an optional SHA1 parameter:

pragma(liburl, mylib, "myurl", "mysha1");
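For illustration, the check a download tool could run against such a hash is only a few lines of D (a sketch; std.digest.sha is today's Phobos module and postdates this thread):

  import std.digest : LetterCase, toHexString;
  import std.digest.sha : sha1Of;
  import std.file : read;

  // Hash the fetched file and compare it to the hex digest from the pragma.
  bool verifyDownload(string path, string expectedSha1)
  {
      auto bytes = cast(const(ubyte)[]) read(path);
      auto hex = toHexString!(LetterCase.lower)(sha1Of(bytes)); // char[40]
      return hex[] == expectedSha1;
  }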
 3) It only solves the problem for D modules, but not any other possible
 dependencies.

 I understand that this is a very urgent problem, but my opinion is that
 this half-arsed solution will only delay implementing and cause
 migration problems to a real solution, which should be able to handle
 svn/hg/git checkout, proper packages with custom build scripts,
 versioning, miscellaneous dependencies, publishing, etc.
I personally think that a standard build tool that does this (and possibly also ships with DMD) would be better than support directly in the language. Especially the case that the projects website changes could be handled more easily by adjusting the URL in a config file instead of changing your code.
This is still possible with D config files that contain pragmas. Andrei
Jun 14 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 16:49, schrieb Andrei Alexandrescu:
 On 6/14/11 9:35 AM, Daniel Gibson wrote:
 Am 14.06.2011 16:09, schrieb Vladimir Panteleev:
 On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org>  wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea: 1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python, Perl, not sure about Ruby all have a central website which stores package metadata. 2) It requires that the raw source code be available via HTTP. Not all code hosting services allow this. GitHub will redirect all HTTP requests to HTTPS.
It should support HTTPS anyway, to prevent DNS spoofing attacks and such (i.e. most attacks that don't need your own machine to be compromised). But maybe additional support for signing the code would be even better, to be able to detect compromised code on the server.
Yah, I thought of a SHA1 optional parameter: pragma(liburl, mylib, "myurl", "mysha1");
With the SHA1 containing mylib's hash? That doesn't help much... if you don't expect the file to change at all on the server, you can just bundle it with your source. One great advantage of being able to download code you're depending on is that the author can bugfix it and you can easily update, isn't it? I thought about letting the author sign his code with GPG, e.g. having a file on the server with signed SHA1 hashes of the files or something like that.
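As a sketch of that idea with stock GnuPG and coreutils (the file names are made up):

  # author, before publishing:
  sha1sum *.d > SHA1SUMS
  gpg --armor --detach-sign SHA1SUMS     # produces SHA1SUMS.asc

  # user (or the download tool), after fetching:
  gpg --verify SHA1SUMS.asc SHA1SUMS     # check the author's signature
  sha1sum -c SHA1SUMS                    # check the files against the signed hashes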
 3) It only solves the problem for D modules, but not any other possible
 dependencies.

 I understand that this is a very urgent problem, but my opinion is that
 this half-arsed solution will only delay implementing and cause
 migration problems to a real solution, which should be able to handle
 svn/hg/git checkout, proper packages with custom build scripts,
 versioning, miscellaneous dependencies, publishing, etc.
I personally think that a standard build tool that does this (and possibly also ships with DMD) would be better than support directly in the language. Especially the case that the projects website changes could be handled more easily by adjusting the URL in a config file instead of changing your code.
This is still possible with D config files that contain pragmas.
OK. Still, I prefer a build-tool solution. I think that would be easier to expand to support git etc.
 
 Andrei
Cheers, - Daniel
Jun 14 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 9:09 AM, Vladimir Panteleev wrote:
 On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea: 1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python, Perl, not sure about Ruby all have a central website which stores package metadata.
The way I see it, transitivity should take care of this. Consider:

// File http://dsource.org/libs/mysql.d
pragma(liburl, mysql, "http://some.site.org/d/mysql.d")
import mysql;

// User file
pragma(liburl, dsource, "http://dsource.org/libs/");
import dsource.mysql;

With transitivity you can arrange central repos via an additional level of indirection.
 2) It requires that the raw source code be available via HTTP. Not all
 code hosting services allow this. GitHub will redirect all HTTP requests
 to HTTPS.
URL includes https and ftp.
 3) It only solves the problem for D modules, but not any other possible
 dependencies.
Correct. So we should extend this towards handling other scenarios too.
 I understand that this is a very urgent problem, but my opinion is that
 this half-arsed solution will only delay implementing and cause
 migration problems to a real solution, which should be able to handle
 svn/hg/git checkout, proper packages with custom build scripts,
 versioning, miscellaneous dependencies, publishing, etc.
Finding weakness in a proposal is easy. The more difficult thing to do is to find ways to improve it or propose alternatives. Andrei
Jun 14 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Finding weakness in a proposal is easy. The more difficult thing to do  
 is to find ways to improve it or propose alternatives.
I think the only solid alternative is to stop trying to reinvent the wheel, and "start up our photocopiers" (copy CPAN/Gems/PECL). -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jun 14 2011
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 11:34 AM, Vladimir Panteleev wrote:
 On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 Finding weakness in a proposal is easy. The more difficult thing to do
 is to find ways to improve it or propose alternatives.
I think the only solid alternative is to stop trying to reinvent the wheel, and "start up our photocopiers" (copy CPAN/Gems/PECL).
Agreed. I've used CPAN in the past, and just read its basics again. I don't think the proposal is majorly diverging from it. Andrei
Jun 14 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-14 18:34, Vladimir Panteleev wrote:
 On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 Finding weakness in a proposal is easy. The more difficult thing to do
 is to find ways to improve it or propose alternatives.
I think the only solid alternative is to stop trying to reinvent the wheel, and "start up our photocopiers" (copy CPAN/Gems/PECL).
That's what I'm doing (copying Rubygems), see https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D -- /Jacob Carlborg
Jun 17 2011
parent David Nadlinger <see klickverbot.at> writes:
On 6/17/11 6:18 PM, Jacob Carlborg wrote:
 On 2011-06-14 18:34, Vladimir Panteleev wrote:
 On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 Finding weakness in a proposal is easy. The more difficult thing to do
 is to find ways to improve it or propose alternatives.
I think the only solid alternative is to stop trying to reinvent the wheel, and "start up our photocopiers" (copy CPAN/Gems/PECL).
That's what I'm doing (copying Rubygems), see https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
Oh, sorry, I just wanted to fix the typo in the headline, but broke the link as well. Now at: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D David
Jun 17 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-14 16:09, Vladimir Panteleev wrote:
 On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Why this is a bad idea: 1) It hard-codes URLs in source code. Projects often move to other code-hosting services. PHP, Python, Perl, not sure about Ruby all have a central website which stores package metadata. 2) It requires that the raw source code be available via HTTP. Not all code hosting services allow this. GitHub will redirect all HTTP requests to HTTPS. 3) It only solves the problem for D modules, but not any other possible dependencies. I understand that this is a very urgent problem, but my opinion is that this half-arsed solution will only delay implementing and cause migration problems to a real solution, which should be able to handle svn/hg/git checkout, proper packages with custom build scripts, versioning, miscellaneous dependencies, publishing, etc.
I agree with this, see my ideas in an answer to the original post. -- /Jacob Carlborg
Jun 17 2011
prev sibling next sibling parent reply Bernard Helyer <b.helyer gmail.com> writes:
On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
 
 Destroy.
It ultimately seems slap-dash and underspecified. Vladimir brings up a lot of good criticisms. I don't think this is something that I'd support in its current state.
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 9:28 AM, Bernard Helyer wrote:
 On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
It ultimately seems slap-dash and underspecified. Vladimir brings up a lot of good criticisms. I don't this is something that I'd support in its current state.
I think many of Vladimir's points are themselves hasty. Please improve. Thanks, Andrei
Jun 14 2011
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
Andrei Alexandrescu:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
This is pretty similar to what my build.d does, except I assume the base url instead of letting it be specified. (I considered having a metadata file, but since I control the server anyway, I don't need it for me!) What I like is that it's a very simple idea that has no central point of failure.
Jun 14 2011
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-06-14 09:53:16 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
I disagree about the model where you specify URLs in source code.

I'd suggest an alternative solution: if DMD finds a dmd.conf file at the root of one of the -I directories, it uses the download parameters in that file to populate the directory with modules as needed. This would also apply to the current directory. The content of that dmd.conf file could look like this:

[module-download]
package.* = http://someserver.net/d-repo/latest/
otherpack.* = http://otherserver.net/d-repo/1.2/

That way it's easy to understand what comes from where without having to scan all your source code for pragmas.

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 9:45 AM, Michel Fortin wrote:
 On 2011-06-14 09:53:16 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
I disagree about the model where you specify URLs in source code.
Why?
 I'd suggest an alternative solution: If DMD finds a dmd.conf file at the
 root of one of the -I directories, it uses the download parameters in
 that file to download to populate the directory with modules as needed.
 This would also apply to the current directory. The content of that
 dmd.conf file could look like this:

 [module-download]
 package.* = http://someserver.net/d-repo/latest/
 otherpack.* = http://otherserver.net/d-repo/1.2/

 That way it's easy to understand what comes from where without having to
 scan all your source code for pragmas.
I agree this would be good to have, but not exclusively. The problem is that anyone who wants to share some code must require potential users to modify their config files, instead of offering them a turnkey solution. Andrei
Jun 14 2011
next sibling parent reply Bernard Helyer <b.helyer gmail.com> writes:
To add an editorial note, I feel that this DIP misses the problem. D 
needs a build tool to be used effectively. This addresses _an_ aspect, 
but not the whole thing. I think a 'Digital Mars build tool' that was 
shipped with DMD would be a far more valuable and effective place to put 
this, and a whole lot more, than bolting things onto the language.
Jun 14 2011
parent Bernard Helyer <b.helyer gmail.com> writes:
On Tue, 14 Jun 2011 15:25:08 +0000, Bernard Helyer wrote:

 To add an editorial note, I feel that this DIP misses the problem. D
 needs a build tool to be used effectively. This addresses _an_ aspect,
 but not the whole thing. I think a 'Digital Mars build tool' that was
 shipped with DMD would be a far more valuable and effective thing to put
 this and a whole lot more than bolting things onto the language.
This was intended for my post with the IRC logs.
Jun 14 2011
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 10:51:06 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 9:45 AM, Michel Fortin wrote:
 On 2011-06-14 09:53:16 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
I disagree about the model where you specify URLs in source code.
Why?
Because the location of software is not controllable by the builder. For example, if the owner of the code decides to move his software, the user compiling code using that software is at his mercy. Contrast that with import paths, where the user compiling is in control of the paths on his hard drive. So essentially, code can "break" outside of your control, and the only recourse you have is to modify the source code (not a pleasant prospect for people who are not developers).
 I'd suggest an alternative solution: If DMD finds a dmd.conf file at the
 root of one of the -I directories, it uses the download parameters in
 that file to download to populate the directory with modules as needed.
 This would also apply to the current directory. The content of that
 dmd.conf file could look like this:

 [module-download]
 package.* = http://someserver.net/d-repo/latest/
 otherpack.* = http://otherserver.net/d-repo/1.2/

 That way it's easy to understand what comes from where without having to
 scan all your source code for pragmas.
I agree this would be good to have, but not exclusively. The problem is that anyone who wants to share some code must require potential users to modify their config files, instead of offering them a key-ready solution.
Just unpack them? I mean, there's no reason you can't install their code on your hard drive. If you want to publish your code for others to use, you have three options:

1. provide a download.
2. provide a config line for people to add to their dmd.conf
3. put your code in a central repository (like dsource.org) that is already added to everyone's configuration by default.

What might be useful is a tool to allow simplified management of the sources, i.e.:

dmdpkg addsource http://...

I also think it would be nice to provide wildcards for centralized servers, so that as packages are added to the server, you don't need to modify your rules. For example, if someone adds a package to dsource, there should be a rule that allows people to publish the code without people having to add lines to their dmd.conf. This might require some sort of metadata file system.

I really think this project needs more design work.

-Steve
Jun 14 2011
prev sibling next sibling parent reply Bernard Helyer <b.helyer gmail.com> writes:
We just had a discussion with Andrei on IRC. Logs follow:

<andralex> bernardh: destroy
<andralex> so I can cry
<bernardh> This could be handled by an external tool, which would not 
increase the burden on the language and implementation. 
<bernardh> Because suddenly you've added networking to the compiler, 
which is fine, but it better be worth it. 
<bernardh> And really what you add to the language isn't worth it.
<bernardh> A pragma that says you can fetch it from that?
<bernardh> A build tool could do that. 
<bernardh> No need for that to be encoded in the source.
<bernardh> andralex, tl;dr - you don't add enough semantics to justify 
adding this to the compiler
<andralex> bernardh: well
<andralex> the advantage is that the barrier of entry for sharing code 
and using shared code is lowered
<andralex> just paste one line and you're ready to go
<bernardh> Yeah, but most projects use a build tool.
<bernardh> So that _still_ has to be set up.
<bernardh> That's non-zero.
<bernardh> If you add the capability to that tool.
<bernardh> The increase is identical to that if you added it to the compiler.
<bernardh> Besides which, URLs to libraries seems better in build scripts 
than the source itself.
<bernardh> andralex, _and_ that cuts down on modifications to the 
compiler (you're proposing a change to the link behaviour in that case). 
<bernardh> So my point remains. 
<bernardh> You add burden to the language and the implementation for no 
gain.
<andralex> hm
<andralex> well I mentioned to walter that rdmd could take care of that
<andralex> he wanted it to be in dmd
<andralex> his point is this:
<andralex> say I'm John Smith and I wrote some cool D code
<andralex> how do you try it?
<andralex> well it depends on some libs
<andralex> so you need to download John's code and those libs
<andralex> put them somewhere
<andralex> then build accordingly
<andralex> this proposal cuts through all that red tape
<bernardh> You didn't respond to me at all. 
<andralex> ?
<andralex> You mentioned no gain
<bernardh> The build tool could refer to scripts and then the 
dependencies would be automatically set up just the same.
<andralex> I just told you where the gain is
<bernardh> ^ 
<andralex> who writes the scripts?
<bernardh> Library writers. 
<Suprano> ME
* wilsonkk has quit (Remote host closed the connection)
<andralex> exactly
* Joseph_ has quit (Ping timeout: 276 seconds)
<bernardh> What's your point?
<andralex> so everyone will choose their own paths, their own 
conventions, their own dependency setup
<bernardh> The simplest form for John
<bernardh> No, because the conventions would be set by the tool.
<mleise> I can understand both your points of view well. This feature 
fits with dmd's other built-in capabilities: ddoc, profiler, code 
coverage, ... . Then again it is something you expect to be done once by 
build tools (like Maven) and it adds some non-determinism to the compiler.
<Nekuromento_work> I think it complicates things more than it helps
<Nekuromento_work> where dmd will fetch the code?
<Nekuromento_work> how can dmd interact with code that needs custom build 
scripts
<Nekuromento_work> what happens if several of my projects use one 
library? will dmd keep one copy for each?
<Nekuromento_work> what happens if I use different versions?
<Nekuromento_work> etc.
<bernardh> The end result for the user would be the same .
<bernardh> Paste url in file.
<bernardh> Use code. 
<bernardh> Happy. 
<Nekuromento_work> the point is, It's not compiler's responsibility 
<andralex> Nekuromento_work: why not?
<andralex> I mean I don't see a strong reason why not
<bernardh> andralex, because the compiler does no other build work
<andralex> hm
<bernardh> andralex, the build tool WOULD STILL BE REQUIRED, unless DMD 
does a whole lot more
<andralex> well I need to go; at best we should discuss in the newsgroup 
so there's an archive
<bernardh> So adding it to the compiler would only move it.
<bernardh> Unless the DIP is improved, I for one won't implement it. 
<andralex> bernardh: the problem with having a build tool is that it 
requires additional files
<bernardh> Waste of time in its current form.
<andralex> config, etc.
* wilsonkk (~kvirc S0106001b11030a92.cg.shawcable.net) has joined #d
<andralex> with this it's all included in the source
<andralex> where it belongs
<bernardh> Yes but it's STILL REQUIRED
<bernardh> You haven't removed the build tool
<bernardh> only one aspect
<andralex> right
<andralex> agreed
<bernardh> So without more support, it's a useless addition. 
<bernardh> That's my opinion, in the end.
Jun 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 17:20, schrieb Bernard Helyer:
 We just had a discussion with Andrei on IRC. Logs follow:
 
 [full IRC log snipped - it is quoted in its entirety in the parent post]
<andralex> bernardh: the problem with having a build tool is that it requires additional files

One additional file. I don't think having one file would be a burden to the programmer, not much more than adding pragmas in his code.

But if there's a central metadata repository even this additional file isn't needed - neither are pragmas - (the build-tool will ask that repo where to find the lib/module), unless the lib is kind of obscure or brand-new and thus not known by the metadata repo. And in that case: it's just a single file.

(Of course it would be possible to periodically or via "build-tool update" - like apt-get update - fetch the metadata, so the server doesn't have to be asked each time.)

Cheers,
- Daniel
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 10:26 AM, Daniel Gibson wrote:
 One additional file. I don't think having one file would be a burden to
 the programmer, not much more than adding pragmas in his code.

 But if there's central metadata repository even this additional file
 isn't needed - neither are pragmas - (the build-tool will ask that repo
 where to find the lib/module), unless the lib is kind of obscure or
 brand-new and thus not known by the metadata repo. And in that case:
 it's just a single file.

 (Of course it would be possible to periodically or via "build-tool
 update" - like apt-get update - fetch the metadata, so the server
 doesn't have to be asked each time.)
I agree that a build tool is an alternative. The problem is that it's front-heavy - we need to design the config file format, design the tool to deal with various dependency issues etc. Many of these issues are solved by design or don't even exist with the pragma, for example there's no need for a config file format or for config files in the first place (although they can be easily defined as small .d files consisting of pragmas).

One thing I like about the pragma is that it's mostly mechanism and very little policy, while at the same time fostering very simple, straightforward policies. Source files can carry their own dependencies (but they don't need to), transitivity just works, (re|in)direction and central repos are possible without being required.

One other interesting aspect is that the string literal can be CTFE-constructed, i.e. may include a path to a library depending on the OS, version, etc. An external tool would need to give up on that (and use multiple configuration files) or invent its own string manipulation primitives.

Andrei
Jun 14 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
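A minimal sketch of that interim scheme (the module name and URL are invented; pragma(msg) itself is ordinary D):

  // mylib/stuff.d
  module mylib.stuff;

  // Emits a marker line at compile time that a wrapper tool can grep
  // out of dmd's output and turn into a download.
  enum depUrl = "http://example.org/d/otherdep.d";
  pragma(msg, "BUILD: " ~ depUrl);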
Jun 14 2011
parent reply Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 11:32 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
I just realized that one advantage of the download being integrated in the compiler is that the compiler is the sole tool with full knowledge and control of what modules are imported. A tool could repeatedly run the compiler with -v and see what modules it couldn't load, to then download them. (Also, of course, a tool could rely on extralinguistic library management control that has its own advantages and disadvantages as we discussed.) Andrei
Jun 14 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 18:35, schrieb Andrei Alexandrescu:
 On 6/14/11 11:32 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
I just realized that one advantage of the download being integrated in the compiler is that the compiler is the sole tool with full knowledge and control of what modules are imported. A tool could repeatedly run the compiler with -v and see what modules it couldn't load, to then download them. (Also, of course, a tool could rely on extralinguistic library management control that has its own advantages and disadvantages as we discussed.) Andrei
Hmm, I thought somebody (possibly Walter) mentioned adding something to the compiler that makes this easier (probably outputting all imports, or all imports that are not in Phobos, or something like that). Unfortunately my memory is too vague to find the corresponding message or to even remember what exactly was done - maybe somebody else remembers it?
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 11:39 AM, Daniel Gibson wrote:
 Am 14.06.2011 18:35, schrieb Andrei Alexandrescu:
 On 6/14/11 11:32 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
I just realized that one advantage of the download being integrated in the compiler is that the compiler is the sole tool with full knowledge and control of what modules are imported. A tool could repeatedly run the compiler with -v and see what modules it couldn't load, to then download them. (Also, of course, a tool could rely on extralinguistic library management control that has its own advantages and disadvantages as we discussed.) Andrei
Hmm I thought somebody (possibly Walter) mentioned adding something to the compiler that makes this more easy (probably outputting all imports or all imports that are not in Phobos or something like that). Unfortunately my memory is to vague to find the corresponding message or to even remember what exactly was done - maybe somebody else remembers it?
That is actually possible only with multiple runs due to the transitivity issues. For example, module A tries to import module B and fails -> dmd outputs B. But then module B tries to import other modules, so a new run of dmd is needed etc. This is not pernicious, but it definitely requires a nontrivial tool to handle all that. Andrei
Jun 14 2011
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Andrei Alexandrescu wrote:
 A tool could repeatedly run
 the compiler with -v and see what modules it couldn't load, to
 then download them.
This is what my build.d does. <http://arsdnet.net/dcode/build.d> There's a problem though: it's pretty slow. You always run the compiler at least twice with this setup with any libs - once to get the dependencies and build the compile line, once to actually compile. If there are nested dependencies, it just gets worse. Built into the compiler, the speed ought to be improvable several times over.
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 11:51 AM, Adam D. Ruppe wrote:
 Andrei Alexandrescu wrote:
 A tool could repeatedly run
 the compiler with -v and see what modules it couldn't load, to
 then download them.
This is what my build.d does.<http://arsdnet.net/dcode/build.d> There's a problem though: it's pretty slow. You always run the compiler at least twice with this setup with any libs - once to get the dependencies and build the compile line, once to actually compile. If there's nested dependencies, it just gets worse. Build into the compiler, and the speed ought to be improvable by several times.
This is a very compelling data point. I added a mention of it to the proposal. See "Alternatives" at the end. http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11 Andrei
Jun 14 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 This is a very compelling data point. I added a mention of it to the 
 proposal. See "Alternatives" at the end.
 
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Isn't this also an argument for the (ancient request of) inclusion of a normal build feature into the D compiler? Bye, bearophile
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
bearophile:
 Isn't this also an argument for the (ancient request of) inclusion
 of a normal build feature into the D compiler?
I think if the compiler is going to be downloading files, it is necessarily looking for those files... thus it'd become a bit of a smarter build tool as a side effect of this change. It'd be silly if it said "this module is missing, and I'll download it, but I refuse to actually look at it once it's downloaded!"
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 12:21 PM, Adam D. Ruppe wrote:
 bearophile:
 Isn't this also an argument for the (ancient request of) inclusion
 of a normal build feature into the D compiler?
I think if the compiler is going to be downloading files, it is necessarily looking for those files... thus it'd become a bit of a smarter build tool as a side effect of this change. It'd be silly if it said "this module is missing, and I'll download it, but I refuse to actually look at it once it's downloaded!"
Agreed. I think we should have a pragma that mentions "add this file to the build" (possibly by adapting the existing pragma(lib, ...)), and that pragma(liburl, ...) should imply that other pragma. Andrei
Jun 14 2011
parent Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 14 Jun 2011 12:20:33 -0500, Andrei Alexandrescu wrote:

 On 6/14/11 12:21 PM, Adam D. Ruppe wrote:
 bearophile:
 Isn't this also an argument for the (ancient request of) inclusion of
 a normal build feature into the D compiler?
I think if the compiler is going to be downloading files, it is necessarily looking for those files... thus it'd become a bit of a smarter build tool as a side effect of this change. It'd be silly if it said "this module is missing, and I'll download it, but I refuse to actually look at it once it's downloaded!"
Agreed. I think we should have a pragma that mentions "add this file to the build" (possibly by adapting the existing pragma(lib, ...)), and that pragma(liburl, ...) should imply that other pragma.
pragma(resolve_all_dependencies_just_like_rdmd_does)? As a data point: the Glasgow Haskell Compiler has a "--make" option which discovers and includes all dependent libraries, in the "rdmd" style. Only recently (ghc 7?), they made it the default behaviour for the compiler. I'd *really* like to see "dmd --make". Even better, "dmd --disable-make", because "--make" is the default. Graham
Jun 14 2011
prev sibling next sibling parent Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 14 Jun 2011 11:35:44 -0500, Andrei Alexandrescu wrote:

 On 6/14/11 11:32 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
I just realized that one advantage of the download being integrated in the compiler is that the compiler is the sole tool with full knowledge and control of what modules are imported. A tool could repeatedly run the compiler with -v and see what modules it couldn't load, to then download them. (Also, of course, a tool could rely on extralinguistic library management control that has its own advantages and disadvantages as we discussed.)
You could get pretty far with an external tool, as long as the pragmas were explicit in the source, and not generated by mixins at compile time. That's the point at which compiler support becomes more than nice-to-have. Graham
Jun 14 2011
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 12:35:44 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 11:32 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 15:59:15 +0000, Adam D. Ruppe wrote:

 One other interesting aspect is that the string literal can be
 CTFE-constructed,
Oh, or it could be in version {} blocks. I like that. I think we should actually whip up a working model. It needn't be a compiler feature at this point - we can use pragma(msg, "BUILD: " ~ param) for now and have a helper program scan dmd's output.
+1, sounds fun. :) Rather than pragma(msg), you could also use pragma(liburl), and run dmd with "-ignore -v". You can parse the pragma from there. (I think you'd need to write `pragma(liburl, "name-in-quotes", "url-in-quotes")`, a slight diversion from Andrei's syntax, but otherwise it would work.) Graham
I just realized that one advantage of the download being integrated in the compiler is that the compiler is the sole tool with full knowledge and control of what modules are imported. A tool could repeatedly run the compiler with -v and see what modules it couldn't load, to then download them. (Also, of course, a tool could rely on extralinguistic library management control that has its own advantages and disadvantages as we discussed.)
I think it should be split as follows:

dmd: determine *what* to download (i.e. I need to import module x.y.z)
external tool: determine *where* and *how* to download it (i.e. module x.y.z lives on http://somerepository.org/x, go get it and save it)

The advantages being:

1. there exists umpteen billion already-existing tools that fetch and install data over the network
2. dmd does not contain parts that have nothing to do with compiling, which could potentially screw up the compiling part.
3. Depending on the tool language, the barrier to development of it would be significantly reduced. Most people feel quite uncomfortable messing with compiler source code, but have no problems editing something like a shell script, or even a simple d-based tool.
4. The compiler is written in C++, and hence so would the part that does this have to be... yuck!

-Steve
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
 external tool: determine *where* and *how* to download it. (i.e. module
 x.y.z lives on http://somerepository.org/x, go get it and save it)

 The advantages being:

 1. there exists umpteen billion already-existing tools that fetch and
 install data over the network
 2. dmd does not contain parts that have nothing to do with compiling,
 which could potentially screw up the compiling part.
 3. Depending on the tool language, the barrier to development of it
 would be significantly reduced. Most people feel quite uncomfortable
 messing with compiler source code, but have no problems editing
 something like a shell script, or even a simple d-based tool.
 4. The compiler is written in C++, and hence so would the part that does
 this have to be... yuck!
Not sure I grok 3 and 4, but as far as I can tell the crux of the matter is that dependencies are already embedded in .d files. That's why I think it's simpler to just let dmd take care of them all instead of maintaining dependency description files separately from the .d files. The umpteen billion tools don't know what it takes to download and build everything starting from one or a few root modules. They could execute the download, yes (and of course we'll use such a library for that), but we need a means to drive them. I think Adam's tool is a good identification of, and alternative solution for, the problem that the pragma solves. Andrei
Jun 14 2011
next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 14/06/2011 20:14, Andrei Alexandrescu wrote:
 On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
It can work with the build tool though - build foo -> depends on bar. Build tool gets bar -> build bar -> depends baz, etc. -- Robert http://octarineparrot.com/
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 2:24 PM, Robert Clipsham wrote:
 On 14/06/2011 20:14, Andrei Alexandrescu wrote:
 On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
It can work with the build tool though - build foo -> depends on bar. Build tool gets bar -> build bar -> depends baz, etc.
Yes, that's what Adam's tool does. Slowly, slowly, but surely... Andrei
Jun 14 2011
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 15:14:28 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
dmd: I need module foo.awesome. Where is it?
filesystem: nope, don't have it
dmd: damn, I guess I'll need to check with the downloader. Hey downloader, you have that?
downloader: hm... oh, yeah! I'll get it for you
filesystem: got it
dmd: ok, now what's in foo.awesome? Oh, hm... foo.awesome needs bar.gnarly. Let me guess, filesystem...
filesystem: yeah, I suck today, go ask downloader
...

How hard is that? I mean, the actual downloading of files is pretty straightforward; at some point the problem reduces to "download a file". Why do we have to reinvent *that* wheel?
 external tool: determine *where* and *how* to download it. (i.e. module
 x.y.z lives on http://somerepository.org/x, go get it and save it)

 The advantages being:

 1. there exists umpteen billion already-existing tools that fetch and
 install data over the network
 2. dmd does not contain parts that have nothing to do with compiling,
 which could potentially screw up the compiling part.
 3. Depending on the tool language, the barrier to development of it
 would be significantly reduced. Most people feel quite uncomfortable
 messing with compiler source code, but have no problems editing
 something like a shell script, or even a simple d-based tool.
 4. The compiler is written in C++, and hence so would the part that does
 this have to be... yuck!
Not sure I grok 3 and 4, but as far as I can tell the crux of the matter is that dependencies are already embedded in .d files. That's why I think it's simpler to just let dmd take care of them all instead of maintaining dependency description files in separation from the .d files.
And it would, why wouldn't it? I think you may not be getting something here...
 The umpteen billion tools don't know what it takes to download and build  
 everything starting from one or a few root modules. They could execute  
 the download, yes (and of course we'll use such a library for that), but  
 we need a means to drive them.
dmd would drive them.
 I think Adam's tool is a good identification and alternative solution  
 for the problem that the pragma solves.
I haven't seen it. Just thinking out loud... -Steve
Jun 14 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Steven Schveighoffer wrote:
 dmd: damn, I guess I'll need to check with the downloader, hey downloader
That's a really good idea. It could be a command line argument that you set to any external program you want. The dmd distribution might bundle a default downloader, so it just works out of the box, but is still a separate program.
 I haven't seen it.
Here's the code: http://arsdnet.net/dcode/build.d

The idea behind it:

try:
    dmd -v <args>
    watch for missing import file errors
    if there is a missing file, download the file from a server to the
    local directory. Use the name dmd provided.
    once it's downloaded, add that file to the dmd command line.
    goto try;
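In other words, the driver repeatedly invokes dmd and feeds it whatever it complains about. A minimal sketch of such a fetch-and-retry loop might look like the following; this is only an illustration, not the actual build.d, and the error-message pattern, the server layout and the helper name are assumptions:

// Simplified sketch of such a fetch-and-retry driver (not the real build.d;
// the error-message pattern and server layout are assumptions).
import std.array : join;
import std.file : mkdirRecurse;
import std.net.curl : download;
import std.path : dirName;
import std.process : executeShell;
import std.regex : matchFirst, regex;

void buildWithAutoFetch(string[] files, string baseUrl)
{
    for (;;)
    {
        auto r = executeShell("dmd -v " ~ files.join(" "));
        if (r.status == 0)
            break;  // everything compiled and linked
        // dmd reports a missing import roughly like:
        //   module bar is in file 'foo/bar.d' which cannot be read
        auto m = matchFirst(r.output, regex(`in file '([^']+\.d)' which cannot be read`));
        if (m.empty)
            break;  // some unrelated error; give up retrying
        auto path = m[1];
        mkdirRecurse(dirName(path));           // create foo/ for foo/bar.d
        download(baseUrl ~ "/" ~ path, path);  // assumes the server mirrors the package layout
        files ~= path;                         // compile the fetched module too, then retry
    }
}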
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 2:42 PM, Adam D. Ruppe wrote:
 Steven Schveighoffer wrote:
 dmd: damn, I guess I'll need to check with the downloader, hey downloader
That's a really good idea. It could be a command line argument that you set to any external program you want. The dmd distribution might bundle a default downloader, so it just works out of the box, but is still a separate program.
I, too, think this is awesome. Best of all we get to write that tool in D. Andrei
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
BTW, I don't think it should be limited to just passing a
url to the helper program.

I'd do it something like this:

dget module.name url_from_pragma


Basically, the compiler shouldn't withhold info - tell the
get program everything it might need, to keep its options open
in how to do its job.

If a url isn't given, the dget program is free to figure it out
by some alternative means if it can, given the module name.
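A bare-bones sketch of that calling convention, just to make it concrete — nothing here is a real tool, and the fallback URL in particular is invented for illustration:

// Hypothetical dget stub: args are the module name and, optionally, a URL hint.
import std.array : replace;
import std.file : mkdirRecurse;
import std.net.curl : download;
import std.path : dirName;
import std.stdio : stderr;

int main(string[] args)
{
    if (args.length < 2)
    {
        stderr.writeln("usage: dget module.name [url]");
        return 1;
    }
    auto path = args[1].replace(".", "/") ~ ".d";   // foo.bar -> foo/bar.d
    // Use the URL hint from the pragma if the compiler passed one; otherwise
    // fall back to some registry or convention (a made-up address here).
    auto url = args.length > 2 ? args[2] : "http://example.org/d/" ~ path;
    mkdirRecurse(dirName(path));
    download(url, path);                            // leave the file where dmd expects it
    return 0;
}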
Jun 14 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe  
<destructionator gmail.com> wrote:

 BTW, I don't think it should be limited to just passing a
 url to the helper program.

 I'd do it something like this:

 dget module.name url_from_pragma
I still don't like the url being stored in the source file -- where *specifically* on the network to get the file has nothing to do with compiling the code, and fixing a path problem shouldn't involve editing a source file -- there is too much risk.

For comparison, you don't have to give the compiler a full path for where to get modules; they are specified relative to the configured include paths. I think this model works well, and we should be able to re-use it for this purpose also. You could even just use urls as include paths:

-Ihttp://www.dsource.org/projects/dcollections/import

-Steve
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 7:53 AM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
 <destructionator gmail.com> wrote:

 BTW, I don't think it should be limited to just passing a
 url to the helper program.

 I'd do it something like this:

 dget module.name url_from_pragma
I still don't like the url being stored in the source file -- where *specifically* on the network to get the file has nothing to do with compiling the code, and fixing a path problem shouldn't involve editing a source file -- there is too much risk.
First, clearly we need command-line equivalents for the pragmas. They can be subsequently loaded from a config file. The embedded URLs are for people who want to distribute libraries without requiring their users to change their config files. I think that simplifies matters for many. Again - the ULTIMATE place where dependencies exist is in the source files.
 For comparison, you don't have to specify a full path to the compiler of
 where to get modules, they are specified relative to the configured
 include paths. I think this model works well, and we should be able to
 re-use it for this purpose also. You could even just use urls as include
 paths:

 -Ihttp://www.dsource.org/projects/dcollections/import
I also think that model works well, except HTTP does not offer search the same way a filesystem does. You could do that with FTP though. Andrei
Jun 15 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 09:53:31 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 7:53 AM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
 <destructionator gmail.com> wrote:

 BTW, I don't think it should be limited to just passing a
 url to the helper program.

 I'd do it something like this:

 dget module.name url_from_pragma
I still don't like the url being stored in the source file -- where *specifically* on the network to get the file has nothing to do with compiling the code, and fixing a path problem shouldn't involve editing a source file -- there is too much risk.
First, clearly we need command-line equivalents for the pragmas. They can be subsequently loaded from a config file. The embedded URLs are for people who want to distribute libraries without requiring their users to change their config files. I think that simplifies matters for many. Again - the ULTIMATE place where dependencies exist is in the source files.
We have been getting along swimmingly without pragmas for adding local include paths. Why do we need to add them using pragmas for network include paths? Also, I don't see the major difference in someone who's making a piece of software from adding the include path to their source file vs. adding it to their build script. But in any case, it doesn't matter if both options are available -- it doesn't hurt to have a pragma option as long as a config option is available. I just don't want to *require* the pragma solution.
 For comparison, you don't have to specify a full path to the compiler of
 where to get modules, they are specified relative to the configured
 include paths. I think this model works well, and we should be able to
 re-use it for this purpose also. You could even just use urls as include
 paths:

 -Ihttp://www.dsource.org/projects/dcollections/import
I also think that model works well, except HTTP does not offer search the same way a filesystem does. You could do that with FTP though.
dget would just add the appropriate path:

import dcollections.TreeMap =>
get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
hm.. doesn't work
get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
ok, there it is!

As I said in another post, you could also specify a zip file or tarball as a base path, and the whole package is downloaded instead. We may need some sort of manifest in order to verify the import will be found, instead of downloading the entire package to find out.

-Steve
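That probing order is easy to express; a rough sketch follows, where the function name and the use of std.net.curl are my own assumptions, not part of the proposal:

// Probe module.d, then module.di, under each configured remote include path.
import std.array : replace;
import std.net.curl : get;

string fetchSource(string[] remotePaths, string moduleName)
{
    auto rel = moduleName.replace(".", "/");
    foreach (base; remotePaths)
        foreach (ext; [".d", ".di"])
        {
            try
            {
                return get(base ~ "/" ~ rel ~ ext).idup;  // a 404 throws, so we fall through
            }
            catch (Exception)
            {
                continue;  // not here; try the next candidate
            }
        }
    throw new Exception("module " ~ moduleName ~ " not found on any remote path");
}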
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 9:13 AM, Steven Schveighoffer wrote:
 On Wed, 15 Jun 2011 09:53:31 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 7:53 AM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
 <destructionator gmail.com> wrote:

 BTW, I don't think it should be limited to just passing a
 url to the helper program.

 I'd do it something like this:

 dget module.name url_from_pragma
I still don't like the url being stored in the source file -- where *specifically* on the network to get the file has nothing to do with compiling the code, and fixing a path problem shouldn't involve editing a source file -- there is too much risk.
First, clearly we need command-line equivalents for the pragmas. They can be subsequently loaded from a config file. The embedded URLs are for people who want to distribute libraries without requiring their users to change their config files. I think that simplifies matters for many. Again - the ULTIMATE place where dependencies exist is in the source files.
We have been getting along swimmingly without pragmas for adding local include paths. Why do we need to add them using pragmas for network include paths?
That doesn't mean the situation is beyond improvement. If I had my way I'd add pragma(liburl) AND pragma(libpath).
 Also, I don't see the major difference in someone who's making a piece
 of software from adding the include path to their source file vs. adding
 it to their build script.
Because in the former case the whole need for a build script may be obviated. That's where I'm trying to be.
 But in any case, it doesn't matter if both options are available -- it
 doesn't hurt to have a pragma option as long as a config option is
 available. I just don't want to *require* the pragma solution.
Sounds good. I actually had the same notion, just forgot to mention it in the dip (fixed).
 For comparison, you don't have to specify a full path to the compiler of
 where to get modules, they are specified relative to the configured
 include paths. I think this model works well, and we should be able to
 re-use it for this purpose also. You could even just use urls as include
 paths:

 -Ihttp://www.dsource.org/projects/dcollections/import
I also think that model works well, except HTTP does not offer search the same way a filesystem does. You could do that with FTP though.
dget would just add the appropriate path: import dcollections.TreeMap => get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d hm.. doesn't work get http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di ok, there it is!
This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
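Under that notation, resolution reduces to a longest-prefix lookup; a rough sketch of the idea, where the map contents, the URL layout and the helper name are all assumptions rather than part of the DIP:

// -Upackage=url idea: longest-prefix match of the module name against a package-to-URL map.
import std.array : join, replace, split;

string[string] packageUrls;   // e.g. ["dcollections" : "http://www.dsource.org/projects/dcollections/import"]

string urlFor(string moduleName)
{
    auto parts = moduleName.split(".");
    // Try the longest package prefix first: a.b.c, then a.b, then a.
    for (auto n = parts.length; n > 0; --n)
    {
        auto prefix = parts[0 .. n].join(".");
        if (auto base = prefix in packageUrls)
            return *base ~ "/" ~ moduleName.replace(".", "/") ~ ".d";
    }
    return null;  // no mapping; fall back to the ordinary -I search
}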
 As I said in another post, you could also specify a zip file or tarball
 as a base path, and the whole package is downloaded instead. We may need
 some sort of manifest instead in order to verify the import will be
 found instead of downloading the entire package to find out.
Sounds cool. Andrei
Jun 15 2011
next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 15/06/2011 15:33, Andrei Alexandrescu wrote:
 On 6/15/11 9:13 AM, Steven Schveighoffer wrote:
 We have been getting along swimmingly without pragmas for adding local
 include paths. Why do we need to add them using pragmas for network
 include paths?
That doesn't mean the situation is beyond improvement. If I had my way I'd add pragma(liburl) AND pragma(libpath).
pragma(lib) doesn't (and can't) work as it is, why do you want to add more useless pragmas? Command line arguments are the correct way to go here. Not to mention that paths won't be standardized across machines most likely so the latter would be useless.
 Also, I don't see the major difference in someone who's making a piece
 of software from adding the include path to their source file vs. adding
 it to their build script.
Because in the former case the whole need for a build script may be obviated. That's where I'm trying to be.
This can't happen in a lot of cases, eg if you're interfacing with a scripting language, you need certain files automatically generated during the build etc. Admittedly, for the most part, you'll just want to be able to build libraries given a directory, or an executable given a file with _Dmain() in it. There'll still be a lot of cases where you want to specify some things to be dynamic libs, others static libs, and what, if any, of it you want in a resulting binary.
 But in any case, it doesn't matter if both options are available -- it
 doesn't hurt to have a pragma option as long as a config option is
 available. I just don't want to *require* the pragma solution.
Sounds good. I actually had the same notion, just forgot to mention it in the dip (fixed).
I'd agree with Steven that we need command line arguments for it; I completely disagree about pragmas though, given that they don't work (as mentioned above). Just because I know you're going to ask:

$ dmd a.d
$ dmd b.d
$ dmd a.o b.o
<Linker errors>

This is unavoidable unless you put metadata in the object files, and even then you leave clutter in the resulting binary, unless you specify that the linker should remove it (I don't know if it can).
 dget would just add the appropriate path:

 import dcollections.TreeMap =>
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d

 hm.. doesn't work
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di

 ok, there it is!
This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
I personally think there should be a central repository listing packages and their URLs etc, which massively simplifies what needs passing on a command line. Eg -RmyPackage would cause myPackage to be looked up on the central server, which will have the relevant URL etc. Of course, there should be some sort of override method for private remote servers.
 As I said in another post, you could also specify a zip file or tarball
 as a base path, and the whole package is downloaded instead. We may need
 some sort of manifest instead in order to verify the import will be
 found instead of downloading the entire package to find out.
Sounds cool.
I don't believe this tool should exist without compression being default. -- Robert http://octarineparrot.com/
Jun 15 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 9:56 AM, Robert Clipsham wrote:
 On 15/06/2011 15:33, Andrei Alexandrescu wrote:
 On 6/15/11 9:13 AM, Steven Schveighoffer wrote:
 We have been getting along swimmingly without pragmas for adding local
 include paths. Why do we need to add them using pragmas for network
 include paths?
That doesn't mean the situation is beyond improvement. If I had my way I'd add pragma(liburl) AND pragma(libpath).
pragma(lib) doesn't (and can't) work as it is, why do you want to add more useless pragmas?
Then we should yank it or change it. That pragma was defined in a completely different context from today's, and right now we have a much larger user base to draw experience and insight from.
 Command line arguments are the correct way to go
 here.
Why? At this point enough time has been collectively spent on this that I'm genuinely curious to find a reason that would have me "huh, haven't thought about it that way. Fine, no need for the dip."
 Not to mention that paths won't be standardized across machines
 most likely so the latter would be useless.
version() for the win.
 Also, I don't see the major difference in someone who's making a piece
 of software from adding the include path to their source file vs. adding
 it to their build script.
Because in the former case the whole need for a build script may be obviated. That's where I'm trying to be.
This can't happen in a lot of cases, eg if you're interfacing with a scripting language, you need certain files automatically generating during build etc.
Sure. For those cases, use tools. For everything else, there's liburl.
 Admittedly, for the most part, you'll just want to be
 able to build libraries given a directory or an executable given a file
 with _Dmain() in.
That's the spirit. This is what the proposal aims at: you have the root file and the process takes care of everything - no configs, no metadata, no XML info, no command-line switches, no fuss, no muss. With such a feature, "hello world" equivalents demoing dcollections, qt, mysql (some day), etc. etc. will be as simple as few-liners that anyone can download and compile flag-free. I find it difficult to understand how only a few find that appealing.
 There'll still be a lot of cases where you want to
 specify some things to be dynamic libs, other static libs, and what if
 any of it you want in a resulting binary.
Sure. But won't you think it's okay to have the DIP leave such cases to other tools without impeding them in any way?
 Sounds good. I actually had the same notion, just forgot to mention it
 in the dip (fixed).
I'd agree with Steven that we need command line arguments for it, I completely disagree about pragmas though given that they don't work (as mentioned above). Just because I know you're going to ask: $ dmd a.d $ dmd b.d $ dmd a.o b.o <Linker errors> This is unavoidable unless you put metadata in the object files, and even then you leave clutter in the resulting binary, unless you specify that the linker should remove it (I don't know if it can).
I now understand, thanks. So I take it a compile-and-link command would succeed, whereas a compile-separately succession of commands wouldn't? That wouldn't mean the pragma doesn't work, just that it only works under certain build scenarios.
 This assumes the URL contains the package prefix. That would work, but
 imposes too much on the URL structure. I find the notation -Upackage=url
 more general.
I personally think there should be a central repository listing packages and their URLs etc, which massively simplifies what needs passing on a command line. Eg -RmyPackage would cause myPackage to be looked up on the central server, which will have the relevant URL etc. Of course, there should be some sort of override method for private remote servers.
That is tantamount to planting a flag in the distributed dmd.conf. Sounds fine.
 As I said in another post, you could also specify a zip file or tarball
 as a base path, and the whole package is downloaded instead. We may need
 some sort of manifest instead in order to verify the import will be
 found instead of downloading the entire package to find out.
Sounds cool.
I don't believe this tool should exist without compression being default.
Hm. Well fine. Andrei
Jun 15 2011
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
Andrei Alexandrescu wrote:
 pragma(lib) doesn't (and can't) work as it is, why do you want to add
 more useless pragmas?
Then we should yank it or change it.
Please no! pragma(lib) rocks. Just because it doesn't work in *all* cases doesn't mean we should get rid of it. The presence of a pragma(lib) doesn't break separate compilation, since you can still do it on the command line. Granted, it doesn't help there, but it doesn't hurt either. It does help for the simple case where you want "dmd myapp.d" to just work.
 I don't believe this tool should exist without compression being default.
Hm. Well fine.
Note that compression for single files is built into the http protocol. If you gzip a file ahead of time and serve it up with Content-Encoding: gzip in the headers, it is supposed to be transparently un-gzipped by the user agent. All the browsers do it, but I'm not sure if libcurl does (but it prolly does, it's pretty complete). Regardless, even if not, it's trivial to implement ourselves.

Compressing an entire package depends on agreeing what a package is, but just serving up .zips of common modules works... importing modules from a library if you must :) The dget is free to download it and unzip to its cache transparently before passing the path to dmd.
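For the .zip route, the unpack-into-a-cache step is small; here is a sketch using Phobos, where the cache layout and the fetchPackage name are assumptions and a real tool would surely do more validation:

// Fetch one archive, unpack it into a cache directory, hand that directory to dmd via -I.
import std.algorithm.searching : endsWith;
import std.file : mkdirRecurse, write;
import std.net.curl : HTTP, get;
import std.path : buildPath, dirName;
import std.zip : ZipArchive;

string fetchPackage(string zipUrl, string cacheDir)
{
    auto archive = new ZipArchive(get!(HTTP, ubyte)(zipUrl));
    foreach (name, member; archive.directory)
    {
        if (name.endsWith("/"))
            continue;                          // skip directory entries
        auto dest = buildPath(cacheDir, name);
        mkdirRecurse(dirName(dest));
        write(dest, archive.expand(member));   // unpack each file into the cache
    }
    return cacheDir;                           // pass this to dmd as an import path
}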
Jun 15 2011
prev sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 15/06/2011 16:15, Andrei Alexandrescu wrote:
 pragma(lib) doesn't (and can't) work as it is, why do you want to add
 more useless pragmas?
Then we should yank it or change it. That pragma was defined in a completely different context from today's, and right now we have a much larger user base to draw experience and insight from.
Note that rebuild had pragma(link) which got around this problem - it was the build tool, it could keep track of all of these without modifying object files or other such hackery. So I guess pragma(lib) could be fixed in the hypothetical tool.
 Command line arguments are the correct way to go
 here.
Why? At this point enough time has been collectively spent on this that I'm genuinely curious to find a reason that would have me "huh, haven't thought about it that way. Fine, no need for the dip."
I'm assuming you hadn't read my reasoning for being against pragma(lib) at this point, let me know if this isn't the case.
 Not to mention that paths won't be standardized across machines
 most likely so the latter would be useless.
version() for the win.
version() isn't much use when it isn't completely standardized - take C/C++, the place headers/libraries are vary greatly between distros, /usr/lib, /usr/lib32, /usr/lib64, /usr/local/lib, etc etc. version() is of no use here - the path would need to be defined on the command line.
 Also, I don't see the major difference in someone who's making a piece
 of software from adding the include path to their source file vs.
 adding
 it to their build script.
Because in the former case the whole need for a build script may be obviated. That's where I'm trying to be.
This can't happen in a lot of cases, eg if you're interfacing with a scripting language, you need certain files automatically generating during build etc.
Sure. For those cases, use tools. For everything else, there's liburl.
This is where you have me confused. What is the scope of this tool? If it's not destined to become a full D package manager, an equivalent of gem/cpan/pecl etc, what's the point?
 Admittedly, for the most part, you'll just want to be
 able to build libraries given a directory or an executable given a file
 with _Dmain() in.
That's the spirit. This is what the proposal aims at: you have the root file and the process takes care of everything - no configs, no metadata, no XML info, no command-line switches, no fuss, no muss.
I believe for these cases it should be zero effort - but the tool should support custom builds, unless it's not eventually gonna become a package manager.
 With such a feature, "hello world" equivalents demoing dcollections, qt,
 mysql (some day), etc. etc. will be as simple as few-liners that anyone
 can download and compile flag-free. I find it difficult to understand
 how only a few find that appealing.

 There'll still be a lot of cases where you want to
 specify some things to be dynamic libs, other static libs, and what if
 any of it you want in a resulting binary.
Sure. But won't you think it's okay to have the DIP leave such cases to other tools without impeding them in any way?
Again, see above. I really don't see the point in this tool if it's not eventually going to become a complete package manager. Just seems like a half baked solution to the problem if it can't handle it.
 Sounds good. I actually had the same notion, just forgot to mention it
 in the dip (fixed).
I'd agree with Steven that we need command line arguments for it, I completely disagree about pragmas though given that they don't work (as mentioned above). Just because I know you're going to ask: $ dmd a.d $ dmd b.d $ dmd a.o b.o <Linker errors> This is unavoidable unless you put metadata in the object files, and even then you leave clutter in the resulting binary, unless you specify that the linker should remove it (I don't know if it can).
I now understand, thanks. So I take it a compile-and-link command would succeed, whereas a compile-separately succession of commands wouldn't? That wouldn't mean the pragma doesn't work, just that it only works under certain build scenarios.
Correct. This is why I don't like pragma(lib) and the new things you are proposing. As nice as it is, if it doesn't work with incremental building and one-at-a-time building, it's not much use.
 This assumes the URL contains the package prefix. That would work, but
 imposes too much on the URL structure. I find the notation -Upackage=url
 more general.
I personally think there should be a central repository listing packages and their URLs etc, which massively simplifies what needs passing on a command line. Eg -RmyPackage would cause myPackage to be looked up on the central server, which will have the relevant URL etc. Of course, there should be some sort of override method for private remote servers.
That is tantamount to planting a flag in the distributed dmd.conf. Sounds fine.
Indeed, the central repository can be in a dmd.conf rather than hard coded.
 As I said in another post, you could also specify a zip file or tarball
 as a base path, and the whole package is downloaded instead. We may
 need
 some sort of manifest instead in order to verify the import will be
 found instead of downloading the entire package to find out.
Sounds cool.
I don't believe this tool should exist without compression being default.
Hm. Well fine.
Just seems silly to not use compression, given how fast it is to compress/decompress and how much bandwidth it saves. -- Robert http://octarineparrot.com/
Jun 15 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-15 20:11, Robert Clipsham wrote:
 On 15/06/2011 16:15, Andrei Alexandrescu wrote:
 pragma(lib) doesn't (and can't) work as it is, why do you want to add
 more useless pragmas?
Then we should yank it or change it. That pragma was defined in a completely different context from today's, and right now we have a much larger user base to draw experience and insight from.
Note that rebuild had pragma(link) which got around this problem - it was the build tool, it could keep track of all of these without modifying object files or other such hackery. So I guess pragma(lib) could be fixed in the hypothetical tool.
It's too bad that pragma(lib) doesn't behave like pragma(link); it's quite an easy fix as well. It's also unnecessary to have two tools reading the same files: the build tool reading pragma(link) and the compiler reading the whole file to compile it.

-- 
/Jacob Carlborg
Jun 17 2011
prev sibling parent reply Graham Fawcett <fawcett uwindsor.ca> writes:
On Wed, 15 Jun 2011 15:56:54 +0100, Robert Clipsham wrote:

 On 15/06/2011 15:33, Andrei Alexandrescu wrote:
 On 6/15/11 9:13 AM, Steven Schveighoffer wrote:
 We have been getting along swimmingly without pragmas for adding local
 include paths. Why do we need to add them using pragmas for network
 include paths?
That doesn't mean the situation is beyond improvement. If I had my way I'd add pragma(liburl) AND pragma(libpath).
pragma(lib) doesn't (and can't) work as it is, why do you want to add more useless pragmas?
In what sense doesn't pragma(lib) work? This is news to me. Graham
Jun 15 2011
parent Robert Clipsham <robert octarineparrot.com> writes:
On 15/06/2011 20:02, Graham Fawcett wrote:
 In what sense doesn't pragma(lib) work? This is news to me.

 Graham
Scroll down in my post, I explain it.
 On Wed, 15 Jun 2011 15:56:54 +0100, Robert Clipsham wrote:
 Just because I know you're going to ask:


 $ dmd a.d
 $ dmd b.d
 $ dmd a.o b.o
 <Linker errors>
-- Robert http://octarineparrot.com/
Jun 15 2011
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 dget would just add the appropriate path:

 import dcollections.TreeMap =>
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
 hm.. doesn't work
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
 ok, there it is!
This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
Look at the url again, I'll split out the include path and the import: [http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di] There is nothing being assumed by dget. It could try and import dcollections.TreeMap from some other remote path as well, and fail. It follows the same rules as the current import scheme, just with urls instead of paths. -Steve
Jun 15 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 10:07 AM, Steven Schveighoffer wrote:
 On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 dget would just add the appropriate path:

 import dcollections.TreeMap =>
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d

 hm.. doesn't work
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di

 ok, there it is!
This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
Look at the url again, I'll split out the include path and the import: [http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di]
I understood the first time. Yes, so it imposes on the url structure that it ends with /dcollections/.
 There is nothing being assumed by dget. It could try and import
 dcollections.TreeMap from some other remote path as well, and fail. It
 follows the same rules as the current import scheme, just with urls
 instead of paths.
I don't think it's a good idea to search several paths for a given import. One import should map to two download attempts: one for the .di, next for .d. Andrei
Jun 15 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 11:23:58 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 10:07 AM, Steven Schveighoffer wrote:
 On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 dget would just add the appropriate path:

 import dcollections.TreeMap =>
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d

 hm.. doesn't work
 get
 http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di

 ok, there it is!
This assumes the URL contains the package prefix. That would work, but imposes too much on the URL structure. I find the notation -Upackage=url more general.
Look at the url again, I'll split out the include path and the import: [http://www.dsource.org/projects/dcollections/import] / [dcollections/TreeMap.di]
I understood the first time. Yes, so it imposes on the url structure that it ends with /dcollections/.
No, that was just because dcollections' home base is www.dsource.org/projects/dcollections. I.e. the fact that the import starts with dcollections and the include path contains dcollections is not significant. dget tries all paths just like dmd tries all paths.
 There is nothing being assumed by dget. It could try and import
 dcollections.TreeMap from some other remote path as well, and fail. It
 follows the same rules as the current import scheme, just with urls
 instead of paths.
I don't think it's a good idea to search several paths for a given import. One import should map to two download attempts: one for the .di, next for .d.
In the ideas I've outlined, dget is given include paths, not what packages are in those include paths. So the two attempts are per import path. However, this only happens on first usage, after that, they are cached, so there is no try-and-fail required. I can see why you want this, but in order for it to fit in with the current import path scheme, it would have to be this way. Otherwise, you'd need a different switch besides -I to implement it (or the pragma). Not that those cannot be implemented, but the simplistic "just specify a network include path like you would a local one" has appeal to me. -Steve
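The "cached after first use" behaviour could be as small as the sketch below, where the cache location, its layout and the probing strategy are all assumptions:

// Look in the local cache first; only on a miss probe the remote include paths.
import std.file : exists, mkdirRecurse, write;
import std.net.curl : get;
import std.path : buildPath, dirName;

string resolveImport(string cacheDir, string[] remotePaths, string relPath)
{
    auto cached = buildPath(cacheDir, relPath);
    if (exists(cached))
        return cached;                     // no network access on later builds
    foreach (base; remotePaths)
    {
        try
        {
            auto src = get(base ~ "/" ~ relPath).idup;
            mkdirRecurse(dirName(cached));
            write(cached, src);            // remember it for the next build
            return cached;
        }
        catch (Exception)
        {
            continue;                      // try the next remote path
        }
    }
    return null;                           // not found anywhere
}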
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 10:39 AM, Steven Schveighoffer wrote:
 I can see why you want this, but in order for it to fit in with the
 current import path scheme, it would have to be this way. Otherwise,
 you'd need a different switch besides -I to implement it (or the
 pragma). Not that those cannot be implemented, but the simplistic "just
 specify a network include path like you would a local one" has appeal to
 me.
I understand the appeal, but I also understand the inherent limitations of HTTP versus a file system. I don't think it's wise pounding the former into the shape of the other. Andrei
Jun 15 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 I understand the appeal, but I also understand the inherent
 limitations of HTTP versus a file system. I don't think it's wise
 pounding the former into the shape of the other.
AIUI, you can't search for a module on a local filesystem either, aside from guessing a name. In D, a module name doesn't necessarily have to match the file name. If you import foo.bar;, you have two options:

call it foo/bar.d

pass whatever.d that has "module foo.bar;" in it on the command line explicitly.

Being able to list directory entries doesn't really help locate a given named D module anyway.
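To make the second option concrete (file and symbol names here are made up):

// whatever.d -- the declared module name, not the file name, is what matters
module foo.bar;

void greet() {}

A program containing import foo.bar; then builds only if whatever.d is named explicitly, e.g. dmd app.d whatever.d; the compiler cannot derive the file name from the module name alone.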
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 11:00 AM, Adam D. Ruppe wrote:
 I understand the appeal, but I also understand the inherent
 limitations of HTTP versus a file system. I don't think it's wise
 pounding the former into the shape of the other.
AIUI, you can't search for a module on a local filesystem either, aside from guessing a name. In D, a module name doesn't necessarily have to match the file name. If you import foo.bar;, you have two options: call it foo/bar.d pass whatever.d that has "module foo.bar;" in it on the command line explicitly. Being able to do a list directory entries feature doesn't really help locate a given named D module anyway.
OK, good point. Still search is going on across all -I paths, which I think we shouldn't extend to URLs. Andrei
Jun 15 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 12:35:07 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 11:00 AM, Adam D. Ruppe wrote:
 I understand the appeal, but I also understand the inherent
 limitations of HTTP versus a file system. I don't think it's wise
 pounding the former into the shape of the other.
AIUI, you can't search for a module on a local filesystem either, aside from guessing a name. In D, a module name doesn't necessarily have to match the file name. If you import foo.bar;, you have two options: call it foo/bar.d pass whatever.d that has "module foo.bar;" in it on the command line explicitly. Being able to do a list directory entries feature doesn't really help locate a given named D module anyway.
OK, good point. Still search is going on across all -I paths, which I think we shouldn't extend to URLs.
I thought of a good reason why -Iurl is likely to cause problems:

-Ihttp://xxx.yyy.zzz/package1 -Ihttp://aaa.bbb.ccc/package2

import a;

if xxx.yyy.zzz/package1 and aaa.bbb.ccc/package2 both contain a module called a, then which one is used depends on the conditions of the network. For example, xxx.yyy.zzz could go offline momentarily, and so package2 is used. Or xxx.yyy.zzz/package1 could be moved to a different url. I know this is a remote possibility, but like you said, internet urls are not the same things as files.

I propose the following:

1. the parameter to -I (called an import path) can have an optional equals sign in it. If the equals sign is present, then the form is:

   full.module.path=<path_to_module>

   If this is the case, then failure to find full.module.path.x module inside the given path is recorded as an error (i.e. no other paths are searched).

2. we have a pragma(importpath, "<valid_import_path>"); which makes dmd treat the imports from that specific file as if -I<valid_import_path> was first in the path list. If the imported file is indirect (i.e. foo.d imports bar.d which imports baz.d), then the pragma'd path does not count when parsing the indirect file.

3. <path_to_module> can be either a file path or a url.

4. remote paths are treated like I've specified elsewhere, where the -I parameters of dmd (+ the pragmas) would be passed to dget.

I think this should give us maximum flexibility, and fit within the current system quite well -- as well as giving us more specific import control even for local paths.

-Steve
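Splitting the proposed -I form is trivial; a small sketch, where the helper name and tuple layout are illustrative only:

// Parse "full.module.path=<path_or_url>"; no '=' means an ordinary include path.
import std.string : indexOf;
import std.typecons : Tuple, tuple;

Tuple!(string, "pkg", string, "path") parseImportPath(string arg)
{
    auto eq = arg.indexOf('=');
    if (eq < 0)
        return tuple!("pkg", "path")("", arg);  // plain include path, searched as today
    // Pinned form: imports under "pkg" must come from "path", or it's an error.
    return tuple!("pkg", "path")(arg[0 .. eq], arg[eq + 1 .. $]);
}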
Jun 15 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 12:35 PM, Steven Schveighoffer wrote:
 I propose the following:
Excellent. I'm on board with everything. Could you please update the DIP reflecting these ideas? Thanks, Andrei
Jun 15 2011
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
It occurs to me that you ought to treat network things identically
to local modules in every way...

dmd app.d ../library.d

just works. I propose:

dmd app.d http://whatever.com/library.d

should just work too - dmd would need only to recognize module name
starts with xxxx:// and pass it to the dget program to translate.
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 10:52 AM, Adam D. Ruppe wrote:
 It occurs to me that you ought to treat network things identically
 to local modules in every way...

 dmd app.d ../library.d

 just works. I propose:

 dmd app.d http://whatever.com/library.d

 should just work too - dmd would need only to recognize module name
 starts with xxxx:// and pass it to the dget program to translate.
Thought of that, too, and also of the idea of two posters in this thread to have dget pipe the module to stdout. The main issue we need to address is what __FILE__ is for such modules and how we point users to the location of possible compilation and runtime errors. Andrei
Jun 15 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 11:56:25 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 10:52 AM, Adam D. Ruppe wrote:
 It occurs to me that you ought to treat network things identically
 to local modules in every way...

 dmd app.d ../library.d

 just works. I propose:

 dmd app.d http://whatever.com/library.d

 should just work too - dmd would need only to recognize module name
 starts with xxxx:// and pass it to the dget program to translate.
Thought of that, too, and also of the idea of two posters in this thread to have dget pipe the module to stdout. The main issue we need to address is what __FILE__ is for such modules and how we point users to the location of possible compilation and runtime errors.
Change as little as possible internally IMO, __FILE__ should be the url that dmd receives from the command line. That also brings up a good point -- dget has to tell dmd where it got the file from so it can fill in __FILE__ in the case where it's not in a pragma. -Steve
Jun 15 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 2:25 PM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 15:14:28 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
dmd: I need module foo.awesome. Where is it? filesystem: nope, don't have it dmd: damn, I guess I'll need to check with the downloader, hey downloader you have that? downloader: hm... oh, yeah! I'll get it for you filesystem: got it dmd: ok, now what's in foo.awesome? Oh, hm... foo.awesome needs bar.gnarly. Let me guess filesystem... filesystem: yeah, I suck today, go ask downloader ... How hard is that? I mean the actual downloading of files is pretty straightforward, at some point the problem reduces to "download a file". Why do we have to reinvent *that* wheel.
It's not hard, in fact that's almost how we want to implement it: a straight function that wraps a call to wget. The difference is that you'd migrate the function into a separate utility, and I think that's a good idea. (Walter prefers it inside the compiler.)
 Not sure I grok 3 and 4, but as far as I can tell the crux of the
 matter is that dependencies are already embedded in .d files. That's
 why I think it's simpler to just let dmd take care of them all instead
 of maintaining dependency description files in separation from the .d
 files.
And it would, why wouldn't it? I think you may not be getting something here...
 The umpteen billion tools don't know what it takes to download and
 build everything starting from one or a few root modules. They could
 execute the download, yes (and of course we'll use such a library for
 that), but we need a means to drive them.
dmd would drive them.
 I think Adam's tool is a good identification and alternative solution
 for the problem that the pragma solves.
I haven't seen it. Just thinking out loud...
http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=138556 Andrei
Jun 14 2011
next sibling parent Gerrit Wichert <gwichert yahoo.com> writes:
Am 14.06.2011 22:22, schrieb Andrei Alexandrescu:
 It's not hard, in fact that's almost how we want to implement it: a 
 straight function that wraps a call to wget. The difference is that 
 you'd migrate the function into a separate utility, and I think that's 
 a good idea. (Walter prefers it inside the compiler.)
It even seems possible that this 'plug-in' tool can become a source code provider which not only downloads the files for dmd to read, but pipes their content directly back into the compiler. I really can't imagine what possibilities this may offer.
Jun 14 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
What is dmdz?
Jun 14 2011
prev sibling parent Jimmy Cao <jcao219 gmail.com> writes:
On Tue, Jun 14, 2011 at 5:25 PM, Andrej Mitrovic <andrej.mitrovich gmail.com
 wrote:
 What is dmdz?
http://www.digitalmars.com/d/archives/digitalmars/D/dmdz_107472.html http://www.digitalmars.com/d/archives/digitalmars/D/dmdz_take_2_110937.html
Jun 14 2011
prev sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Tue, 14 Jun 2011 21:14:28 +0200, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 1:58 PM, Steven Schveighoffer wrote:
 I think it should be split as follows:

 dmd: determine *what* to download (i.e. I need to import module x.y.z)
It can't (unless it does the download too) due to transitive dependencies.
So pipe dmd's output to the build tool, dmd waits for the build tool to feed it some info on stdin, and then continues, using the new info to locate the downloaded files? This way, the build tool is free to grab stuff using git, svn, from additional library folders on the local computer, or even generate random modules to feed to dmd. -- Simen
Jun 14 2011
prev sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 17:41, schrieb Andrei Alexandrescu:
 On 6/14/11 10:26 AM, Daniel Gibson wrote:
 One additional file. I don't think having one file would be a burden to
 the programmer, not much more than adding pragmas in his code.

 But if there's central metadata repository even this additional file
 isn't needed - neither are pragmas - (the build-tool will ask that repo
 where to find the lib/module), unless the lib is kind of obscure or
 brand-new and thus not known by the metadata repo. And in that case:
 it's just a single file.

 (Of course it would be possible to periodically or via "build-tool
 update" - like apt-get update - fetch the metadata, so the server
 doesn't have to be asked each time.)
I agree that a build tool is an alternative. The problem is that it's front-heavy - we need to design the config file format, design the tool to deal with various dependency issues etc.
And that would be harder than getting support for this in the compiler? (Especially since there are already tools that could be enhanced)
 Many of these issues are
 solved by design or don't even exist with the pragma, for example
 there's no need for a config file format or for config files in the
 first place (although they can be easily defined as small .d files
 consisting of pragmas).
 
 One thing I like about the pragma is that it's mostly mechanism and very
 little policy, while at the same time fostering very simple,
 straightforward policies. Source files can carry their own dependencies
 (but they don't need to), transitivity just works, (re|in)direction and
 central repos are possible without being required.
 
 One other interesting aspect is that the string literal can be
 CTFE-constructed, i.e. may include a path to a library depending on the
 OS, version, etc. An external tool would need to give up on that (and
 use multiple configuration files) or invent its own string manipulation
 primitives.
 
 
 Andrei
Jun 14 2011
prev sibling next sibling parent reply Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
 
 Destroy.
 
 
 Andrei
What's the expected payload format? A text/plain D source-file? A zip or tar archive? If an archive, what's the required directory layout? Should "dsource.foo.baz" be required to be in "/dsource/foo/baz.d" within the archive? And if not an archive, how to reasonably handle multi-file packages? Best, Graham
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 10:27 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
What's the expected payload format? A text/plain D source-file? A zip or tar archive?
Text for now.
 If an archive, what's the required directory layout? Should
 "dsource.foo.baz" be required to be in "/dsource/foo/baz.d" within the
 archive?
I agree we need to address these in the future, and also binary distributions (e.g. .di files + .a/.lib files).
 And if not an archive, how to reasonably handle multi-file packages?
Consider a library "acme" consisting of three files: widgets.d, gadgets.d, fidgets.d in "http://acme.com/d/". It also depends on the external library monads on "http://nad.mo/d". // User code: pragma(lib, acme, "http://acme.com/d/"); import acme.widgets; ... use ... // widgets.d // Assume it depends on other stuff in the same lib // and on monads.d pragma(lib, monads, "http://nad.mo/d/"); import acme.gadgets, acme.fidgets, monads.io; This is all that's needed for the compiler to download and compile everything needed. Andrei
Jun 14 2011
next sibling parent reply Graham Fawcett <fawcett uwindsor.ca> writes:
On Tue, 14 Jun 2011 10:31:59 -0500, Andrei Alexandrescu wrote:

 On 6/14/11 10:27 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
What's the expected payload format? A text/plain D source-file? A zip or tar archive?
Text for now.
 If an archive, what's the required directory layout? Should
 "dsource.foo.baz" be required to be in "/dsource/foo/baz.d" within the
 archive?
I agree we need to address these in the future, and also binary distributions (e.g. .di files + .a/.lib files).
 And if not an archive, how to reasonably handle multi-file packages?
Consider a library "acme" consisting of three files: widgets.d, gadgets.d, fidgets.d in "http://acme.com/d/". It also depends on the external library monads on "http://nad.mo/d". // User code: pragma(lib, acme, "http://acme.com/d/"); import acme.widgets; ... use ... // widgets.d // Assume it depends on other stuff in the same lib // and on monads.d pragma(lib, monads, "http://nad.mo/d/"); import acme.gadgets, acme.fidgets, monads.io; This is all that's needed for the compiler to download and compile everything needed.
So, to clarify:
 pragma(lib, acme, "http://acme.com/d/");
...establishes a "prefix" relationship: modules names prefixed with "acme." may be found at URLs prefixed with "http://acme.com/d/". So we would expect to find: acme.widgets at http://acme.com/d/widgets.d acme.widgets.core.x at http://acme.com/d/widgets/core/x.d Correct? Graham
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 11:26 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 10:31:59 -0500, Andrei Alexandrescu wrote:

 On 6/14/11 10:27 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
What's the expected payload format? A text/plain D source-file? A zip or tar archive?
Text for now.
 If an archive, what's the required directory layout? Should
 "dsource.foo.baz" be required to be in "/dsource/foo/baz.d" within the
 archive?
I agree we need to address these in the future, and also binary distributions (e.g. .di files + .a/.lib files).
 And if not an archive, how to reasonably handle multi-file packages?
Consider a library "acme" consisting of three files: widgets.d, gadgets.d, fidgets.d in "http://acme.com/d/". It also depends on the external library monads on "http://nad.mo/d". // User code: pragma(lib, acme, "http://acme.com/d/"); import acme.widgets; ... use ... // widgets.d // Assume it depends on other stuff in the same lib // and on monads.d pragma(lib, monads, "http://nad.mo/d/"); import acme.gadgets, acme.fidgets, monads.io; This is all that's needed for the compiler to download and compile everything needed.
So, to clarify:
 pragma(lib, acme, "http://acme.com/d/");
...establishes a "prefix" relationship: modules names prefixed with "acme." may be found at URLs prefixed with "http://acme.com/d/". So we would expect to find: acme.widgets at http://acme.com/d/widgets.d acme.widgets.core.x at http://acme.com/d/widgets/core/x.d Correct? Graham
Yes, that's the intent. I added an explanation and an example to subsection "Package case". Andrei
Jun 14 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-14 17:31, Andrei Alexandrescu wrote:
 On 6/14/11 10:27 AM, Graham Fawcett wrote:
 On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
What's the expected payload format? A text/plain D source-file? A zip or tar archive?
Text for now.
If the tool downloads individual text files it will be quite inefficient - it's like comparing an svn checkout with a git clone, and the git clone is a lot more efficient. It would be better to download an archive of some sort.
 If an archive, what's the required directory layout? Should
 "dsource.foo.baz" be required to be in "/dsource/foo/baz.d" within the
 archive?
I agree we need to address these in the future, and also binary distributions (e.g. .di files + .a/.lib files).
 And if not an archive, how to reasonably handle multi-file packages?
Consider a library "acme" consisting of three files: widgets.d, gadgets.d, fidgets.d in "http://acme.com/d/". It also depends on the external library monads on "http://nad.mo/d". // User code: pragma(lib, acme, "http://acme.com/d/"); import acme.widgets; ... use ... // widgets.d // Assume it depends on other stuff in the same lib // and on monads.d pragma(lib, monads, "http://nad.mo/d/"); import acme.gadgets, acme.fidgets, monads.io; This is all that's needed for the compiler to download and compile everything needed. Andrei
-- /Jacob Carlborg
Jun 17 2011
parent Adam D. Ruppe <destructionator gmail.com> writes:
 It would be better to download an archive of some sort.
For cases when this is necessary, it'd be easy enough to grab a .zip for the package rather than the .d for the module. The .zips could take a cue from the Slackware package format too, which simply puts things in the appropriate folders to be added to your installation, then zips it right up:

src/package.name/file.d
bin/package.name.dll
lib/package.name.lib
package.name.txt (this contains metadata if you want it)

You can unzip it locally, in your dmd folder, or wherever, and then the files are available for use if your -L and -I paths are good. Thus, the same basic idea covers binary library distributions too. This could be in addition to grabbing the .d files alone.
Jun 17 2011
prev sibling next sibling parent Brad Anderson <eco gnuk.net> writes:
I like it (specifically because of its simplicity).  It's not going to work
for projects that require a more complex build process using a build tool
but for simple modules it's a rather elegant solution.  Projects that need a
build tool don't need to use it and can just continue using a build tool and
manually managing their external packages (hopefully eventually using
whatever gem/CPAN-style package proposal is finally adopted).

I think it's a great stopgap until the D community has the manpower to
create (and more importantly, maintain) something like gem.  There are
certainly some details to work out but I like the overall idea.

For people new to any language the most confusing (and usually poorly
documented) part is the build environment.  "Where do I get this package,
where do I have to put it to use it, how do I even build it?"  Having to
learn that for every external package you want to use is a big roadblock to
anyone who is new.  This proposal doesn't eliminate that entirely, but it does
get rid of the simpler cases for those who choose to use it.

Regards,
Brad Anderson

On Tue, Jun 14, 2011 at 7:53 AM, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Jun 14 2011
prev sibling next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Andrei Alexandrescu Wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
 
 Destroy.
 
 
 Andrei
I'm in agreement with those saying it doesn't belong in the compiler/language. In order for it to make sense in this location it would need to take on the role of a build tool, which, as already stated, would cause more confusion when something goes wrong. How confusing is it to say: DMD downloads required modules for you, but will fail to build an executable because you must tell it to use those files. Sure, rdmd is meant to provide a seamless experience, but this results in more back and forth between compiler and build tool.

Then there is the caching issue. The proposal has no solution on this, and for good reason. The goal isn't to cache, it is to install the library, meaning it is persistent and usable by other projects.

Installing a library can involve many things, and especially for D can mean compiling C or installing libraries. This solution is looking at a smaller scope, but I don't think it really saves on a "configuration" file.

On a note about build tools: I'm like you and Walter in that they always seem so complicated and very fragile, and personally go with simple Makefiles. I'm not really familiar with the problems many of these tools are trying to solve.

I was reading up on redo, and whether it was intended or not, I found one idea that really stuck with me: file transformation operations. Make does a really nice job of dependency resolution and I think this idea of taking a list of dependencies and transforming them into another file makes simple files. So to do an incremental build for D:

.o: .d
     dmd -c $2

: .o
     dmd -of$3 $2

mytarget: target.d depended.d on.d files.d

Ok, I haven't gone into depth with what build tools should also be solving, or how to get it working, but this is just my initial "hey I want that."
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 1:14 PM, Jesse Phillips wrote:
 Andrei Alexandrescu Wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
I'm in agreement with those saying it doesn't belong in the compiler/language. In order for it to make sense in this location it would need to take on of a build tool role, which has already be stated to add cause more confusion when something goes wrong. How confusing is it to say, DMD downloads required modules for you, but will fail to build an executable because you must tell it to use those files. Sure using rdmd is to provide a seamless experience, but this results in more back and forth between compiler and build tool.
I agree that if this makes it through, we need to have a means to instruct dmd to also include the module in the build.
 Then there is the caching issue. The proposal has no solution on this, and for
good reason. The goal isn't to cache it is to install the library, meaning it
is persistent and usable by other projects.
I used "caching" informally and I mentioned the liabilities of using e.g. /tmp/ for it. CPAN uses a similar technique, it puts installed libraries in a dir dictated by PERL5LIB. We should devise a similar method.
 Installing a library can involve many things, and especially for D can mean
compiling C or installing libraries. This solution is looking at a smaller
scope, but I don't think it really saves on a "configuration" file.
I agree there are libraries that won't be automatically installable with this feature.
 On a note about build tools. I'm like you and Walter in that they always seem
so complicated and very fragile. And personally go with simple Makefiles. I'm
not really familiar with the problems many of these tools are trying to solve.

 I was reading up on redo, and whether it was intended or not, I found one idea
that really stuck with me. File transformation operations. Make does a really
nice job of dependency resolution and I think this idea of taking a list of
dependencies and transforming them into another file makes simple files. So to
do an incremental build for D:

 .o: .d
      dmd -c $2

 : .o
      dmd -of$3 $2

 mytarget: target.d depended.d on.d files.d

 Ok, I haven't gone into depth with what build tools should also be solving, or
how to get it working. but this is just my initial "hey I want that."
I use rdmd in conjunction with small makefiles too. I think the proposal complements such approaches. Andrei
Jun 14 2011
prev sibling next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see:

* Remote server gets hacked, everyone using the library now executes malicious code
* Remote source changes how it is built, your code suddenly breaks and has to be updated, rather than being handled automatically
* Adds a lot of unnecessary bloat and/or dependency on external modules
  + Want to compress source code? dmd now depends on decompression libs
  + Want to use git? dmd now depends on git
  + Remote code uses a new compression method that an older dmd doesn't support
* Remote server is down - build takes forever while waiting
  + Make dmd time out after a couple of seconds - build fails
* Makes the assumption that the build machine has internet connectivity; if it doesn't, building suddenly gets a lot more complicated
* Source code changes location, build breaks unless a redirect is possible - if it changes protocol it's useless

I could go on. I believe the real solution to this is to have (as discussed a lot recently) a proper D package management tool like PHP's pecl, ruby's gem or perl's cpan. Of course, this doesn't mean we have to lose the ability to list dependencies in D etc. In fact, it seems like the perfect opportunity to get people to switch to using D to build projects (everyone should, I do it and it's the best build tool I've *ever* used).

Hypothetical D Package Manager:

foobar
|
` pragma(depend, "foo", "v1.2.x");
` pragma(depend, "bar", "v1.4.3");

$ dpm install foobar
 -> Do packages exist locally, are they the right version?
 -> Do they exist remotely, and do the given versions exist?
 -> Get the remote packages
 -> Get the D Build Tool to build it (or use a binary, if available)

$ dbt build foobar
 -> Is there a default.dbt or foobar.dbt file?
 -> If not, attempt to build a binary, use -lib to attempt to build as a library
 -> If there is, pass it to dmd, it's actually a D file describing how to build

Of course, the dbt file would have access to some helper functions, eg library("myDir").build for building a library out of all the files in myDir (there should be a way to specify the files etc). dbt would obviously take care of compiler flags/compiler etc.

I started implementing this the other day, got a few lines into a main() then realised I didn't have enough time to build the tool I wanted :>

-- 
Robert
http://octarineparrot.com/
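For concreteness, a rough sketch of what such a dbt build description might look like, going by the description above - the dbt module and the depend()/library() helpers are hypothetical, not an existing API:

// default.dbt - hypothetical build description; the dbt module and the
// depend()/library() helpers are assumptions sketched from the text above
import dbt;

void main()
{
    // declare dependencies, much like the pragma(depend, ...) lines above
    depend("foo", "v1.2.x");
    depend("bar", "v1.4.3");

    // build everything under "src" into a library; dbt picks the compiler
    // and flags
    library("src").build();
}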
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
 * Remote source changes how it is built, your code suddenly breaks and
 has to be updated, rather than being handled automatically
This is a deployment issue affecting this approach and any other relying on downloading stuff.
 * Adds a lot of unnecessary bloat and/or dependency on external modules
 + Want to compress source code? dmd now depends on decompression libs
I think compression will indeed be commonly requested. The same thing happened with Java - initially it relied on downloading .class files, but jar files were soon to follow. It's been a feature asked for in this forum, independently of downloads. A poster implemented a complete rdmd-like program that deals with .zip files.
 + Want to use git? dmd now depends on git
Not if the server can serve files, or if you use a different tool.
 + Remote code uses new compression method that an older dmd doesn't
 support
If compression handling is needed, dmd can standardize on it just like jar files do.
 * Remote server is down - build takes forever while waiting
So does downloading or building with another tool.
 + Make dmd time out after a couple of seconds - build fails
So would build directed with any other tool.
 * Makes the assumption that the build machine is has internet
 connectivity, if it doesn't building suddenly gets a lot more
 complicated
Fair point.
 * Source code changes location, build breaks unless a redirect is
 possible - if it changes protocol it's useless
See my answer with a central repo. My understanding is that you find automated download during the first build untenable, but manual download prior to the first build acceptable. I don't see such a large fracture between the two cases as you do. Andrei
Jun 14 2011
parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc; this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
 * Remote source changes how it is built, your code suddenly breaks and
 has to be updated, rather than being handled automatically
This is a deployment issue affecting this approach and any other relying on downloading stuff.
It doesn't affect anything if a proper package management/build tool is in use, as the remote code specifies how it is built, rather than the local code.
 * Adds a lot of unnecessary bloat and/or dependency on external modules
 + Want to compress source code? dmd now depends on decompression libs
Indeed, I think compression will indeed be commonly requested. The same has happened about Java - initially it relied on downloading .class files, but then jar files were soon to follow. It's been a feature asked in this forum, independently of downloads. A poster implemented a complete rdmd-like program that deals with .zip files.
 + Want to use git? dmd now depends on git
Not if the server can serve files, or if you use a different tool.
But then you lose the advantages of using git to get the source at all.
 + Remote code uses new compression method that an older dmd doesn't
 support
If compression handling is needed, dmd can standardize on it just like jar files do.
 * Remote server is down - build takes forever while waiting
So does downloading or building with another tool.
Not so if you get all the source at once rather than depending on getting it during build.
 + Make dmd time out after a couple of seconds - build fails
So would build directed with any other tool.
 * Makes the assumption that the build machine is has internet
 connectivity, if it doesn't building suddenly gets a lot more
 complicated
Fair point.
For the previous few points, where you're unable to download the package for whatever reason, it means you have to duplicate build instructions. Do this, otherwise here's how to do it all manually.
 * Source code changes location, build breaks unless a redirect is
 possible - if it changes protocol it's useless
See my answer with a central repo. My understanding is that you find automated download during the first build untenable, but manual download prior to the first build acceptable. I don't see such a large fracture between the two cases as you do.
I don't have a problem with automatically downloading source during a first build, but I do see a problem with getting the compiler to do it. I don't believe the compiler should have anything to do with getting source code, unless the compiler also becomes a package manager and build tool.

-- 
Robert
http://octarineparrot.com/
Jun 14 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 21:34, schrieb Robert Clipsham:
 On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc, this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
Signing the files/hashes with GPG helps (as long as the developer's private key isn't on the server).
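As a usage sketch (the gpg flags are real, the file names illustrative): the library author signs the archive once with an offline key, and the fetch tool or user verifies it against the author's published public key before building:

$ gpg --armor --detach-sign libX-1.2.3.zip          # author: produces libX-1.2.3.zip.asc
$ gpg --verify libX-1.2.3.zip.asc libX-1.2.3.zip    # user/fetch tool: check before use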
Jun 14 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 2:41 PM, Daniel Gibson wrote:
 Am 14.06.2011 21:34, schrieb Robert Clipsham:
 On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc, this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
Signing the files/hashes with GPG helps (as long as the developers private key isn't on the server).
Could you please add a subsection to the trust model discussing such a possibility? Thanks, Andrei
Jun 14 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.06.2011 22:27, schrieb Andrei Alexandrescu:
 On 6/14/11 2:41 PM, Daniel Gibson wrote:
 Am 14.06.2011 21:34, schrieb Robert Clipsham:
 On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc, this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
Signing the files/hashes with GPG helps (as long as the developers private key isn't on the server).
Could you please add a subsection to the trust model discussing such a possibility? Thanks, Andrei
Done
Jun 14 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 2:34 PM, Robert Clipsham wrote:
 On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc, this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
You may want to update the proposal with the appropriate security artifacts. [snip]
 I don't have a problem with automatically downloading source during a
 first build, I do see a problem with getting the compiler to do it
 though. I don't believe the compiler should have anything to do with
 getting source code, unless the compiler also becomes a package manager
 and build tool.
Would you agree with the setup in which the compiler interacts during compilation with an external executable, placed in the same dir as the compiler, and with this spec?

dget "url"

Gets "url" and prints the local dir to stdout, or fails and prints an error message to stderr.

Then the matter is to write dget - in D!

I feel this is going somewhere.


Andrei
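A minimal sketch of what such a dget could look like, written against present-day Phobos; the cache location and error handling are placeholders, not part of any spec:

import std.file : exists, mkdirRecurse;
import std.net.curl : download;
import std.path : baseName, buildPath;
import std.stdio : stderr, writeln;

int main(string[] args)
{
    if (args.length != 2)
    {
        stderr.writeln("usage: dget <url>");
        return 1;
    }
    auto url    = args[1];
    auto dir    = ".dget-cache";                 // assumed cache directory
    auto target = buildPath(dir, baseName(url)); // e.g. .dget-cache/module.d

    try
    {
        if (!exists(target))                     // fetch only when not cached
        {
            mkdirRecurse(dir);
            download(url, target);
        }
        writeln(dir);                            // compiler reads the dir from stdout
        return 0;
    }
    catch (Exception e)
    {
        stderr.writeln("dget: ", e.msg);         // failures go to stderr, per the spec
        return 1;
    }
}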
Jun 14 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/14/11 2:34 PM, Robert Clipsham wrote:
 On 14/06/2011 20:07, Andrei Alexandrescu wrote:
 On 6/14/11 1:22 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
This doesn't seem like the right solution to the problem - the correct solution, in my opinion, is to have a build tool/package manager handle this, not the compiler. Problems I see: * Remote server gets hacked, everyone using the library now executes malicious code
This liability is not different from a traditional setup.
Perhaps, but with a proper package management tool this can be avoided with sha sums etc, this can't happen with a direct get. Admittedly this line of defense falls if the intermediate server is hacked.
You may want to update the proposal with the appropriate security artifacts. [snip]
 I don't have a problem with automatically downloading source during a
 first build, I do see a problem with getting the compiler to do it
 though. I don't believe the compiler should have anything to do with
 getting source code, unless the compiler also becomes a package manager
 and build tool.
Would you agree with the setup in which the compiler interacts during compilation with an external executable, placed in the same dir as the compiler, and with this spec? dget "url"
I'd rather have it be

dget "includepath" module1 [module2 module3 ...]

Then use -I to specify include paths that are url forms. You specify the possible network include paths with:

-Ihttp://path/to/source

I think this goes well with the current dmd import model. dget would be responsible for caching and updating the cache if the remote file changes.

-Steve
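To illustrate (URL and module names made up), the compiler invocation and the fetch it would trigger might look like:

$ dmd -Ihttp://svn.dsource.org/projects/libX/trunk app.d

and, on hitting an import of libX.foo it can't find locally, dmd would run something along the lines of:

$ dget "http://svn.dsource.org/projects/libX/trunk" libX.foo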
Jun 15 2011
parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 15 Jun 2011 08:57:04 -0400, Steven Schveighoffer wrote:

 On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Would you agree with the setup in which the compiler interacts during
 compilation with an external executable, placed in the same dir as the
 compiler, and with this spec?

 dget "url"
I'd rather have it be dget "includepath" module1 [module2 module3 ...] Then use -I to specify include paths that are url forms. Then you specify the possible network include paths with: -Ihttp://path/to/source I think this goes well with the current dmd import model. dget would be responsible for caching and updating the cache if the remote file changes.
++vote; -Lars
Jun 15 2011
prev sibling next sibling parent foobar <foo bar.com> writes:
Andrei Alexandrescu Wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
 
 Destroy.
 
 
 Andrei
Are we trying to implement the confused deputy problem? [ http://en.wikipedia.org/wiki/Confused_deputy_problem ]
Jun 14 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:it7pd2$2m07$1 digitalmars.com...
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
After all that talk about how we need to be very cautious about adding new features to the compiler and work with the existing language whenever possible, only a few days later we're seriously considering adding an entire *build system* to the compiler? And let's not fool ourselves: in order for this not to be half-baked, it would have to completely take over all the roles handled by a full-featured build-and-package-management system. Just off the top of my head:

- Putting it in the compiler forces it all to be written in C++. As an external tool, we could use D.

- By default, it ends up downloading an entire library one inferred source file at a time. Why? Libraries are a packaged whole. Standard behavior should be for libraries to be treated as such.

- Are we abandoning zdmd now? (Or is it "dmdz"?)

- Does it automatically *compile* the files it downloads or merely use them to satisfy imports? If the latter, then the whole proposal becomes pointless - you'll just need to tie it in with RDMD anyway, so you may as well just keep it outside the compiler. If the former, then you're implicitly having DMD creep into RDMD's territory - so either be explicit about it and take it all the way by putting all of rdmd in there, or get rid of it and let the build tools handle package-management matters.

- Does every project that uses libX have to download it separately? If not (or really even if so), how does the compiler handle different versions of the lib and prevent "dll hell"? Versioning seems to be an afterthought in this DIP - and that's a guaranteed way to eventually find yourself in dll hell.

- How do you tell it to "update libX"? Not by expecting the user to manually clear the cache, I hope.

- With a *real* package management tool, you'd have a built-in (and configurable) list of central data sources. If you want to use something you don't have installed, and it exists in one of the stores (maybe even one of the built-in ones), you don't have to edit *ANYTHING AT ALL*. It'll just grab it, no changes to your source needed at all, and any custom steps needed would be automatically handled. And if it was only in a data store that you didn't already have in your list, all you have to do is add *one* line. Which is just as easy as the DIP, but that *one* step will also suffice for any other project that needs libX - no need to add the line for *each* of your libX-using projects. Heck, you wouldn't even need to edit a file, just do "package-tool addsource http://...". The DIP doesn't even remotely compare.

- I think you're severely overestimating the number of extra dmd invocations that would be needed by using an external build tool. I believe this is because your idea centers around discovering one file at a time instead of properly handling packages at the *package* level. Consider this: You tell BuildToolX to build MyApp. It looks at MyApp.config to see what libs it needs. It discovers LibX is needed. It fetches LibX.config, and finds its dependencies. Etc, building up a dependency graph. It checks for any problems with the dependency graph before doing any real work (something the DIP can't do). Then it downloads the libs, and *maybe* runs some custom setup on each one. If the libs don't have any custom setup, you only have *one* DMD invocation (two if you use RDMD). If the libs do have any custom setup, and it involves running dmd, then that *only* happens the first time you build MyApp (until you update one of the libs, causing its one-time setup to run once more).
I think this proposal is a hasty idea that just amounts to chasing after "the easy way out".
Jun 14 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:it8kkv$20hr$1 digitalmars.com...
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:it7pd2$2m07$1 digitalmars.com...
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
After all that talk about how we need to be very cautious about adding new features to the compiler and work with the existing language whenever possible, only a few days later now we're seriously considering adding an entire *build system* to the compiler? And let's not fool ourselves: in order for this not to be half-baked, it would have to completely take over all the roles handled by a full-featured build-and-package-management system. Just off the top of my head: - Putting it in the compiler forces it all to be written in C++. As an external tool, we could use D. - By default, it ends up downloading an entire library one inferred source file at a time. Why? Libraries are a packaged whole. Standard behavior should be for libraries should be treated as such. - Are we abandoning zdmd now? (Or is it "dmdz"?) - Does it automatically *compile* the files it downloads or merely use them to satisfy imports? If the latter, then the whole proposal becomes pointless - you'll just need to tie it in with RDMD anyway, so you may as well just keep it outside the compiler. If the former, then you're implicitly having DMD creep into RDMD's territory - So either be explicit about it and take it all the way putting all of rdmd into there, or get rid of it and let the build tools handle package-management matters. - Does every project that uses libX have to download it separately? If not (or really even if so), how does the compiler handle different versions of the lib and prevent "dll hell"? Versioning seems to be an afterthought in this DIP - and that's a guaranteed way to eventually find yourself in dll hell. - How do you tell it to "update libX"? Not by expecting the user to manually clear the cache, I hope. - With a *real* package management tool, you'd have a built-in (and configurable) list of central data sources. If you want to use something you don't have installed, and it exists in one of the stores (maybe even one of the built-in ones), you don't have to edit *ANYTHING AT ALL*. It'll just grab it, no changes to your source needed at all, and any custom steps needed would be automatically handled. And if it was only in a data store that you didn't already have in your list, all you have to do is add *one* line. Which is just as easy as the DIP, but that *one* step will also suffice for any other project that needs libX - no need to add the line for *each* of your libX-using projects. Heck, you wouldn't even need to edit a file, just do "package-tool addsource http://...". The DIP doesn't even remotely compare. - I think you're severely overestimating the amount of extra dmd-invokations that would be needed by using an external build tool. I beleive this is because your idea centers around discovering one file at a time instead of properly handling packages at the *package* level. Consider this: You tell BuildToolX to build MyApp. It looks at MyApp.config to see what libs it needs. It discovers LibX is needed. It fetches LibX.config, and finds it's dependencies. Etc, building up a dependency graph. It checks for any problems with the dependency graph before doing any real work (something the DIP can't do). Then it downloads the libs, and *maybe* runs some custom setup on each one. If the libs don't have any custom setup, you only have *one* DMD invokation (two if you use RDMD). If the libs do have any custom setup, and it involves running dmd, then that *only* happens the first time you build MyApp (until you update one of the libs, causing it's one-time setup to run once more).
Also, if you do want to throw away the "*.config" file (which might not be a good idea) and truly have "no editing needed" by inferring library dependencies from dmd's deps output, you still don't need a lot of extra dmd invocations: just one extra deps-gathering invocation each time a deps-gathering invocation finds unsatisfied dependencies, and *only* the first time you build.
 I think this proposal is a hasty idea that just amounts to chasing after 
 "the easy way out".


 
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 Just one extra deps-gathering invokation each time a
 deps-gathering invokation finds unsatisfied depenencies, and *only*
 the first time you build.
It could probably cache the last successful command...
Jun 14 2011
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
I left this thought half finished. Where would it cache? What if
there are several different projects or configurations in one folder?

If redoing the process just works each time, it really simplifies
those scenarios.
Jun 14 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it91b0$aa0$1 digitalmars.com...
 Nick Sabalausky wrote:
 Just one extra deps-gathering invokation each time a
 deps-gathering invokation finds unsatisfied depenencies, and *only*
 the first time you build.
It could probably cache the last successful command...
Nothing would need to be cached. After the initial "gather everything and build" build, all it would ever have to do is exactly what RDMD already does right now: run DMD once to find the deps, check them to see if anything needs to be rebuilt, and if so, run DMD a second time to build. There'd never be any need for more than those two invocations (and the first one tends to be much faster anyway) until a new library dependency is introduced.
Jun 14 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 After the initial "gather everything and
 build" build, all it would ever have to do is exactly what RDMD
 already does right now: Run DMD once to find the deps, check them
 to see if anything needs rebuilt, and if so, run DMD the second
 time to build.
Does rdmd handle cases where the dependencies have dependencies?

Suppose app.d imports foo.d which imports bar.d.

dmd app.d
   can't find module in foo.d

retry:

dmd app.d foo.d
   can't find module bar.d

try again:

dmd app.d foo.d bar.d
   success.

Is it possible to cut out any one of those steps without caching that third dmd line? Until you try to compile foo.d, it can't know bar.d is required...
Jun 14 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it93ah$ekb$1 digitalmars.com...
 After the initial "gather everything and
 build" build, all it would ever have to do is exactly what RDMD
 already does right now: Run DMD once to find the deps, check them
 to see if anything needs rebuilt, and if so, run DMD the second
 time to build.
Does rdmd handle cases where the dependencies have dependencies? Suppose app.d imports foo.d which imports bar.d dmd app.d can't find module in foo.d retry: dmd app.d foo.d can't find module bar.d try again: dmd app.d foo.d bar.d success. Is it possible to cut out any one of those steps without caching that third dmd line? Until you try to compile foo.d, it can't know bar.d is required...
RDMD never needs to invoke DMD more than twice. Once to find "all" the dependencies, and once to do the actual compile. When DMD is run to find a file's dependencies, it finds the *entire* dependency graph, not just one step of it.
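Roughly, the two passes look like this (flag spellings as in current dmd; the file list in the second pass comes from the graph produced by the first):

$ dmd -deps=deps.txt -o- main.d          # pass 1: emit the full import graph, no codegen
$ dmd main.d helper.d libA/all.d ...     # pass 2: actual compile, only if something changed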
Jun 14 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 RDMD never needs to invoke DMD more than twice.
rdmd also doesn't attempt to download libraries.
Jun 14 2011
parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it97ee$ome$1 digitalmars.com...
 Nick Sabalausky wrote:
 RDMD never needs to invoke DMD more than twice.
rdmd also doesn't attempt to download libraries.
I know, but that's beside the point. You may need a few invocations of DMD or RDMD to *get* dependencies you don't already have (but *at most* only one per library, fewer if any part of the chain has more than one dependency), but *once you have them*, then *one* call to DMD will find *all* the files needed.

I'll use an example:

MyApp:
 - main.d: Imports 'helper' and 'libA.all'
 - helper.d: Imports 'libB.all'

LibA:
 - libA/all.d: Imports 'libA.util'
 - libA/util.d: Imports nothing of interest.

LibB:
 - libB/all.d: Imports 'libB.baz'
 - libB/baz.d: Imports 'libC'

LibC:
 - libC.d: Imports nothing of interest.

Now, you *only* have the source for MyApp, none of the libs. You build it:

$ nicks-build-tool main.d -of:MyApp.exe
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all
Missing: libA.all, libB.all
Checking if deps can be downloaded...
libA.all: Exists in LibA
libB.all: Exists in LibB
Downloading LibA...done
Downloading LibB...done
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: libC
Checking if deps can be downloaded...
libC.d: Exists in LibC
Downloading LibC...done
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: {none}
Checking if need to rebuild...yes, MyApp.exe missing
Invoking dmd to compile everything...
Done.

Now you make changes to MyApp and want to build again:

$ nicks-build-tool main.d -of:MyApp.exe
Invoking dmd to find deps of main.d...
Deps: helper, libA.all, libB.all, libA.util, libB.baz, libC
Missing: {none}
Checking if need to rebuild...yes, main.d and helper.d changed.
Invoking dmd to compile everything...
Done.

DMD only needs to be invoked a small handful of times, and only when a library is missing. However, IMO, it would be far better to have dependency metadata for each lib/project rather than picking through the source and inferring packages.
Jun 14 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/14/2011 09:29 PM, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:it93ah$ekb$1 digitalmars.com...
 After the initial "gather everything and
 build" build, all it would ever have to do is exactly what RDMD
 already does right now: Run DMD once to find the deps, check them
 to see if anything needs rebuilt, and if so, run DMD the second
 time to build.
Does rdmd handle cases where the dependencies have dependencies? Suppose app.d imports foo.d which imports bar.d dmd app.d can't find module in foo.d retry: dmd app.d foo.d can't find module bar.d try again: dmd app.d foo.d bar.d success. Is it possible to cut out any one of those steps without caching that third dmd line? Until you try to compile foo.d, it can't know bar.d is required...
RDMD never needs to invoke DMD more than twice. Once to find "all" the dependencies, and once to do the actual compile. When DMD is run to find a file's dependencies, it finds the *entire* dependency graph, not just one step of it.
It can do so because all files are present. The remote tool can't do that. Andrei
Jun 14 2011
parent "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:it9dt7$16fs$1 digitalmars.com...
 On 06/14/2011 09:29 PM, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:it93ah$ekb$1 digitalmars.com...
 After the initial "gather everything and
 build" build, all it would ever have to do is exactly what RDMD
 already does right now: Run DMD once to find the deps, check them
 to see if anything needs rebuilt, and if so, run DMD the second
 time to build.
Does rdmd handle cases where the dependencies have dependencies? Suppose app.d imports foo.d which imports bar.d dmd app.d can't find module in foo.d retry: dmd app.d foo.d can't find module bar.d try again: dmd app.d foo.d bar.d success. Is it possible to cut out any one of those steps without caching that third dmd line? Until you try to compile foo.d, it can't know bar.d is required...
RDMD never needs to invoke DMD more than twice. Once to find "all" the dependencies, and once to do the actual compile. When DMD is run to find a file's dependencies, it finds the *entire* dependency graph, not just one step of it.
It can do so because all files are present. The remote tool can't do that.
Right. All I'm saying is:

1. The remote tool *can* do that **after** the first build.

2. On the first build, the number of times DMD needs to be invoked is fairly limited. As far as finding deps goes (and not counting any special lib-specific setup steps), the upper bound is 1+(number of libs needed).

Of course, that's if you do things one-library-at-a-time. If you try to do things one-file-at-a-time (which is of dubious benefit), *then* the number of DMD invocations would explode.
Jun 14 2011
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 8:44 PM, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:it91b0$aa0$1 digitalmars.com...
 Nick Sabalausky wrote:
 Just one extra deps-gathering invokation each time a
 deps-gathering invokation finds unsatisfied depenencies, and *only*
 the first time you build.
It could probably cache the last successful command...
Nothing would need to be cached. After the initial "gather everything and build" build, all it would ever have to do is exactly what RDMD already does right now: Run DMD once to find the deps, check them to see if anything needs rebuilt, and if so, run DMD the second time to build. There'd never be any need for more than those two invokations (and the first one tends to be much faster anyway) until a new library dependency is introduced.
I think this works, but I personally find it clumsy. Particularly because when dmd fails, you don't know exactly why - it may have been an import, or it may have been something else. So the utility needs to essentially remember the last import attempted (which won't work once the compiler uses multiple threads) and scrape dmd's stderr output, parsing it for something that looks like a specific "module not found" error message (see http://arsdnet.net/dcode/build.d). It's quite a shaky design that relies on a bunch of stars aligning.

Andrei
Jun 15 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 4:38 PM, Nick Sabalausky wrote:
 - Putting it in the compiler forces it all to be written in C++. As an
 external tool, we could use D.
Having the compiler communicate with a download tool supplied with the distribution seems to be a very promising approach that would address this concern.
 - By default, it ends up downloading an entire library one inferred source
 file at a time. Why? Libraries are a packaged whole. Standard behavior
 should be for libraries should be treated as such.
Fair point, though in fact the effect is that one ends up downloading exactly the modules used from that library (and potentially from others). Although it may seem that libraries are packaged as a whole, that view ignores the interdependencies across them. This proposal solves the interdependencies organically.
 - Are we abandoning zdmd now? (Or is it "dmdz"?)
It is a related topic. That project, although it has been implemented, hasn't unfortunately captured the interest of people.
 - Does it automatically *compile* the files it downloads or merely use them
 to satisfy imports?
We need to arrange things such that the downloaded files are also compiled and linked together with the project.
 - Does every project that uses libX have to download it separately? If not
 (or really even if so), how does the compiler handle different versions of
 the lib and prevent "dll hell"? Versioning seems to be an afterthought in
 this DIP - and that's a guaranteed way to eventually find yourself in dll
 hell.
Versioning is a policy matter that can, I think, be addressed within the URL structure. This proposal tries to support versioning without explicitly imposing it or standing in its way.
 - How do you tell it to "update libX"? Not by expecting the user to manually
 clear the cache, I hope.
The external tool that would work in conjunction with dmd could have such a flag.
 - With a *real* package management tool, you'd have a built-in (and
 configurable) list of central data sources.
I don't see why you can't have that with this approach too.
 If you want to use something you
 don't have installed, and it exists in one of the stores (maybe even one of
 the built-in ones), you don't have to edit *ANYTHING AT ALL*. It'll just
 grab it, no changes to your source needed at all, and any custom steps
 needed would be automatically handled. And if it was only in a data store
 that you didn't already have in your list, all you have to do is add *one*
 line. Which is just as easy as the DIP, but that *one* step will also
 suffice for any other project that needs libX - no need to add the line for
 *each* of your libX-using projects. Heck, you wouldn't even need to edit a
 file, just do "package-tool addsource http://...". The DIP doesn't even
 remotely compare.
I think it does. Clearly a command-line equivalent for the pragma needs to exist, and the appropriate pragmas can be added to dmd.conf. With the appropriate setup, a program would just issue:

import dsource.libX;

and get everything automatically.
 - I think you're severely overestimating the amount of extra dmd-invokations
 that would be needed by using an external build tool.
I'm not estimating much. It's Adam who shared impressions from actual use.
 I beleive this is
 because your idea centers around discovering one file at a time instead of
 properly handling packages at the *package* level.
The issue with working at the package level is that http does not have a protocol for listing files in a directory. However, if we arrange to support zip files, the tool could detect that a zip file is at the location of the package and download it entirely.
 Consider this:

 You tell BuildToolX to build MyApp. It looks at MyApp.config to see what
 libs it needs. It discovers LibX is needed. It fetches LibX.config, and
 finds it's dependencies. Etc, building up a dependency graph. It checks for
 any problems with the dependency graph before doing any real work (something
 the DIP can't do). Then it downloads the libs, and *maybe* runs some custom
 setup on each one. If the libs don't have any custom setup, you only have
 *one* DMD invokation (two if you use RDMD). If the libs do have any custom
 setup, and it involves running dmd, then that *only* happens the first time
 you build MyApp (until you update one of the libs, causing it's one-time
 setup to run once more).

 I think this proposal is a hasty idea that just amounts to chasing after
 "the easy way out".
I'm just trying to define a simple backend that facilitates sharing code and using shared code, without arrogating the role and merits of a more sophisticated package management tool and without standing in the way of one. Ideally, the backend should be useful to such a tool - e.g. I imagine a tool could take a plain file format and transform it into a series of pragmas directing library locations.

As always, criticism is appreciated, particularly of the kind that prompts pushing things forward - as was the case with the idea of a download tool that's a separate executable, a companion to dmd.


Thanks,

Andrei
Jun 14 2011
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
On Jun 14, 2011, at 2:56 PM, Andrei Alexandrescu wrote:
 Versioning is a policy matter that can, I think, be addressed within
 the URL structure. This proposal tries to support versioning without
 explicitly imposing it or standing in its way.
For now anyway. If we want to support using multiple versions of the same lib in an app then some more thought will have to go into how this will work.
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Sean Kelly wrote:
 If we want to support using multiple versions of the
 same lib in an app then some more thought will have to go into how
 this will work.
Versioning is absolutely trivial if the version is part of the module name. This is a better way to do it long term too; it's future proof. Download any module, no matter how old, and it will still compile. To update your code to the new version, grep for "import foo_1", replace it with "import foo_2", and recompile. This is probably less effort than actually updating your code to use version 2!
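In code, the scheme is nothing more than this (module names made up):

// pinned to the old release - keeps compiling forever:
import foo_1.database;

// opting in to the new major version is a one-line edit per import:
// import foo_2.database;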
Jun 14 2011
parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it91me$b2g$1 digitalmars.com...
 Sean Kelly wrote:
 If we want to support using multiple versions of the
 same lib in an app then some more thought will have to go into how
 this will work.
Versioning is absolutely trivial if the version is part of the module name. This is a better way to do it long term too; it's future proof. Download any module, no matter how old, and it will still compile. To update your code to the new version, do a "grep import foo_1" and replace them with "import foo_2" and recompile. This is probably less effort than actually updating your code to use version 2!
First of all, that can't be automated reliably (at least not within reason). Suppose there are two spaces. Or (heaven forbid!) a tab. Or anything else that just happens to be non-standard that you just happened to not take into account in your grep line. Such as mixin-generated imports, which are only going to be more common now that they're going to be usable inside functions. Plus, not everyone's a command-line whiz, which makes that a lot of manual tedium for them.

Second, you shouldn't need to edit code to compile against a different version of a lib. I'm not *necessarily* opposed to the idea of versions being part of the name (although it does prevent you from being able to reliably do an ordered comparison of versions unless you have a standardized naming scheme - in which case it's effectively not really part of the name anyway). But it shouldn't seep into the user code unless the user of the library wants it to.
Jun 14 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:4DF7D92A.8050606 erdani.org...
 On 6/14/11 4:38 PM, Nick Sabalausky wrote:
 - Putting it in the compiler forces it all to be written in C++. As an
 external tool, we could use D.
Having the compiler communicate with a download tool supplied with the distribution seems to be a very promising approach that would address this concern.
A two way "compiler <-> build tool" channel is messier than "build tool invoked compier", and I don't really see much benefit.
 - By default, it ends up downloading an entire library one inferred 
 source
 file at a time. Why? Libraries are a packaged whole. Standard behavior
 should be for libraries should be treated as such.
Fair point, though in fact the effect is that one ends up downloading exactly the used modules from that library and potentially others.
I really don't see a problem with that. And you'll typically end up needing most, if not all, anyway. It's very difficult to see this as an actual drawback.
 Although it may seem that libraries are packaged as a whole, that view 
 ignores the interdependencies across them. This proposal solves the 
 interdependencies organically.
How does my proposal not handle that? I think it does.
 - Are we abandoning zdmd now? (Or is it "dmdz"?)
It is a related topic. That project, although it has been implemented, hasn't unfortunately captured the interest of people.
Not surprising since there's been very little mention of it. In fact, I've been under the impression that it wasn't even finished. Is this not so? If it is done, I bet I'm not the only one that didn't know. Plus, I bet most people aren't even aware of it at all. RDMD gets trotted out and promoted *far* more often and I come across a lot of D users (usually newbies) who aren't even aware of *it*.
 - Does it automatically *compile* the files it downloads or merely use 
 them
 to satisfy imports?
We need to arrange things such that the downloaded files are also compiled and linked together with the project.
And that's awkward under the model you're proposing. But by handling package management in a separate tool, it's a non-issue.
 - Does every project that uses libX have to download it separately? If 
 not
 (or really even if so), how does the compiler handle different versions 
 of
 the lib and prevent "dll hell"? Versioning seems to be an afterthought in
 this DIP - and that's a guaranteed way to eventually find yourself in dll
 hell.
Versioning is a policy matter that can, I think, be addressed within the URL structure. This proposal tries to support versioning without explicitly imposing it or standing in its way.
That's exactly my point. If you leave it open like that, everyone will come up with their own way to do it, many will not even give it any attention at all, and most of those approaches will end up being wrong WRT avoiding dll hell. Hence, dll hell will get in and library users will end up having to deal with it. The only way to avoid it is to design it out of the system up front, *by explicitly imposing it*.
 - How do you tell it to "update libX"? Not by expecting the user to 
 manually
 clear the cache, I hope.
The external tool that would work in conjunction with dmd could have such a flag.
That's a messier solution than what I outlined.
 - With a *real* package management tool, you'd have a built-in (and
 configurable) list of central data sources.
I don't see why you can't have with this approach too.
The problem is you end up having both. One of them, the default one, is a mess and shouldn't really be used, and then the other is the one that you'd already get anyway with a real package management tool.
 If you want to use something you
 don't have installed, and it exists in one of the stores (maybe even one 
 of
 the built-in ones), you don't have to edit *ANYTHING AT ALL*. It'll just
 grab it, no changes to your source needed at all, and any custom steps
 needed would be automatically handled. And if it was only in a data store
 that you didn't already have in your list, all you have to do is add 
 *one*
 line. Which is just as easy as the DIP, but that *one* step will also
 suffice for any other project that needs libX - no need to add the line 
 for
 *each* of your libX-using projects. Heck, you wouldn't even need to edit 
 a
 file, just do "package-tool addsource http://...". The DIP doesn't even
 remotely compare.
I think it does. Clearly a command-line equivalent for the pragma needs to exist, and the appropriate pragmas can be added to dmd.conf. With the appropriate setup, a program would just issue: using dsource.libX; and get everything automatically.
The approach in the DIP encourages such things to not be used and leaves them as afterthoughts. I think this is backwards.
 - I think you're severely overestimating the amount of extra 
 dmd-invokations
 that would be needed by using an external build tool.
I'm not estimating much. It's Adam who shared impressions from actual use.
 I beleive this is
 because your idea centers around discovering one file at a time instead 
 of
 properly handling packages at the *package* level.
The issue with package-level is that http does not have a protocol for listing files in a directory. However, if we arrange to support zip files, the tool could detect that a zip file is at the location of the package and download it entirely.
There is no need to deal with individual files. Like I've said, that's the wrong level to be dealing with this anyway.
 Consider this:

 You tell BuildToolX to build MyApp. It looks at MyApp.config to see what
 libs it needs. It discovers LibX is needed. It fetches LibX.config, and
 finds it's dependencies. Etc, building up a dependency graph. It checks 
 for
 any problems with the dependency graph before doing any real work 
 (something
 the DIP can't do). Then it downloads the libs, and *maybe* runs some 
 custom
 setup on each one. If the libs don't have any custom setup, you only have
 *one* DMD invokation (two if you use RDMD). If the libs do have any 
 custom
 setup, and it involves running dmd, then that *only* happens the first 
 time
 you build MyApp (until you update one of the libs, causing it's one-time
 setup to run once more).

 I think this proposal is a hasty idea that just amounts to chasing after
 "the easy way out".
I'm just trying to define a simple backend that facilitates sharing code and using of shared code, without arrogating the role and merits of a more sophisticated package management tool and without standing in the way of one. Ideally, the backend should be useful to such a tool - e.g. I imagine a tool could take a plain file format and transform it into a series of pragmas directing library locations.
I appreciate the motivation behind it, but I see the whole approach as:

1. Not really helping a package management tool, and likely even getting in its way.

2. Encouraging people to use a dangerously ad-hoc "package management" instead of a proper fully-thought-out one.

I see this as adding more to the language/compiler in order to make the wrong things easier.
 As always, criticism is appreciated, particularly of the kind that prompts 
 pushing things forward - as was the case with the idea of a download tool 
 that's a separate executable, companion to dmd.
Maybe I'm tired, or maybe it's just the unfortunate nature of text, but I can't tell if you're saying you appreciate the criticism I've given here or implying that you want better criticism than what I've given...?
Jun 14 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 22:24:04 -0400, Nick Sabalausky <a a.a> wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:4DF7D92A.8050606 erdani.org...
 On 6/14/11 4:38 PM, Nick Sabalausky wrote:
 - Putting it in the compiler forces it all to be written in C++. As an
 external tool, we could use D.
Having the compiler communicate with a download tool supplied with the distribution seems to be a very promising approach that would address this concern.
A two way "compiler <-> build tool" channel is messier than "build tool invoked compier", and I don't really see much benefit.
It's neither. It's not a build tool, it's a fetch tool. The build tool has nothing to do with getting the modules. The drawback here is that the build tool has to interface with said fetch tool in order to do incremental builds. However, we could make an assumption that files that are downloaded are rather static, and therefore, the target doesn't "depend" on them. To override this, just do a rebuild-from-scratch on the rare occasion you have to update the files.
 - By default, it ends up downloading an entire library one inferred
 source
 file at a time. Why? Libraries are a packaged whole. Standard behavior
 should be for libraries should be treated as such.
Fair point, though in fact the effect is that one ends up downloading exactly the used modules from that library and potentially others.
I really don't see a problem with that. And you'll typically end up needing most, if not all, anyway. It's very difficult to see this as an actual drawback.
When requesting a given module, it might be that it's part of a package (I would say most definitely). The fetch tool could know to get the entire package and extract it into the cache.
 - Does every project that uses libX have to download it separately? If
 not
 (or really even if so), how does the compiler handle different versions
 of
 the lib and prevent "dll hell"? Versioning seems to be an afterthought  
 in
 this DIP - and that's a guaranteed way to eventually find yourself in  
 dll
 hell.
Versioning is a policy matter that can, I think, be addressed within the URL structure. This proposal tries to support versioning without explicitly imposing it or standing in its way.
That's exactly my point. If you leave it open like that, everyone will come up with thier own way to do it, many will not even give it any attention at all, and most of those approaches will end up being wrong WRT avoiding dll hell. Hence, dll hell will get in and library users will end up having to deal it. The only way to avoid it is to design it out of the system up from *with explicitly imposing it*.
If the proposal becomes one where the include path specifies base urls, then the build tool can specify exact versions. The cache should be responsible for making sure files named the same from different URLs do not conflict. For example:

-Ihttp://url.to.project/v1.2.3

in one project and

-Ihttp://url.to.project/v1.2.4

in another.

I still feel that specifying the url in the source is the wrong approach -- it puts too much information into the source, and any small change requires modifying source code. We don't specify full paths for local imports, so why should we specify full paths for remote ones?

-Steve
Jun 15 2011
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky:
 - By default, it ends up downloading an entire library one inferred
 source file at a time. Why? Libraries are a packaged whole.
 Standard behavior should be for libraries should be treated as
 such.
I don't agree. You don't import a library - you import a module. It's natural to just download that module and get what you need that way.
 Does every project that uses libX have to download it separately?
My approach is to download the libraries to a local subdirectory.

$ cd foo
$ dir
$ build app
$ dir
app  app.o  app.d  foo/

If you want to share a library, you can link the local subdir to a central lib dir using your operating system's features (symlinks, junctions, whatever).

I'm not sure what Andrei had in mind, but I like my approach because it's easy to implement, clear to see what it is actually doing, and packing your application for distribution is as simple as zipping up the directory. Dependencies included automatically.
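For the sharing case, the link is one command (paths illustrative; mklink may need an elevated prompt):

$ ln -s ~/d/libs/arsd arsd          # POSIX symlink into the project dir
> mklink /J arsd C:\d\libs\arsd     # NTFS junction on Windows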
 It'll just grab it, no changes to your source needed at all, and
 any custom steps needed would be automatically handled
My approach again allows a central repo, which may direct you elsewhere using standard http. It builds the default url as:

http://centraldomain.com/repository/package/module.d

I think the DIP should do this too if a liburl is not specified.
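The mapping from an unresolved import to a default URL is a one-liner; a sketch, with the domain being the placeholder above:

import std.array : replace;

// "arsd.dom" -> "http://centraldomain.com/repository/arsd/dom.d"
string defaultUrl(string moduleName)
{
    return "http://centraldomain.com/repository/"
         ~ moduleName.replace(".", "/") ~ ".d";
}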
Jun 14 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it927g$c7c$1 digitalmars.com...
 Nick Sabalausky:
 - By default, it ends up downloading an entire library one inferred
 source file at a time. Why? Libraries are a packaged whole.
 Standard behavior should be for libraries should be treated as
 such.
I don't agree. You don't import a library - you import a module. It's natural to just download that module and get what you need that way.
You import a module *from* a library. Even if you only import one module, that module is likely going to import others from the same lib, which may import others still, and chances are you'll end up needing most of the modules anyway. Also, if a library needs any special "setup" step, then this won't even work anyway. Plus I see no real benefit to being able to have a "partial" library installation.
 Does every project that uses libX have to download it separately?
My approach is to download the libraries to a local subdirectory. $ cd foo $ dir $ build app $ dir app app.o app.d foo/ If you want to share a library, you can link the local subdir to a central lib dir using your operating system's features. (symlinks, junctions, whatever)
I think a substantial number of people (*especially* windows users - it's unrealistic to expect windows users to use anything like junctions) would expect to be able to use an already-installed library without special setup for every single project that uses it. And here's a real killer: If someone downloads your lib, or the source for your app, should they *really* be expected to wire up all your lib's/app's dependencies manually? This works against the whole point of easy package management.
 doing, and packing your application for distribution is as simple
 as zipping up the directory. Dependencies included automatically.
That can't always be done, shouldn't always be done, and not everyone wants to. There *are* benefits to packages being independent, but this throws them away. Yes, there are downsides to not having dependencies included automatically, but those are already solved by a good package management system.
 It'll just grab it, no changes to your source needed at all, and
 any custom steps needed would be automatically handled
My approach again allowed a central repo, which may direct you elsewhere using standard http. It builds the default url by: http://centraldomain.com/repository/package/module.d I think the DIP should do this too if a liburl is not specified.
A central repo, per se, isn't really a good idea. What there should be is a standard built-in list of official repos (even if there's initially only one). Then others can be added. The system shouldn't have a "single point-of-failure" built-in.

I think things like apt-get and 0install are very good models for us to follow. In fact, we should probably think about whether we want to actually just *use* 0install, either outright or behind-the-scenes.
Jun 14 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 You import a module *from* a library.
I don't think libraries, aside from individual modules, should even exist in D, since you can and should put all interdependent stuff in a single file. If something is moved to a separate module, that means it has usefulness independent from the original module - if not, it wouldn't be factored out to begin with... A module might import a whole web of modules, but each one it imports would still satisfy the definition of a module - something that's useful to more than one other module.

Suppose I write a database "library". It is composed of three modules: database.d, which is the common interface, and then mysql.d and sqlite.d that implement that interface for their respective underlying engines. The only shared component is the database.d interface. Does that warrant making a library package for them? I don't think so.

Suppose a third party wants to implement access to Microsoft SQL via my same interface. If modules are the building blocks, that's easy for him. He just does "import adams.database;" and puts the file up. He doesn't have to petition for his module to be adopted by my database library while still being compatible with it. A user can then pull any one of the implementation modules: adams.mysql or adams.sqlite or other_fellows.mssql and it just works.

You could do the same thing with a library concept, but would you? Do you download a whole library just so you can implement a shared interface that is otherwise unrelated?
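As a minimal sketch of what this looks like in practice (the interface and method names below are invented for illustration; only the module names adams.database, adams.mysql and adams.sqlite come from the example), the shared interface module and one implementation module could be:

// file adams/database.d - the shared interface, the only common piece
module adams.database;

interface Database
{
    void connect(string connectionString);
    string[][] query(string sql);   // rows of column values
    void close();
}

// file adams/mysql.d - one implementation; it depends only on the
// interface module, not on any "library" package
module adams.mysql;

import adams.database;

class MySqlDatabase : Database
{
    void connect(string connectionString) { /* open a MySQL connection */ }
    string[][] query(string sql) { /* run the query */ return null; }
    void close() { /* close the connection */ }
}

A third party's other_fellows/mssql.d would do exactly the same thing: import adams.database, implement the interface, and users pick whichever implementation module they want.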
 Also, if a library needs any special "setup" step, then this
 won't even work anyway.
This is true, but I see it as a strike *against* packaged libraries, not for it.

Going back to the database interface. Suppose I only offered database.d as part of an "Adam's Database Library" package, which, since it offers mysql, lists mysql as a dependency. Then Microsoft implements my interface. Someone who wants to use Microsoft's library is told it depends on mine... which depends on mysql. So it prompts them to install mysql to use mssql! That's awful.

To fix this, you might say "library mysql depends on library database".... but, taking that to its logical conclusion, library == module anyway.

BTW, you might be thinking: if you import my mysql module, how does it handle the C library it depends on? Answer: it doesn't. That's the library user's responsibility. If he's on CentOS, he can yum install mysql. If he's on Debian, he can apt-get install mysql. A D package manager shouldn't step on the toes of existing package managers. Lord knows they have enough problems of their own without dealing with our interop.
 I think a substantial number of people (*especially* windows
 users - it's unrealistic to expect windows users to use anything
 like junctions) would expect to be able to use an already-
 installed library without special setup
 for every single project that uses it.
The download program could automatically make locally available libs just work without hitting the network too.
 A central repo per se, isn't really a good idea.
Agreed. I just like having the option there for max convenience in getting started. The central repo might just provide a list of other repos to try.
 I think things like apt-get and 0install are very good models for
 us to follow
Blargh. I often think I'm the last person people should listen to when it comes to package management because the topic always brings three words to my mind: "shitload of fuck". I've never seen one that I actually like. I've seen only two that I don't hate with the burning passion of 1,000 suns, and both of them are pretty minimal (Slackware's old tgz system and my build.d. Note: they both suck, just not as much as the alternatives)

On the other hand, this is exactly why I jump in these threads. There's some small part of me that thinks maybe, just maybe, we can be the first to create a system that's not a steaming pile of putrid dogmeat.

Some specific things I hate about the ones I've used:

1) What if I want a version that isn't in the repos? Installing a piece of software myself almost *always* breaks something since the package manager is too stupid to even realize there's a potential conflict and just does its own thing. This was one of the biggest problems with Ruby gems when I was forced to use it a few years back, and it comes up virtually every time I have to use yum. This is why I really like it only downloading a module if it's missing. If I put the module in myself, it knows not to bother with it - the compile succeeds, so there's no need to invoke the downloader at all.

2) What if I want to keep an old version for one app, but have the new version for another? This is one reason why my program defaults to local subdirectories - so there'd be no risk of stepping on other apps at all.

3) Can I run it as non-root? CPAN seemed almost decent to me until I had to use it on a client's shared host server. It failed miserably. (this was 2006, like with gems, maybe they fixed it since then.) If it insists on installing operating system files as a dependency to my module, it's evil.

4) Is it going to suddenly stop working if I leave it for a few months? It's extremely annoying to me when every command just complains about 404 (just run yum update! if it's so easy, why doesn't the stupid thing do it itself?). This is one reason why I really want an immutable repository. Append to it if you want, but don't invalidate my slices plz.

Another one of my big problems with Ruby gems was that it was extremely painful to install on other operating systems. At the time, installing it on FreeBSD and Solaris wasted way too much of my time. A good package manager should be OS agnostic in installation, use, and implementation. Its job is to fetch me some D stuff to use. Leave the operating system related stuff to me. I will not give it root under any circumstances - a compiler and build tool has no legitimate requirement for it. (btw if it needs root because some user wanted a system wide thing, that's ok. Just never *require* it.)
Jun 14 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:it9ch1$140r$1 digitalmars.com...
 Nick Sabalausky wrote:
 You import a module *from* a library.
I don't think libraries, aside from individual modules, should even exist in D, since you can and should put all interdependent stuff in a single file.
Well, even if that's a valid point, the problem still remains that many people don't feel that way and many (most?) projects don't work that way. Should we just leave those people/projects out in the dark? Your approach, on the other hand, can be achieved by making each module a separate library.
 You could do the same thing with a library concept, but would you?
 Do you download a whole library just so you can implement a shared
 interface that is otherwise unrelated?
Libraries are small and disk space/bandwidth is cheap. And note that that's
 Also, if a library needs any special "setup" step, then this
 won't even work anyway.
This is true, but I see it as a strike *against* packaged libraries, not for it.
Even if it's inappropriate for most libraries, such as your example, I do think there are good uses for it. But regardless, operating on a per-lib basis instead of per-file doesn't *force* us to support such a feature if we decided we didn't want it.
 I think a substantial number of people (*especially* windows
 users - it's unrealistic to expect windows users to use anything
 like junctions) would expect to be able to use an already-
 installed library without special setup
 for every single project that uses it.
The download program could automatically make locally available libs just work without hitting the network too.
I'm just opposed to "duplicate every lib in every project that uses it" being the default.
 I think things like apt-get and 0install are very good models for
 us to follow
Blargh. I often think I'm the last person people should listen to when it comes to package management because the topic always brings three words to my mind: "shitload of fuck". I've never seen one that I actually like. I've seen only two that I don't hate with the burning passion of 1,000 suns, and both of them are pretty minimal (Slackware's old tgz system and my build.d. Note: they both suck, just not as much as the alternatives)

On the other hand, this is exactly why I jump in these threads. There's some small part of me that thinks maybe, just maybe, we can be the first to create a system that's not a steaming pile of putrid dogmeat.

Some specific things I hate about the ones I've used:

1) What if I want a version that isn't in the repos? Installing a piece of software myself almost *always* breaks something since the package manager is too stupid to even realize there's a potential conflict and just does its own thing. This was one of the biggest problems with Ruby gems when I was forced to use it a few years back, and it comes up virtually every time I have to use yum. This is why I really like it only downloading a module if it's missing. If I put the module in myself, it knows not to bother with it - the compile succeeds, so there's no need to invoke the downloader at all.

2) What if I want to keep an old version for one app, but have the new version for another? This is one reason why my program defaults to local subdirectories - so there'd be no risk of stepping on other apps at all.

3) Can I run it as non-root? CPAN seemed almost decent to me until I had to use it on a client's shared host server. It failed miserably. (this was 2006, like with gems, maybe they fixed it since then.) If it insists on installing operating system files as a dependency to my module, it's evil.

4) Is it going to suddenly stop working if I leave it for a few months? It's extremely annoying to me when every command just complains about 404 (just run yum update! if it's so easy, why doesn't the stupid thing do it itself?). This is one reason why I really want an immutable repository. Append to it if you want, but don't invalidate my slices plz.

Another one of my big problems with Ruby gems was that it was extremely painful to install on other operating systems. At the time, installing it on FreeBSD and Solaris wasted way too much of my time. A good package manager should be OS agnostic in installation, use, and implementation. Its job is to fetch me some D stuff to use. Leave the operating system related stuff to me. I will not give it root under any circumstances - a compiler and build tool has no legitimate requirement for it. (btw if it needs root because some user wanted a system wide thing, that's ok. Just never *require* it.)
These are all very good points that I think we should definitely keep in mind when designing this system.

Also, have you looked at 0install? I think it may match a lot of what you say you want here (granted I've never actually used it). It doesn't require admin to install things, for instance. And it keeps different versions of the same lib instead of replacing version N with version N+1.

My point about apt-get and 0install being good models for us to follow was really referring more to: Ok, I want to install "XYZ", so I tell it to install XYZ and the damn thing *just works*, I don't have to fuck around with the dependencies myself, the machine does it. I don't have to give a shit what requires what, or what version. The damn thing just does it. In fact, that was one of the main reasons I gave up on Linux the first time I tried it. Installing anything was idiotically convoluted. Despite any shortcomings they may have, things like apt-get at least make the situation tolerable.
Jun 14 2011
prev sibling parent Don <nospam nospam.com> writes:
Adam D. Ruppe wrote:
 Nick Sabalausky wrote:
 I think things like apt-get and 0install are very good models for
 us to follow
Blargh. I often think I'm the last person people should listen to when it comes to package management because the topic always brings three words to my mind: "shitload of fuck". I've never seen one that I actually like. I've seen only two that I don't hate with the burning passion of 1,000 suns, and both of them are pretty minimal (Slackware's old tgz system and my build.d. Note: they both suck, just not as much as the alternatives) On the other hand, this is exactly why I jump in these threads. There's some small part of me that thinks maybe, just maybe, we can be the first to create a system that's not a steaming pile of putrid dogmeat. Some specific things I hate about the ones I've used:
[snip]

This seems to me to be very similar to the situation with search engines prior to Google. Remember AltaVista, where two out of every three search results were a broken link?

Seems to me that what's ultimately needed is a huge compatibility matrix, containing every version of every library, and its compatibility with every version of every other library. Or something like that. A package manager shouldn't silently use packages which have never been used with each other before. It's a very difficult problem, I think, but at least package owners could manually supply a list of other packages they've tested with.
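To make the idea concrete, such a compatibility matrix could start out as nothing more than a record of which version pairs have been seen working together - a minimal sketch, with all names invented for illustration:

// one entry per (package,version) pair that has been tested together
bool[string] tested;

string key(string nameA, string verA, string nameB, string verB)
{
    return nameA ~ "/" ~ verA ~ "+" ~ nameB ~ "/" ~ verB;
}

void markTested(string nameA, string verA, string nameB, string verB)
{
    tested[key(nameA, verA, nameB, verB)] = true;
    tested[key(nameB, verB, nameA, verA)] = true;
}

bool knownCompatible(string nameA, string verA, string nameB, string verB)
{
    return (key(nameA, verA, nameB, verB) in tested) !is null;
}

A package manager could then warn (or refuse) when asked to combine two packages for which knownCompatible returns false - exactly the "never been used with each other before" case above.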
Jun 15 2011
prev sibling next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
More thoughts:

* The compiler should be a compiler
* Adding this makes the compiler a downloader and a compiler
* If the compiler is a downloader, it should also be a builder and a package manager
* Compiler now contains more C++ code that isn't to do with compiling.

The DIP mentions speed as a reason to integrate it into the compiler rather than have it separate. How about making dmd a bit more modular if you want the speed of having it in the compiler? Make a dmd library, the compiler can just be a main wrapper around it. This way:

* The downloader/builder/package manager can be separate
* The tool can be written in D, and still have the speed you want
* It paves the way for other tools that could use dmd as a library

The way I see this proposal is as a response to cpan/gem/pecl etc, a much needed package manager for D. I don't believe integrating it into the compiler is the right way to go, nor do I believe that a pragma is the right way to do it - I even refuse to use pragma(lib) as it doesn't work with incremental compilation - this wouldn't either.

-- 
Robert
http://octarineparrot.com/
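A purely hypothetical sketch of the "dmd as a library plus a thin main wrapper" idea (dmdlib and its compile function are invented names - no such library exists today):

// hypothetical main wrapper; the real compiler logic would live in the library
import dmdlib;   // invented: the compiler front end exposed as a D library

int main(string[] args)
{
    // the compiler executable is nothing more than a wrapper
    return dmdlib.compile(args[1 .. $]);
}

A separate downloader/builder/package manager could link against the same library and reuse its import resolution, without the compiler binary having to know anything about networking.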
Jun 14 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/14/11 6:21 PM, Robert Clipsham wrote:
 On 14/06/2011 14:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
More thoughts:

* The compiler should be a compiler
* Adding this makes the compiler a downloader and a compiler
* If the compiler is a downloader, it should also be a builder and a package manager
* Compiler now contains more C++ code that isn't to do with compiling.
All of these issues seem to be addressed by the emerging idea that dmd should cooperate with a companion binary that effects the downloading.
 The DIP mentions speed as a reason to integrate it into the compiler
 rather than have it separate. How about making dmd a bit more modular if
 you want the speed of having it in the compiler? Make a dmd library, the
 compiler can just be a main wrapper around it. This way:

 * The downloader/builder/package manager can be separate
 * The tool can be written in D, and still have the speed you want
 * It paves the way for other tools that could use dmd as a library

 The way I see this proposal is as a response to cpan/gem/pecl etc, a
 much needed package manager for D. I don't believe integrating it into
 the compiler is the right way to go, nor do I believe that a pragma is
 the right way to do it - I even refuse to use pragma(lib) as it doesn't
 work with incremental compilation - this wouldn't either.
The notion that the compiler communicates pragmas to its separate package manager during compilation - would that float your boat? Andrei
Jun 14 2011
prev sibling next sibling parent Ary Manzana <ary esperanto.org.ar> writes:
On 6/14/11 8:53 PM, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
I think something like CPAN or RubyGems should be done in D, now :-) I think it would give a big boost to D in many ways:

* Have a repository of libraries searchable by command line and retrievable by command line. Many library providers can be registered, like dsource or others.

* Then you can have a program that downloads all these libraries, one by one, and sees if they compile, link, etc., correctly. If not, you've broken some of their code. You can choose to break it and notify them, or just not to break it. A problem I see in D now is that it's constantly changing (ok, the spec is frozen, but somehow old libraries stop working) and this would give a lot of stability to D.

But please, don't reinvent the wheel. Solutions for this already exist and work pretty well.
Jun 15 2011
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I put this as replies in several threads, but I'll throw it out there as its own thread:

* You already agree that having the fetching done by a separate program (possibly written in d) makes the solution cleaner (i.e. you are not infiltrating the code that actually does compiling with code that does network fetching).

* I think specifying the entire url in the pragma is akin to specifying the full path of a given module on your local disk. I think it's not the right place for it; the person who is building the code should be responsible for where the modules come from, and import should continue to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same mechanism that configures dmd on how to get modules -- the include path. For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports dcollections.TreeMap
3. Using its non-external paths, it cannot find the module.
4. It calls:
   dget -Ihttp://www.dsource.org/projects/dcollections/import dcollections.TreeMap
5. dget checks its internal cache to see if the file dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
   a. an entire package which contains the requested import (preferred)
   b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it doesn't have to download it again (can be stored globally or local to the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.

On a second run to dmd, it would go through the same process, but dget succeeds on step 5 of finding it in the cache and pipes it to stdout.

Some issues with this scheme:

1. dependency checking would be difficult for a build tool (like make) for doing incremental builds. However, traditionally one does not specify standard library files as dependencies, so downloaded files would probably be under this same category. I.e. if you need to rebuild, you'd have to clear the cache and do a make clean (or equivalent). Another option is to have dget check to see if the file on the server has been modified.

2. It's possible that dget fetches files one at a time, which might be very slow (on the first build). However, one can trigger whole package downloads easily enough (for example, by making the include path entry point at a zip file or tarball). dget should be smart enough to handle extracting packages.

I can't really think of any other issues.

-Steve
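A rough sketch of the dget behavior described in steps 4-8, for illustration only (dget does not exist; the function name, the cache layout, and the use of std.net.curl for the HTTP fetch are all assumptions, not part of the DIP):

import std.array : replace;
import std.file : exists, mkdirRecurse, readText;
import std.net.curl : download;   // assumed HTTP client; any fetcher would do
import std.path : buildPath, dirName;
import std.stdio : stdout;

// Resolve a module such as "dcollections.TreeMap" against one remote include
// path, using a local cache keyed by that URL, and pipe the source to stdout
// for dmd to read.
int fetchModule(string baseUrl, string moduleName, string cacheDir)
{
    string relPath = moduleName.replace(".", "/") ~ ".d";
    string cached  = buildPath(cacheDir, relPath);    // cacheDir derived from baseUrl

    if (!exists(cached))                              // step 5: not in the cache yet
    {
        mkdirRecurse(dirName(cached));
        download(baseUrl ~ "/" ~ relPath, cached);    // step 6b: fetch the single file
    }

    stdout.write(readText(cached));                   // step 8: hand the source to dmd
    return 0;                                         // zero tells dmd the lookup succeeded
}

The package case (step 6a) and the staleness check from issue 1 would sit on top of the same cache lookup.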
Jun 15 2011
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I put this as replies in several threads, but I'll throw it out there as its own thread:

* You already agree that having the fetching done by a separate program (possibly written in d) makes the solution cleaner (i.e. you are not infiltrating the code that actually does compiling with code that does network fetching).
I agree.
 * I think specifying the entire url in the pragma is akin to specifying
 the full path of a given module on your local disk. I think it's not the
 right place for it, the person who is building the code should be
 responsible for where the modules come from, and import should continue
 to specify the module relative to the include path.
I understand. It hasn't been rare that I would have preferred to specify an -I equivalent through a pragma in my D programs. Otherwise all of a sudden I needed to have a more elaborate dmd/rdmd line, and then I thought, heck, I need a script or makefile or a dmd.conf to build this simple script... I don't think one is good and the other is bad. Both have their uses.

BTW, Perl and Python (and probably others) have a way to specify paths for imports.

http://www.perlhowto.com/extending_the_library_path
http://stackoverflow.com/questions/279237/python-import-a-module-from-a-folder
 * A perfect (IMO) way to configure the fetch tool is by using the same
 mechanism that configures dmd on how to get modules -- the include path.
 For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler
 or put into the dmd.conf.
HTTP is not a filesystem so the mechanism must be different. I added a section "Command-line equivalent":

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11#section10

My concern about using cmdline/conf exclusively remains. There must be a way to specify dependencies where they belong - with the source. That is _literally_ where they belong!

One additional problem is one remote library that depends on another. You end up needing to add K URLs where K is the number of dependent libraries. The process of doing so will be mightily annoying - repeated failure to compile and RTFMs.
 * DMD already has a good mechanism to specify configuration and you
 would barely have to change anything internally.

 Here's how it would work. I'll specify how it goes from command line to
 final (note the http path is not a valid path, it's just an example):

 dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

 1. dmd recognizes the url pattern and stores this as an 'external' path
 2. dmd reads the file testproj.d and sees that it imports
 dcollections.TreeMap
 3. Using it's non-external paths, it cannot find the module.
 4. It calls:
 dget -Ihttp://www.dsource.org/projects/dcollections/import
 dcollections.TreeMap
 5. dget checks its internal cache to see if the file
 dcollections/TreeMap.[d|di] already exists -- not found
 6. dget uses internal logic to generate a request to download either
 a. an entire package which contains the requested import (preferred)
 b. just the specific file dcollections/TreeMap.d
 7. Using the url as a key, it stores the TreeMap.d file in a cache so it
 doesn't have to download it again (can be stored globally or local to
 the user/project)
 8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
 9. dmd finishes compiling.
Not so fast. What if dcollections depends on stevesutils, to be found on http://www.stevesu.ti/ls and larspath, to be found on http://la.rs/path? The thing will fail to compile, and there will be no informative message on what to do next. Andrei
Jun 15 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.
Allow me to repeat: the scheme as you mention it is unable to figure out and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest. The DIP takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files.

Separating that information from source files has two liabilities. First, it breaks the whole transitivity thing. Second, it adds yet another itsy-bitsy pellet of metadata/config/whatevs files that need to be minded. I just don't see the advantage of imposing that. Andrei
Jun 15 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 10:38:28 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.
Allow me to repeat: the scheme as you mention it is unable to figure and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest. The dip takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files. Separating that information from source files has two liabilities. First, it breaks the whole transitivity thing. Second, it adds yet another itsy-bitsy pellet of metadata/config/whatevs files that need to be minded. I just don't see the advantage of imposing that.
Yes, these are good points. But I think Dmitry brought up good points too (how do you specify that TreeMap.d needs to be compiled too?).

One possible solution is a central repository of code. So basically, you can depend on other projects as long as they are sanely namespaced and live under one include path. I think dsource should provide something like this. For example:

http://www.dsource.org/import

then if you wanted dcollections.TreeMap, the import would be:

http://www.dsource.org/import/dcollections/TreeMap.d

Of course, that still doesn't solve Dmitry's problem. We need to think of a way to do that too. Still thinking....

-Steve
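A tiny sketch of the mapping this implies (the function name is invented for illustration; the URLs are the ones from the example above):

import std.array : replace;

// map a fully qualified module name onto a base include-path URL
string moduleToUrl(string baseUrl, string moduleName)
{
    return baseUrl ~ "/" ~ moduleName.replace(".", "/") ~ ".d";
}

unittest
{
    assert(moduleToUrl("http://www.dsource.org/import", "dcollections.TreeMap")
           == "http://www.dsource.org/import/dcollections/TreeMap.d");
}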
Jun 15 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:itagdr$29mt$1 digitalmars.com...
 On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.
Allow me to repeat: the scheme as you mention it is unable to figure and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest. The dip takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files.
Dependency information is already in the source: The "import" statement. The actual path to the dependencies does not belong in the source file - that *is* a configuration matter, and cramming it into the source only makes configuring harder.
 Separating that information from source files has two liabilities. First, 
 it breaks the whole transitivity thing.
I think that's solvable.
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 3:47 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:itagdr$29mt$1 digitalmars.com...
 On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.
Allow me to repeat: the scheme as you mention it is unable to figure and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest. The dip takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files.
Dependency information is already in the source: The "import" statement. The actual path to the depndencies does not belong in the source file - that *is* a configuration matter, and cramming it into the source only makes configuring harder.
Why? I mean I can't believe it just because you are saying it. On the face of it, it seems that, on the contrary, there's no more need for crummy little configuration file definition, discovery, adjustment, parsing, etc. Clearly such files are needed in certain situations, but I see no reason why they must be the only way to go. Andrei
Jun 15 2011
parent "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:itb6os$161f$1 digitalmars.com...
 On 6/15/11 3:47 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:itagdr$29mt$1 digitalmars.com...
 On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.
Allow me to repeat: the scheme as you mention it is unable to figure and load dependent remote libraries for remote libraries. It's essentially a flat scheme in which you know only the top remote library but nothing about the rest. The dip takes care of that by using transitivity and by relying on the presence of dependency information exactly where it belongs - in the dependent source files.
Dependency information is already in the source: The "import" statement. The actual path to the depndencies does not belong in the source file - that *is* a configuration matter, and cramming it into the source only makes configuring harder.
Why? I mean I can't believe it just because you are saying it. On the face of it, it seems that on the contrary, there's no more need for crummy little configuration files definition, discovery, adjustment, parsing, etc. Clearly such are needed in certain situations but I see no reason on why they must be the only way to go.
I do have reasons, but TBH I really don't have any more time or energy for these uphill debates right now.
Jun 15 2011
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 15.06.2011 17:33, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I put this as replies in several threads, but I'll throw it out there as its own thread:

* You already agree that having the fetching done by a separate program (possibly written in d) makes the solution cleaner (i.e. you are not infiltrating the code that actually does compiling with code that does network fetching).

* I think specifying the entire url in the pragma is akin to specifying the full path of a given module on your local disk. I think it's not the right place for it, the person who is building the code should be responsible for where the modules come from, and import should continue to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same mechanism that configures dmd on how to get modules -- the include path. For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d
Now it's abundantly clear that dmd should have rdmd's 'make' functionality built-in. Otherwise you'd have to specify TreeMap.d (or library) on the command line.
 1. dmd recognizes the url pattern and stores this as an 'external' path
 2. dmd reads the file testproj.d and sees that it imports 
 dcollections.TreeMap
 3. Using it's non-external paths, it cannot find the module.
 4. It calls:
     dget -Ihttp://www.dsource.org/projects/dcollections/import 
 dcollections.TreeMap
 5. dget checks its internal cache to see if the file 
 dcollections/TreeMap.[d|di] already exists -- not found
 6. dget uses internal logic to generate a request to download either
    a. an entire package which contains the requested import (preferred)
    b. just the specific file dcollections/TreeMap.d
 7. Using the url as a key, it stores the TreeMap.d file in a cache so 
 it doesn't have to download it again (can be stored globally or local 
 to the user/project)
 8. Pipes the file to stdout, dmd reads the file, and returns 0 for 
 success
 9. dmd finishes compiling.

 On a second run to dmd, it would go through the same process, but dget 
 succeeds on step 5 of finding it in the cache and pipes it to stdout.

 Some issues with this scheme:

 1. dependency checking would be difficult for a build tool (like make) 
 for doing incremental builds.  However, traditionally one does not 
 specify standard library files as dependencies, so downloaded files 
 would probably be under this same category.  I.e. if you need to 
 rebuild, you'd have to clear the cache and do a make clean (or 
 equivalent).  Another option is to have dget check to see if the file 
 on the server has been modified.

 2. It's possible that dget fetches files one at a time, which might be 
 very slow (on the first build).  However, one can trigger whole 
 package downloads easily enough (for example, by making the include 
 path entry point at a zip file or tarball).  dget should be smart 
 enough to handle extracting packages.

 I can't really think of any other issues.

 -Steve
dmd should be able to run multiple instances of dget without any conflicts (also parallel builds etc.). Other than that it looks quite good to me.

P.S. It seems like dget is, in fact, dcache :)

-- 
Dmitry Olshansky
Jun 15 2011
prev sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
On 6/15/11 8:33 PM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I put this as replies in several threads, but I'll throw it out there as its own thread:

* You already agree that having the fetching done by a separate program (possibly written in d) makes the solution cleaner (i.e. you are not infiltrating the code that actually does compiling with code that does network fetching).

* I think specifying the entire url in the pragma is akin to specifying the full path of a given module on your local disk. I think it's not the right place for it, the person who is building the code should be responsible for where the modules come from, and import should continue to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same mechanism that configures dmd on how to get modules -- the include path. For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports dcollections.TreeMap
3. Using it's non-external paths, it cannot find the module.
4. It calls:
   dget -Ihttp://www.dsource.org/projects/dcollections/import dcollections.TreeMap
5. dget checks its internal cache to see if the file dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
   a. an entire package which contains the requested import (preferred)
   b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it doesn't have to download it again (can be stored globally or local to the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.
So if I have a library with three modules, a.d, b.d, c.d, which depend on another library, I should put that pragma(importpath) on each of them with the same url? Or maybe I could create a fake d file with that pragma, and make the three modules import it so I just specify it once.
Jun 15 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 23:23:43 -0400, Ary Manzana <ary esperanto.org.ar>  
wrote:

 On 6/15/11 8:33 PM, Steven Schveighoffer wrote:
 On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I put this as replies in several threads, but I'll throw it out there as its own thread:

* You already agree that having the fetching done by a separate program (possibly written in d) makes the solution cleaner (i.e. you are not infiltrating the code that actually does compiling with code that does network fetching).

* I think specifying the entire url in the pragma is akin to specifying the full path of a given module on your local disk. I think it's not the right place for it, the person who is building the code should be responsible for where the modules come from, and import should continue to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same mechanism that configures dmd on how to get modules -- the include path. For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports dcollections.TreeMap
3. Using it's non-external paths, it cannot find the module.
4. It calls:
   dget -Ihttp://www.dsource.org/projects/dcollections/import dcollections.TreeMap
5. dget checks its internal cache to see if the file dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
   a. an entire package which contains the requested import (preferred)
   b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it doesn't have to download it again (can be stored globally or local to the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.
So if I have a library with three modules, a.d, b.d, c.d, which depend on another library, I should put that pragma(importpath) on each of them with the same url?
With the updated proposal (please see the DIP now), you can do -I to specify the import path on the command line. Otherwise, yes, you have to duplicate it.
 Or maybe I could create a fake d file with that pragma, and make the  
 three modules import it so I just specify it once.
As long as the fake d file imports the files you need publicly, it should be pulled in, yes. But the import pragma only affects imports from the current file. I think that seems right, because you don't want to worry about importing files that might affect your import paths. I look at it like version statements. -Steve
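For illustration, the "fake d file" idea could look like the sketch below (the exact pragma spelling is whatever the DIP settles on; pragma(importpath, "<url>") is assumed here, and the module name and URL are just examples taken from earlier in the thread):

// deps.d - carries the remote import path and publicly imports what the
// rest of the library needs, since the pragma only affects imports made
// from this file
module deps;

pragma(importpath, "http://www.dsource.org/projects/dcollections/import");

public import dcollections.TreeMap;

a.d, b.d and c.d then simply "import deps;" and pick up dcollections.TreeMap transitively.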
Jun 15 2011
prev sibling next sibling parent reply David Gileadi <gileadis NSPMgmail.com> writes:
On 6/14/11 6:53 AM, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I keep thinking that if we build a separate dget, dmd could call it even if there weren't a URL embedded in the source. If dget had a list of central repositories then it could simply look in them for the package/module and compilation would magically work with or without a pragma.

In any case I suspect that a more formal versioning system is needed. One way of supporting versions would be to make dget aware of source control systems like svn, mercurial and git which support tags. The pragma could support source control URLs, and could also include an optional version. dget could be aware of common source control clients, and could try calling them if installed, looking for the code tagged with the provided version. If no version were specified then head/master would be used.
Jun 15 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-15 17:37, David Gileadi wrote:
 On 6/14/11 6:53 AM, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.
I keep thinking that if we build a separate dget, dmd could call it even if there weren't a URL embedded in the source. If dget had a list of central repositories then it could simply look in them for the package/module and compilation would magically work with or without a pragma. In any case I suspect that a more formal versioning system is needed. One way of supporting versions would be to make dget aware of source control systems like svn, mercurial and git which support tags. The pragma could support source control URLs, and could also include an optional version. dget could be aware of common source control clients, and could try calling them if installed, looking for the code tagged with the provided version. If no version were specified then head/master would be used.
If you just want to clone a repository from, e.g., GitHub or Bitbucket, you can just do a simple HTTP download; no need for an SCM client.

-- 
/Jacob Carlborg
Jun 17 2011
prev sibling next sibling parent reply Mike Wey <mike-wey example.com> writes:
First i didn't read all of the posts in this thread, so some of these 
might already be answered.

In the first paragraph the DIP talks about Automatic downloading of 
*libraries* while all the posts here talk about downloading files.
This is also reflected in the "Package case" paragraph since the 
compiler / separate tool will first try to download a .di file.
Which generally is a d import or header file, which doesn't need to 
include the implementation, so the compiled library should also be 
downloaded or linking would fail, right?

Also the proposal doesn't do anything with versioning; while larger
updates will probably get a different url, bug fixes might still
introduce regressions that silently break an application that uses the
library. And now you'll have to track down which library introduced the
bug, and, more importantly, your app broke overnight even though you
didn't change anything (other than recompiling).

To find out how downloading the files would work i did some tests with GtkD.

Building GtkD itself takes 1m56.
Building a Helloworld app that uses the prebuilt library takes 0m01.

The Helloworld app needs 133 files from GtkD.
Building the app and the files it needs takes 0m24.

The source of the HelloWord application can be found here:
http://www.dsource.org/projects/gtkd/browser/trunk/demos/gtk/HelloWorld.d

-- 
Mike Wey
Jun 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/11 3:48 PM, Mike Wey wrote:
 First i didn't read all of the posts in this thread, so some of these
 might already be answered.

 In the first paragraph the DIP talks about Automatic downloading of
 *libraries* while all the posts here talk about downloading files.
 This is also reflected in the "Package case" paragraph since the
 compiler / separate tool will first try to download a .di file.
 Which generally is a d import or header file, which doesn't need to
 include the implementation, so the compiled library should also be
 downloaded or linking would fail, right?
That is correct. We need to address the scenario in which a .di file requires the existence of a .a/.lib file.
 Also the proposal doesn't do anything with versioning, while larger
 updates will probably get a different url, bug fixes might still
 introduce regressions that silently break an application that uses the
 library.
I think this is a policy matter that depends on the URLs published by the library writer.
 And now you'll have to track down witch library introduced the
 bug, and more importantly your app broke overnight and while you didn't
 change anything. (other that recompiling)

 To find out how downloading the files would work i did some tests with
 GtkD.

 Building GtkD itself takes 1m56.
 Building an Helloworld app that uses the prebuild library takes 0m01.

 The Helloworld app need 133 files from GtkD.
 Building the app and the files it needs takes 0m24.

 The source of the HelloWord application can be found here:
 http://www.dsource.org/projects/gtkd/browser/trunk/demos/gtk/HelloWorld.d
Thanks for the measurements. So my understanding is that the slow helloworld essentially compiles those 133 files from GtkD in addition to helloworld itself? Thanks, Andrei
Jun 15 2011
parent Mike Wey <mike-wey example.com> writes:
On 06/15/2011 11:10 PM, Andrei Alexandrescu wrote:
 On 6/15/11 3:48 PM, Mike Wey wrote:
 First i didn't read all of the posts in this thread, so some of these
 might already be answered.

 In the first paragraph the DIP talks about Automatic downloading of
 *libraries* while all the posts here talk about downloading files.
 This is also reflected in the "Package case" paragraph since the
 compiler / separate tool will first try to download a .di file.
 Which generally is a d import or header file, which doesn't need to
 include the implementation, so the compiled library should also be
 downloaded or linking would fail, right?
That is correct. We need to address the scenario in which a .di file requires the existence of a .a/.lib file.
 Also the proposal doesn't do anything with versioning, while larger
 updates will probably get a different url, bug fixes might still
 introduce regressions that silently break an application that uses the
 library.
I think this is a policy matter that depends on the URLs published by the library writer.
But a different url for every bugfix would be difficult to maintain.
 And now you'll have to track down witch library introduced the
 bug, and more importantly your app broke overnight and while you didn't
 change anything. (other that recompiling)

 To find out how downloading the files would work i did some tests with
 GtkD.

 Building GtkD itself takes 1m56.
 Building an Helloworld app that uses the prebuild library takes 0m01.

 The Helloworld app need 133 files from GtkD.
 Building the app and the files it needs takes 0m24.

 The source of the HelloWord application can be found here:
 http://www.dsource.org/projects/gtkd/browser/trunk/demos/gtk/HelloWorld.d
Thanks for the measurements. So my understanding is that the slow helloworld essentially compiles those 133 files from GtkD in addition to helloworld itself?
Yes, that's correct.
 Thanks,

 Andrei
-- Mike Wey
Jun 15 2011
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 15 Jun 2011 14:37:29 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 6/15/11 12:35 PM, Steven Schveighoffer wrote:
 I propose the following:
Excellent. I'm on board with everything. Could you please update the DIP reflecting these ideas?
Updated, also added it to the DIP index. -Steve
Jun 15 2011
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others' ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.

-- 
/Jacob Carlborg
Jun 17 2011
next sibling parent David Nadlinger <see klickverbot.at> writes:
On 6/17/11 6:15 PM, Jacob Carlborg wrote:
 Instead of complaining about others ideas (I'll probably do that as well
 :) ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
Sorry, I just wanted to fix the headline, but that changed the URL as well. Now at: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D David
Jun 17 2011
prev sibling next sibling parent reply Jose Armando Garcia <jsancio gmail.com> writes:
On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg <doob me.com> wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as well :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
Jun 17 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>  wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as well :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run? -- /Jacob Carlborg
Jun 17 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>  wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as well 
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
Jun 17 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files

Or maybe it has to be:

Orb.create((Orb orb) {
    orb.version_("1.0.0");
    orb.author("Jacob Carlborg");
    orb.type(Type.library);
    orb.imports(["a.d", "b.di"]); // an array of import files
});

I think it's unnecessarily verbose. BTW, DMD, Phobos and druntime are forcing makefiles on people (I hate makefiles). DSSS forced an INI-similar syntax on people.

If the config/spec files should be in D then, as far as I know, the tool needs to:

1. read the file
2. add a main method
3. write a new file
4. compile the new file
5. run the resulting binary

This seems very unnecessary to me. Unnecessary IO, unnecessary compilation, unnecessary processes (two new processes). The only thing this will do is slow everything down.

-- 
/Jacob Carlborg
Jun 18 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:iti310$2r4r$1 digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as 
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files

Or maybe it has to be:

Orb.create((Orb orb) {
    orb.version_("1.0.0");
    orb.author("Jacob Carlborg");
    orb.type(Type.library);
    orb.imports(["a.d", "b.di"]); // an array of import files
});

I think it's unnecessary verbose.
I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files

And yes, I think these would be better simply because they're in D. The user doesn't have to switch languages.
 BTW, DMD, Phobos and druntime is forcing makefiles on people (I hate 
 makefiles).
I hate makefiles too, but that's not an accurate comparison:

1. On Windows, DMD comes with the make needed, and on Linux everyone already has GNU make. With Orb/Ruby, many people will have to go and download Ruby and install it.

2. People who *use* DMD to build their software *never* have to read or write a single line of makefile. *Only* people who modify the process of building DMD/Phobos/druntime need to do that. But anyone (or at least most people) who uses Orb to build their software will have to write Ruby. It would only be comparable if Orb only used Ruby to build Orb itself.
 DSSS forced an INI-similar syntax on people.
INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to be perfectly fair).
 If the config/spec files should be in D then, as far as I know, the tool 
 needs to:

 1. read the file
 2. add a main method
 3. write a new file
 4. compile the new file
 5. run the resulting binary
More like:

1. compile the pre-existing main-wrapper:

// main_wrapper.d
void main()
{
    mixin(import("orbconf.d"));
}

like this:

$ dmd main_wrapper.d -J{path containing user's "orbconf.d"}

If the user specifies a filename other than the standard name, then it's still not much more:

// main_wrapperB.d
void main()
{
    mixin(import("std_orbconf.d"));
}

Then write std_orbconf.d:

// std_orbconf.d
mixin(import("renamed_orbconf.d"));

$ dmd main_wrapperB.d -J{path containing user's "renamed_orbconf.d"} -J{path containing "std_orbconf.d"}

Also, remember that this should only need to be rebuilt if renamed_orbconf.d (or something it depends on) has changed. So you can do like rdmd does: call dmd to *just* get the deps of the main_wrapper.d+orbconf.d combination (which is much faster than an actual build) and only rebuild if they've changed - which won't be often. And again, even this is only needed when the user isn't using the standard config file name.

2. run the resulting binary
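A minimal sketch of the "only rebuild when something changed" check mentioned above, in the spirit of what rdmd does; the idea of comparing the config file against a cached wrapper binary, and the path names, are assumptions for the example:

import std.file : exists, timeLastModified;

// Rebuild the config wrapper only if the cached binary is missing or older
// than the user's config file.
bool needsRebuild(string configPath, string cachedBinary)
{
    return !exists(cachedBinary)
        || timeLastModified(configPath) > timeLastModified(cachedBinary);
}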
 This seems very unnecessary to me. Unnecessary IO, unnecessary 
 compilation, unnecessary processes (two new processes). The only thing 
 this will do is slowing down everything.
1. The amount of extra stuff is fairly minimal. *Especially* in the majority of cases where the user uses the standard name ("orbconf.d" or whatever you want to call it). 2. Using Ruby will slow things down, too. It's not exactly known for being a language that's fast to compile&run on the level of D.
Jun 18 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/18/2011 02:35 PM, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:iti310$2r4r$1 digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>   wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>    wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
    From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
So you prefer this, in favor of the Ruby syntax: version_("1.0.0"); author("Jacob Carlborg"); type(Type.library); imports(["a.d", "b.di"]); // an array of import files Or maybe it has to be: Orb.create((Orb orb) { orb.version_("1.0.0"); orb.author("Jacob Carlborg"); orb.type(Type.library); orb.imports(["a.d", "b.di"]); // an array of import files }); I think it's unnecessary verbose.
I'd probably consider something more like: orb.ver = "1.0.0"; orb.author = "Jacob Carlborg"; orb.type = Type.library; orb.imports = ["a.d", "b.di"]; // an array of import files And yes, I think these would be better simply because they're in D. The user doesn't have to switch languages.
Just to add an opinion - I think doing this work in D would foster creative uses of the language and be beneficial for improving the language itself and its standard library. Andrei
Jun 18 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-18 22:02, Andrei Alexandrescu wrote:
 On 06/18/2011 02:35 PM, Nick Sabalausky wrote:
 I'd probably consider something more like:

 orb.ver = "1.0.0";
 orb.author = "Jacob Carlborg";
 orb.type = Type.library;
 orb.imports = ["a.d", "b.di"]; // an array of import files

 And yes, I think these would be better simply because they're in D.
 The user
 doesn't have to switch languages.
Just to add an opinion - I think doing this work in D would foster creative uses of the language and be beneficial for improving the language itself and its standard library. Andrei
Fair point. But I'm using the tools I think are best suited for the job. If I don't think D is good enough for the job I won't use it. If it turns out that D is good enough I can use D instead. Note that the whole tool is written in D; it's just the config/spec files that use Ruby.

--
/Jacob Carlborg
Jun 19 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-18 21:35, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:iti310$2r4r$1 digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>   wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>    wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
    From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
So you prefer this, in favor of the Ruby syntax: version_("1.0.0"); author("Jacob Carlborg"); type(Type.library); imports(["a.d", "b.di"]); // an array of import files Or maybe it has to be: Orb.create((Orb orb) { orb.version_("1.0.0"); orb.author("Jacob Carlborg"); orb.type(Type.library); orb.imports(["a.d", "b.di"]); // an array of import files }); I think it's unnecessary verbose.
I'd probably consider something more like: orb.ver = "1.0.0"; orb.author = "Jacob Carlborg"; orb.type = Type.library; orb.imports = ["a.d", "b.di"]; // an array of import files
That would be doable in Ruby as well. I thought it would be better to not have to write "orb." in front of every method. Note that the following syntax is not possible in Ruby:

ver = "1.0.0"
author = "Jacob Carlborg"
type = Type.library
imports = ["a.d", "b.di"]

The above syntax is what I would prefer, but it doesn't work in Ruby; it would create local variables instead of calling instance methods. Because of that I chose the syntax I chose, the least verbose syntax I could think of.
 And yes, I think these would be better simply because they're in D. The user
 doesn't have to switch languages.

 BTW, DMD, Phobos and druntime is forcing makefiles on people (I hate
 makefiles).
I hate makefiles too, but that's not an accurate comparison: 1. On windows, DMD comes with the make needed, and on linux everyone already has GNU make. With Orb/Ruby, many people will have to go and download Ruby and install it.`
No need to download and install Ruby, it's embedded in the tool.
 2. People who *use* DMD to build their software *never* have to read or
 write a single line of makefile. *Only* people who modify the process of
 building DMD/Phobos/druntime need to do that. But anyone (or at least most
 people) who uses Orb to build their software will have to write Ruby. It
 would only be comparable if Orb only used Ruby to build Orb itself.
Ok, fair enough.
 DSSS forced an INI-similar syntax on people.
INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to be perfectly fair).
I was thinking that the Ruby syntax was as easy and trivial as the INI-syntax if you just use the basics, like I have in the examples. No need to use if-statements, loops or classes. That's just for packages that need to do very special things.

To take the DSSS syntax again as an example, this works:

version (Windows) {
}

but this doesn't:

version (Windows)
{
}

I assume this is because the lexer/parser is very simple. You don't have this problem if you use a complete language for the config/spec files.
 If the config/spec files should be in D then, as far as I know, the tool
 needs to:

 1. read the file
 2. add a main method
 3. write a new file
 4. compile the new file
 5. run the resulting binary
More like: 1. compile the pre-existing main-wrapper: // main_wrapper.d void main() { mixin(import("orbconf.d")); } like this: $ dmd main_wrapper.d -J{path containing user's "orbconf.d"} If the user specifies a filename other than the standard name, then it's still not much more: // main_wrapperB.d void main() { mixin(import("std_orbconf.d")); } Then write std_orbconf.d: // std_orbconf.d mixin(import("renamed_orbconf.d")); $ dmd main_wrapperB.d --J{path containing user's "renamed_orbconf.d"} -J{path containing "std_orbconf.d"} Also, remember that this should only need to be rebuilt if renamed_orbconf.d (or something it depends on) has changed. So you can do like rdmd does: call dmd to *just* get the deps of the main_wrapper.d+orbconf.d combination (which is much faster than an actual build) and only rebuild if they've changed - which won't be often. And again even this is only needed when the user isn't using the standard config file name. 2. run the resulting binary
 This seems very unnecessary to me. Unnecessary IO, unnecessary
 compilation, unnecessary processes (two new processes). The only thing
 this will do is slowing down everything.
1. The amount of extra stuff is fairly minimal. *Especially* in the majority of cases where the user uses the standard name ("orbconf.d" or whatever you want to call it).
OK, I guess you can get away without the IO, but you still need the extra processes.
 2. Using Ruby will slow things down, too. It's not exactly known for being a
 language that's fast to compile&run on the level of D.
Yeah, I know. Ruby is one of the slowest languages. But I was still hoping it would be faster than compiling and running a D application. Also note that since I've embedded Ruby in the tool it's not creating a new process (at least I don't think it does).

--
/Jacob Carlborg
Jun 19 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itko61$1qdm$1 digitalmars.com...
 On 2011-06-18 21:35, Nick Sabalausky wrote:
 I'd probably consider something more like:

 orb.ver = "1.0.0";
 orb.author = "Jacob Carlborg";
 orb.type = Type.library;
 orb.imports = ["a.d", "b.di"]; // an array of import files
That would be doable in Ruby as well. I though it would be better to not have to write "orb." in front of every method. Note the following syntax is not possible in Ruby: ver = "1.0.0" author = "Jacob Carlborg" type = Type.library imports = ["a.d", "b.di"] The above syntax is what I would prefer but it doesn't work in Ruby, would create local variables and not call instance methods. Because of that I chose the syntax I chose, the least verbose syntax I could think of.
That syntax should be doable in D.
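One way that bare-name syntax could work in D is a with block: the tool declares an instance and the config file's plain assignments then resolve to its fields. A minimal sketch, where the Orb/Type names and the orbspec.d file name are assumptions for illustration:

enum Type { library, executable }

struct Orb
{
    string   ver;
    string   author;
    Type     type;
    string[] imports;
}

void main()
{
    Orb orb;

    with (orb)
    {
        // inside with(orb), bare names resolve to orb's fields, so a spec
        // file containing only these statements could be pulled in here
        // with mixin(import("orbspec.d")) and a matching -J flag
        ver     = "1.0.0";
        author  = "Jacob Carlborg";
        type    = Type.library;
        imports = ["a.d", "b.di"];
    }
}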
 DSSS forced an INI-similar syntax on people.
INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to be perfectly fair).
I was thinking that the Ruby syntax was as easy and trivial as the INI-syntax if you just use the basic, like I have in the examples. No need to use if-statements, loops or classes. That's just for packages that need to do very special things.
But then the people who do such fancy things have to do it in Ruby instead of D.
 To take the DSSS syntax again as an example:


 version (Windows) {
 }


 version (Windows)
 {
}

 I assume this is because the lexer/parser is very simple. You don't have 
 this problem if you use a complete language for the config/spec files.
Right. And D is a complete language.
 1. The amount of extra stuff is fairly minimal. *Especially* in the 
 majority
 of cases where the user uses the standard name ("orbconf.d" or whatever 
 you
 want to call it).
OK, I guess you can get away without the IO, but you still need the extra processes.
That should be pretty quick.
Jun 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 20:41, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itko61$1qdm$1 digitalmars.com...
 On 2011-06-18 21:35, Nick Sabalausky wrote:
 I'd probably consider something more like:

 orb.ver = "1.0.0";
 orb.author = "Jacob Carlborg";
 orb.type = Type.library;
 orb.imports = ["a.d", "b.di"]; // an array of import files
That would be doable in Ruby as well. I though it would be better to not have to write "orb." in front of every method. Note the following syntax is not possible in Ruby: ver = "1.0.0" author = "Jacob Carlborg" type = Type.library imports = ["a.d", "b.di"] The above syntax is what I would prefer but it doesn't work in Ruby, would create local variables and not call instance methods. Because of that I chose the syntax I chose, the least verbose syntax I could think of.
That syntax should be doable in D.
 DSSS forced an INI-similar syntax on people.
INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to be perfectly fair).
I was thinking that the Ruby syntax was as easy and trivial as the INI-syntax if you just use the basic, like I have in the examples. No need to use if-statements, loops or classes. That's just for packages that need to do very special things.
But then the people who do such fancy things have to do it in Ruby instead of D.
 To take the DSSS syntax again as an example:


 version (Windows) {
 }


 version (Windows)
 {
}

 I assume this is because the lexer/parser is very simple. You don't have
 this problem if you use a complete language for the config/spec files.
Right. And D is a complete language.
 1. The amount of extra stuff is fairly minimal. *Especially* in the
 majority
 of cases where the user uses the standard name ("orbconf.d" or whatever
 you
 want to call it).
OK, I guess you can get away without the IO, but you still need the extra processes.
That should be pretty quick.
Ok, for now I will continue with Ruby and see how it goes. One thing I do think looks really ugly in D is delegates. For the Dake config file, I'm thinking that it would allow several targets and tasks (like rake), which would look something like this (in Ruby):

target "name" do |t|
    t.flags = "-L-lz"
end

In D this would look something like this:

target("name", (Target t) {
    t.flags = "-L-lz"
});

Or with operator overload abuse:

target("name") in (Target t) {
    t.flags = "-L-lz"
};

I would so love it if this syntax (that's been suggested before) was supported:

target("name", Target t) {
    t.flags = "-L-lz"
}

If anyone has better ideas for how this can be done I'm listening.

One other thing, the syntax below can be thought of as a compile-time eval:

void main ()
{
    mixin(import("file.d"));
}

Does anyone have an idea if it would be possible to do the equivalent of the instance_eval that is available in some scripting languages?

--
/Jacob Carlborg
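A minimal, compilable sketch of the delegate-based form above; the names target, Target, targets and flags are hypothetical and not Dake's actual API:

// a hypothetical target description the config file fills in
class Target
{
    string name;
    string flags;
    this(string name) { this.name = name; }
}

Target[string] targets; // registry populated while the config file runs

// register a target and let the callable configure it; taking the callable
// as a template parameter accepts both delegate and function literals
void target(F)(string name, F configure)
{
    auto t = new Target(name);
    configure(t);
    targets[name] = t;
}

void main()
{
    // what an entry in the config file could look like with a D2 lambda
    target("name", (Target t) {
        t.flags = "-L-lz";
    });
}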
Jun 19 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 19.06.2011 23:57, Jacob Carlborg wrote:
 On 2011-06-19 20:41, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itko61$1qdm$1 digitalmars.com...
 On 2011-06-18 21:35, Nick Sabalausky wrote:
 I'd probably consider something more like:

 orb.ver = "1.0.0";
 orb.author = "Jacob Carlborg";
 orb.type = Type.library;
 orb.imports = ["a.d", "b.di"]; // an array of import files
That would be doable in Ruby as well. I though it would be better to not have to write "orb." in front of every method. Note the following syntax is not possible in Ruby: ver = "1.0.0" author = "Jacob Carlborg" type = Type.library imports = ["a.d", "b.di"] The above syntax is what I would prefer but it doesn't work in Ruby, would create local variables and not call instance methods. Because of that I chose the syntax I chose, the least verbose syntax I could think of.
That syntax should be doable in D.
 DSSS forced an INI-similar syntax on people.
INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to be perfectly fair).
I was thinking that the Ruby syntax was as easy and trivial as the INI-syntax if you just use the basic, like I have in the examples. No need to use if-statements, loops or classes. That's just for packages that need to do very special things.
But then the people who do such fancy things have to do it in Ruby instead of D.
 To take the DSSS syntax again as an example:


 version (Windows) {
 }


 version (Windows)
 {
}

 I assume this is because the lexer/parser is very simple. You don't 
 have
 this problem if you use a complete language for the config/spec files.
Right. And D is a complete language.
 1. The amount of extra stuff is fairly minimal. *Especially* in the
 majority
 of cases where the user uses the standard name ("orbconf.d" or 
 whatever
 you
 want to call it).
OK, I guess you can get away without the IO, but you still need the extra processes.
That should be pretty quick.
Ok, for now I will continue with Ruby and see how it goes. One thing I do think looks really ugly in D are delegates. For the Dake config file, I'm thinking that it would allow several targets and tasks (like rake), which would looks something like this (in Ruby): target "name" do |t| t.flags = "-L-lz" end In D this would look something like this: target("name", (Target t) { t.flags = "-L-lz" }); Or with operator overload abuse: target("name") in (Target t) { t.flags = "-L-lz" }; I would so love if this syntax (that's been suggested before) was supported: target("name", Target t) { t.flags = "-L-lz" }
Why have the name as a run-time parameter? I'd expect more like (given there is a Target struct or class):

// somewhere at top
Target cool_lib, ...;

then:

with(cool_lib) {
    flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.
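A sketch of that declaration style as a file that actually compiles; the Target struct and its field names are placeholders, and since with is a statement, the assignments have to sit inside a function or a static constructor:

struct Target
{
    string   flags;
    string[] sources;
}

Target cool_lib;

static this()
{
    with (cool_lib)
    {
        flags   = "-L-lz";
        sources = ["a.d", "b.d"];
    }
}

void main()
{
    // by the time main runs, the static constructor has configured cool_lib
    assert(cool_lib.flags == "-L-lz");
}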
 If anyone have better ideas for how this can be done I'm listening.

 One other thing, syntax below can be thought of like a compile time eval:

 void main ()
 {
     mixin(import("file.d"));
 }

 Does anyone have an idea if it would be possible to do the 
 corresponding to instance eval that is available in some scripting 
 languages?
-- Dmitry Olshansky
Jun 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then? -- /Jacob Carlborg
Jun 20 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating an orbspec like this:

// first module
module orb_orange;
mixin(import ("orange.orbspec"));
//
// builder entry point
void main()
{
    foreach(member; __traits(allMembers, orb_orange))
    {
        static if(typeof(member) == Target){
            // do necessary actions, sort out priority and construct a worklist
        }
        else //static if (...)
        //...could be others I mentioned
        {
        }
    }
    // all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack it in a struct instead of a separate module, though errors in the script would be harder to report (but at least static constructors would be controlled!). More adequate would be, of course, to pump it to dmd from stdin...

--
Dmitry Olshansky
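A version of that builder sketch adjusted so it should build with D2: since each member name is a string, the type test has to go through __traits(getMember, ...). The module name, the Target struct and the orange.orbspec file are placeholders, and the spec is pulled in with a string import, so dmd needs the matching -J path:

module orb_orange;

// placeholder for whatever target type the orbspec declares instances of
struct Target
{
    string   flags;
    string[] sources;
}

// pulls in declarations such as `Target cool_lib;`
mixin(import("orange.orbspec"));

void main()
{
    foreach (name; __traits(allMembers, orb_orange))
    {
        // keep every module-level declaration whose type is Target
        static if (is(typeof(__traits(getMember, orb_orange, name)) == Target))
        {
            auto t = __traits(getMember, orb_orange, name);
            // sort out priorities, build the work list, and so on
        }
    }
}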
Jun 20 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. -- /Jacob Carlborg
Jun 20 2011
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 20.06.2011 15:35, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given 
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
Well, everything about compile-time introspection could be labeled a hack. In fact I've just seen the aforementioned "hack" on a much grander scale being used in an upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577

And personally, hacks should look ugly, or they are just features or at best shortcuts ;)

Personal things aside, I still suggest you switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw a cute curl wrapper is coming in a matter of days). But other than that... just look at all these candies (opDispatch anyone?) :) And even if porting is a piece of work, I suspect there are a lot of people out there that would love to help this project (given the lofty goal that the config would be written in D, and not Ruby).

--
Dmitry Olshansky
Jun 20 2011
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 20.06.2011 16:35, Dmitry Olshansky wrote:
 On 20.06.2011 15:35, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given 
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;) Personal things aside I still suggest you to switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw cute curl wrapper is coming in a matter of days). But other then that... just look at all these candies ( opDispatch anyone? ) :) And even if porting is a piece of work, I suspect there a lot of people out there that would love to help this project. (given the lofty goal that config would be written in D, and not Ruby)
Just looked through the source; it seems like you are doing a lot of work that's already been done in Phobos, so it might be worth doing a port to D2. Some simple wrappers might be needed, but ultimately:

util.traits --> std.traits
core.array --> std.array + std.algorithm
io.path --> std.file & std.path
orgb.util.OptinoParser --> std.getopt

util.singleton should probably be pulled into Phobos, but as a thread-safe shared version.

--
Dmitry Olshansky
Jun 20 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-20 14:49, Dmitry Olshansky wrote:
 On 20.06.2011 16:35, Dmitry Olshansky wrote:
 On 20.06.2011 15:35, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;) Personal things aside I still suggest you to switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw cute curl wrapper is coming in a matter of days). But other then that... just look at all these candies ( opDispatch anyone? ) :) And even if porting is a piece of work, I suspect there a lot of people out there that would love to help this project. (given the lofty goal that config would be written in D, and not Ruby)
Just looked through the source , it seems like you are doing a lot of work that's already been done in Phobos, so it might be worth doing a port to D2. Some simple wrappers might be needed, but ultimately:
First I have to say that these simple modules are no reason to port to D2. Second, here are a couple of other reasons:

* These modules (at least some of them) are quite old; pieces of some of them originate back from 2007 (before D2)
* These modules also started out as a common API for Phobos and Tango functions
* Some of these modules also contain specific functions and names for easing Java and C++ porting

Overall I like the API of the modules; some functions are aliases for Tango/Phobos functions with names I like better and some are just wrappers with a new API.
 util.traits --> std.traits
As far as I can see, most of these functions don't exist in std.traits.
 core.array --> std.array + std.algorithm
When I work with arrays I want to work with arrays, not some other kind of type like a range. I do understand the theoretical idea of having containers and algorithms separated, but in practice I've never needed it.
 io.path --> std.file & std.path
Some of these exist in std.file and some don't.
 orgb.util.OptinoParser --> std.getopt
This is a wrapper for the Tango argument parser, because I like this API better.
 util.singleton should probably be pulled into Phobos, but a thread safe
 shared version.
Yes, but it isn't in Phobos yet. -- /Jacob Carlborg
Jun 20 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-20 14:35, Dmitry Olshansky wrote:
 On 20.06.2011 15:35, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;)
I personally think that just because Phobos uses these features doesn't make them any less "hackish".
 Personal things aside I still suggest you to switch it to D2. I can
 understand if Phobos is just not up to snuff for you yet (btw cute curl
 wrapper is coming in a matter of days). But other then that... just look
 at all these candies ( opDispatch anyone? ) :)
 And even if porting is a piece of work, I suspect there a lot of people
 out there that would love to help this project.
 (given the lofty goal that config would be written in D, and not Ruby)
D2 has many cool new features and I would love to use some of them, but every time I try they don't work. I'm tired of using a language that's not ready. I still think Tango is a better library and I like it better than Phobos, although Phobos is doing a great job of filling in the feature gaps with every new release.

--
/Jacob Carlborg
Jun 20 2011
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Jacob Carlborg wrote:
 I had no idea that you could do that. It seems somewhat complicated
 and like a hack.
There's nothing really hacky about that - it's a defined and fairly complete part of the language. It's simpler than it looks too... the syntax is slightly long, but conceptually, you're just looping over an array of members. Combined with the stuff in std.traits to make it a little simpler, there's lots of nice stuff you can do in there.
Jun 20 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/20/11 6:35 AM, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85) does that, too. Overall I believe porting Orbit to D2 and making it use D2 instead of Ruby in configuration would increase its chances to become popular and accepted in tools/. Andrei
Jun 20 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-20 15:28, Andrei Alexandrescu wrote:
 On 6/20/11 6:35 AM, Jacob Carlborg wrote:
 On 2011-06-20 10:59, Dmitry Olshansky wrote:
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.
std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85) does that, too. Overall I believe porting Orbit to D2 and making it use D2 instead of Ruby in configuration would increase its chances to become popular and accepted in tools/. Andrei
See my reply to Dmitry. BTW has std.benchmark gone through the regular review process? -- /Jacob Carlborg
Jun 20 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 See my reply to Dmitry.
I see this as a dogfood issue. If there are things that should be in Phobos and aren't, it would benefit everybody to add them to Phobos. Anyhow, it all depends on what you want to do with the tool. If it's written in D1, we won't be able to put it on the github D-programming-language/tools repository (which doesn't mean it won't become widespread).
 BTW has std.benchmark gone through the regular review process?
I was sure someone would ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will gain from it.

Andrei
Jun 20 2011
next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained.
I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Jun 20 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
 On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained.
I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Perfect. Would anyone want to be the review manager? Lars? :o)

Andrei
Jun 21 2011
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:

 On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
 On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained.
I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Perfect. Anyone would want to be the review manager? Lars? :o)
I would, but in two weeks I am going away on vacation, and that will be in the middle of the review process. Any other volunteers? -Lars
Jun 21 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/21/11 9:14 AM, Lars T. Kyllingstad wrote:
 On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:

 On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
 On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained.
I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Perfect. Anyone would want to be the review manager? Lars? :o)
I would, but in two weeks I am going away on vacation, and that will be in the middle of the review process. Any other volunteers? -Lars
BTW if libcurl is ready for review that should be the more urgent item. Andrei
Jun 21 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-06-21 07:17, Andrei Alexandrescu wrote:
 On 6/21/11 9:14 AM, Lars T. Kyllingstad wrote:
 On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:
 On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
 On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained.
I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Perfect. Anyone would want to be the review manager? Lars? :o)
I would, but in two weeks I am going away on vacation, and that will be in the middle of the review process. Any other volunteers? -Lars
BTW if libcurl is ready for review that should be the more urgent item.
It looks like libcurl needs more bake time first, so if we're going to review std.benchmark, it can go first. Since no one else has stepped forward to do it, I can be the review manager. Given the relative simplicity of std.benchmark and the fact that something like half of it was in std.datetime to begin with, do you think that reviewing until July 1st (a little over a week) would be enough before voting on it, or do you think that it should go longer? We can always extend the time if it turns out that it needs a longer period than that, but if you think that it's likely to need more review and updating, then we might as well select a longer time to begin with.

- Jonathan M Davis
Jun 22 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 00:32, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 See my reply to Dmitry.
I see this as a dogfood issue. If there are things that should be in Phobos and aren't, it would gain everybody to add them to Phobos.
Not all of these are missing. For some of the things I just like doing it differently than how Phobos does it.
 Anyhow, it all depends on what you want to do with the tool. If it's
 written in D1, we won't be able to put it on the github
 D-programming-language/tools (which doesn't mean it won't become
 widespread).
So now suddenly D1 is banned? It seems like you are trying to destroy all traces of D1. I think it would be better for everyone if you instead encouraged people to use D of any version, not just D2.
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained. Andrei
Why would std.benchmark be an exception? Shouldn't all new modules and big refactorings of existing ones go through the review process? If no one thinks it's worth putting std.benchmark through the review process, then it seems to me that people don't think it's worth adding to Phobos.

--
/Jacob Carlborg
Jun 21 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/21/11 4:18 AM, Jacob Carlborg wrote:
 On 2011-06-21 00:32, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 See my reply to Dmitry.
I see this as a dogfood issue. If there are things that should be in Phobos and aren't, it would gain everybody to add them to Phobos.
All of these are not missing. For some of the things I just like doing it differently then how Phobos does it.
I understand.
 Anyhow, it all depends on what you want to do with the tool. If it's
 written in D1, we won't be able to put it on the github
 D-programming-language/tools (which doesn't mean it won't become
 widespread).
So now suddenly D1 is banned? Seems like you are trying to destroy all traces of D1. I think it would be better for all if you instead encourage people to use D of any version and not use D2.
No need to politicize this - as I said, it's a matter of dogfood, as well as one of focusing our efforts. You seem to not like the way D and its standard library work, which is entirely fine, except when it comes to adding an official tool.
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained. Andrei
Why would std.benchmark be an exception? Shouldn't all new modules and big refactoring of existing ones go through the review process?
Again, the matter has been incidental - the module has grown from the desire to reduce std.datetime. The new code only adds a couple of functions. Going through the review process will definitely be helpful.
 If none
 one thinks it's worth putting std.benchmark through the review process
 then it seems to me that people isn't thinking it worth adding to Phobos.
I wrote these functions for two reasons. One, I want to add a collection of benchmarks to Phobos itself so we can keep tabs on performance. Second, few people know how to write a benchmark and these functions help to some extent, so the functions may be of interest beyond Phobos. My perception is that there is an underlying matter making you look for every opportunity to pick a fight. Your posts as of late have been increasingly abrupt. Only in the post I'm replying to you have attempted to ascribe political motives to me, to frame me as one who thinks is above the rules, and to question the worthiness of my work. Instead of doing all that, it may be more productive to focus on the core matter and figuring out a way to resolve it. Thanks, Andrei
Jun 21 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 16:02, Andrei Alexandrescu wrote:
 On 6/21/11 4:18 AM, Jacob Carlborg wrote:
 On 2011-06-21 00:32, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 See my reply to Dmitry.
I see this as a dogfood issue. If there are things that should be in Phobos and aren't, it would gain everybody to add them to Phobos.
All of these are not missing. For some of the things I just like doing it differently then how Phobos does it.
I understand.
 Anyhow, it all depends on what you want to do with the tool. If it's
 written in D1, we won't be able to put it on the github
 D-programming-language/tools (which doesn't mean it won't become
 widespread).
So now suddenly D1 is banned? Seems like you are trying to destroy all traces of D1. I think it would be better for all if you instead encourage people to use D of any version and not use D2.
No need to politicize this - as I said, it's a matter of dogfood, as well as one of focusing our efforts. You seem to not like the way D and its standard library work, which is entirely fine, except when it comes about adding an official tool.
I do like D1 and, in general, D2. What I have the most problems with is Phobos, and that D2 sometimes (too often for me) doesn't work. If we're talking about making it an official tool I can understand that you want it to be written in D2 with Phobos. On the other hand, I think that the D community should encourage all developers using D, regardless of which version or standard library they use. The community is too small for anything else.
 BTW has std.benchmark gone through the regular review process?
I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained. Andrei
Why would std.benchmark be an exception? Shouldn't all new modules and big refactoring of existing ones go through the review process?
Again, the matter has been incidental - the module has grown from the desire to reduce std.datetime. The new code only adds a couple of functions. Going through the review process will definitely be helpful.
 If none
 one thinks it's worth putting std.benchmark through the review process
 then it seems to me that people isn't thinking it worth adding to Phobos.
I wrote these functions for two reasons. One, I want to add a collection of benchmarks to Phobos itself so we can keep tabs on performance. Second, few people know how to write a benchmark and these functions help to some extent, so the functions may be of interest beyond Phobos. My perception is that there is an underlying matter making you look for every opportunity to pick a fight. Your posts as of late have been increasingly abrupt. Only in the post I'm replying to you have attempted to ascribe political motives to me, to frame me as one who thinks is above the rules, and to question the worthiness of my work. Instead of doing all that, it may be more productive to focus on the core matter and figuring out a way to resolve it. Thanks, Andrei
I'm sorry if my posts are abrupt. I'm not very good at writing in the first place, and English not being my native language doesn't help. Sometimes I just want to answer something to basically indicate that I've read the reply; that may look abrupt, I don't know. I just want to say one more thing (hoping you don't think I'm too offensive) and that is that you sometimes seem to want to pretend that there is no D1 and never has been.

Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth adding. On the other hand, isn't this what the review process is about (or maybe this is before the review process)? We can't include every library in the standard library, and I assume we don't want that. I just saw a new module with almost 1k lines of code, and some additional changes as well, and was wondering why this hasn't gone through the review process. In the end I'm just trying to defend my code and ideas. Should I not have answered the feedback I got on my ideas?

Anyway, I have no problem dropping this discussion and focusing on the core matter and figuring out a way to resolve it.

--
/Jacob Carlborg
Jun 21 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large standard library. Given our manpower, I don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful.

- Jonathan M Davis
Jun 21 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 19:36, Jonathan M Davis wrote:
 On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well. -- /Jacob Carlborg
Jun 21 2011
next sibling parent reply Jimmy Cao <jcao219 gmail.com> writes:
On Tue, Jun 21, 2011 at 1:01 PM, Jacob Carlborg <doob me.com> wrote:

 On 2011-06-21 19:36, Jonathan M Davis wrote:

 On 2011-06-21 10:17, Jacob Carlborg wrote:

 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well. -- /Jacob Carlborg
A large standard library is one of the greatest advantages of .NET programming.
Jun 21 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-21 20:11, Jimmy Cao wrote:
 On Tue, Jun 21, 2011 at 1:01 PM, Jacob Carlborg <doob me.com
 <mailto:doob me.com>> wrote:

     On 2011-06-21 19:36, Jonathan M Davis wrote:

         On 2011-06-21 10:17, Jacob Carlborg wrote:

             Maybe I was a bit too harsh saying that std.benchmark maybe
             wasn't worth
             adding. On the other hand isn't this what the review process
             is about
             (or maybe this is before the review process)? We can't include

             library, I assume we don't want that.


         Why not? Granted, we want quality code, and we only have so many
         people
         working on Phobos and only so many people to help vet code, but
         assuming that
         it can be written at the appropriate level of quality and that the
         functionality is generally useful, I don't see why we wouldn't
         want a large

         manpower, I don't
         expect that we'll ever have a standard library that large, but I
         don't see why
         having a large standard library would be a bad thing as long as
         it's of high
         quality and its functionality is generally useful.

         - Jonathan M Davis


     I just got that impression. That we want a relative small standard
     library and have other libraries available as well.

     --
     /Jacob Carlborg



 greatest advantages of .NET programming.
I'm not saying there's something wrong with having a standard library as large as .NET's; I just got the impression that we wanted a smaller one.

--
/Jacob Carlborg
Jun 21 2011
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On 2011-06-21 11:01, Jacob Carlborg wrote:
 On 2011-06-21 19:36, Jonathan M Davis wrote:
 On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a manpower, I don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well.
I don't know how everyone else feels about it, but I see no problem with having a large standard library as long as it's of high quality and its functionality is generally useful. A large standard library is generally considered a valuable asset. One of the major advantages of Python which frequently gets touted is its large standard library. I definitely see a large standard library as advantageous. The trick is being able to develop it, keeping it of high quality, and actually having everything in it be generally useful.

We don't have a lot of people working on Phobos, so naturally, it's going to be smaller. If quality is a major focus, then the size is going to tend to be smaller as well. And if we try to avoid functionality which is overly specific and not generally useful, then that's going to make the library smaller as well. We have been pushing for both high quality and general usefulness in what is added to Phobos, so it hasn't exactly been growing by leaps and bounds, and with the limited resources that we have, it takes time to improve and enlarge it even if we want it to be large. So, Phobos is naturally smaller than many standard libraries (particularly those backed by large companies) and will continue to be so. But I think that having a large, high quality, generally useful standard library is very much what we should be striving for, even if for now that's pretty much restricted to high quality and generally useful.

Now, maybe there are other folks on the Phobos dev team or on the newsgroup who want Phobos to be small, but I really think that experience has shown that large standard libraries are generally an asset to a language. The trick is ensuring that the functionality they have is of high quality and appropriately general for a standard library.

- Jonathan M Davis
Jun 21 2011
prev sibling parent reply Byakkun <byakkun myopera.com> writes:
On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg <doob me.com> wrote:

 On 2011-06-21 19:36, Jonathan M Davis wrote:
 On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't  
 worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well.
I see only one perspective from which you wouldn't want a good, large standard library like Java's, and that is the fact that you can't realistically hope to have the IDEs they have, which integrate facilities for accessing the documentation very easily, or let you just rely on auto-completion (which also gives Java part of its appeal). This is worth considering for Phobos: having as much as possible in std, and having it be useful.

My only concern (excepting bugs and holes in Phobos) is that the packages are not grouped at all, and that increases the time (at least for a noob) it takes to search through the documentation and the code. Also, there is some ambiguity regarding the place of some functionality, like std.array and std.string (I found myself surprised in other areas too, but I can't remember right now), which I imagine could be fixed simply by using the D module system intelligently. But maybe there are reasons for doing it this way which I don't get.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
Jun 21 2011
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On 2011-06-21 14:27, Byakkun wrote:
 On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg <doob me.com> wrote:
 On 2011-06-21 19:36, Jonathan M Davis wrote:
 On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't
 worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well.
I see only one perspective from which you would like to not have good and that is the fact that you can't realistically hope to have the IDEs they have which integrate facilities to access the documentation very easily or one can just to rely on auto-completion (which also gives Java This is worthy of consideration for phobos (the fact to have as much std as possible and useful. My only concern (excepting bugs and holes in Phobos) is that the packages are not grouped at all and that increases the time (at least for a noob) it take to search through the documentation and the code. Also there is some ambiguity to regarding the place of some functionality like std.array and std.string (I fond myself surprised in other areas but I can't remember right now) which I imagine it could be fixed simply by intelligently using D module system. But maybe there are reasons for doing it this way which I don't get.
As far as std.array vs std.string goes, functionality which generalizes to arrays is supposed to be in std.array, whereas functionality which only makes sense for strings belongs in std.string. For instance, toUpper makes sense in std.string but not in std.array, since it only makes sense to uppercase strings, not general characters, whereas a function like replicate makes sense for arrays in general, so it's in std.array.

Where it's likely to be surprising is with functions like split, where you would initially think that it applies only to strings, but it has an overload which is more generally applicable, so it's in std.array. And several functions were moved to std.array from std.string a couple of releases back, so if you were used to having them in std.string, it could throw you off.

There are probably a few places where functions might be better moved to another module, and there are definitely cases where it's debatable whether they belong in one module or another, but overall things are fairly well organized. In some cases, we may eventually have to move to a deeper hierarchy, but with what we have at the moment, I don't think a deeper hierarchy would help us much. It's not like Java where everything is a class and every class is in its own module. In that kind of environment, you pretty much have to have a deep hierarchy. But that's not the case with D.

- Jonathan M Davis
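To make the split concrete, here is a small illustration (using today's Phobos names, which may differ slightly from the 2011 modules):

import std.array : replicate, split;
import std.string : toUpper;
import std.stdio : writeln;

void main()
{
    // toUpper only makes sense for strings, so it lives in std.string.
    writeln(toUpper("phobos"));              // PHOBOS

    // replicate works on any array, so it lives in std.array.
    writeln(replicate([1, 2], 3));           // [1, 2, 1, 2, 1, 2]
    writeln(replicate("ab", 2));             // abab

    // split looks string-specific, but it has a general overload,
    // which is why it ended up in std.array as well.
    writeln(split("a b c"));                 // ["a", "b", "c"]
    writeln(split([1, 0, 2, 0, 3], 0));      // [[1], [2], [3]]
}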
Jun 21 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-21 23:27, Byakkun wrote:
 On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg <doob me.com> wrote:

 On 2011-06-21 19:36, Jonathan M Davis wrote:
 On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't
 worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include

 library, I assume we don't want that.
Why not? Granted, we want quality code, and we only have so many people working on Phobos and only so many people to help vet code, but assuming that it can be written at the appropriate level of quality and that the functionality is generally useful, I don't see why we wouldn't want a large I don't expect that we'll ever have a standard library that large, but I don't see why having a large standard library would be a bad thing as long as it's of high quality and its functionality is generally useful. - Jonathan M Davis
I just got that impression. That we want a relative small standard library and have other libraries available as well.
I see only one perspective from which you would like to not have good and that is the fact that you can't realistically hope to have the IDEs they have which integrate facilities to access the documentation very easily or one can just to rely on auto-completion (which also gives Java This is worthy of consideration for phobos (the fact to have as much std as possible and useful. My only concern (excepting bugs and holes in Phobos) is that the packages are not grouped at all and that increases the time (at least for a noob) it take to search through the documentation and the code. Also there is some ambiguity to regarding the place of some functionality like std.array and std.string (I fond myself surprised in other areas but I can't remember right now) which I imagine it could be fixed simply by intelligently using D module system. But maybe there are reasons for doing it this way which I don't get.
-- /Jacob Carlborg
Jun 22 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Dmitry Olshansky" <dmitry.olsh gmail.com> wrote in message 
news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this:

//first module
module orb_orange;
mixin(import ("orange.orbspec"));
//
// builder entry point
void main()
{
    foreach(member; __traits(allMembers, orb_orange))
    {
        static if(typeof(member) == Target)
        {
            //do necessary actions, sort out priority and construct a worklist
        }
        else //static if (...)
        //...could be others I mentioned
        {
        }
    }
    //all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack it in a struct instead of a separate module, though errors in the script would be harder to report (but at least static constructors would be controlled!). More adequate would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Jun 20 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 20.06.2011 23:39, Nick Sabalausky wrote:
 "Dmitry Olshansky"<dmitry.olsh gmail.com>  wrote in message
 news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Nice thinking, but default constructors for structs? Of course, it could be a class... Then there could probably be useful derived things like Executable, Library, and so on.

--
Dmitry Olshansky
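For what it's worth, a rough sketch of that self-registering approach (the names here are assumptions for illustration, not an actual Orb API) could look like this:

class Target
{
    // The registry the build tool would later walk over.
    static Target[] registry;

    string name;
    string flags;

    this(string name)
    {
        this.name = name;
        registry ~= this;   // the constructor registers the instance with the tool
    }
}

class Executable : Target { this(string name) { super(name); } }
class Library    : Target { this(string name) { super(name); } }

// In a plain-D orbspec the user would then only write something like:
void example()
{
    auto coolLib = new Library("cool_lib");
    coolLib.flags = "-L-lz";
}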
Jun 20 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-20 22:45, Dmitry Olshansky wrote:
 On 20.06.2011 23:39, Nick Sabalausky wrote:
 "Dmitry Olshansky"<dmitry.olsh gmail.com> wrote in message
 news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc.
I really don't like that the user needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to it. Then the tool can evaluate the block in the context of an instance; the user never has to care about instances.

--
/Jacob Carlborg
Jun 20 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 21.06.2011 1:36, Jacob Carlborg wrote:
 On 2011-06-20 22:45, Dmitry Olshansky wrote:
 On 20.06.2011 23:39, Nick Sabalausky wrote:
 "Dmitry Olshansky"<dmitry.olsh gmail.com> wrote in message
 news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc.
I really don't like that the users needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to the function. Then the tool can evaluate the block in the context of an instance. The user would never have to care about instances.
I'm not getting what's wrong with it. Your magical block is still getting some _name_ as a string, right? I suspect it's even an advantage if you can't pass arbitrary strings, only proper instances; e.g. it's harder to mistype a name thanks to type checking. What's so good about having to type all these names over and over again without keeping track of how many you inadvertently referenced?

Taking your example, what if I typed name2 instead of name here; what would the tool's actions be?

target "name" do |t|
    t.flags = "-L-lz"
end

Create a new target and set its flags? I can't see any reasonable error checking to disambiguate it at all. More than that, now I'm not sure what it was supposed to do in the first place - update the flags of an existing Target instance with the name "name"? Right now I think it would be much better to initialize them in the first place. IMHO, every time I create a build script I usually care about the number of targets and their names.

P.S. Also, about D as a config language: take version statements into account; here they make a lot of sense.

--
Dmitry Olshansky
Jun 20 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 00:30, Dmitry Olshansky wrote:
 On 21.06.2011 1:36, Jacob Carlborg wrote:
 On 2011-06-20 22:45, Dmitry Olshansky wrote:
 On 20.06.2011 23:39, Nick Sabalausky wrote:
 "Dmitry Olshansky"<dmitry.olsh gmail.com> wrote in message
 news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc.
I really don't like that the users needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to the function. Then the tool can evaluate the block in the context of an instance. The user would never have to care about instances.
I'm not getting what's wrong with it. Your magical block is still getting some _name_ as string right? I suspect it's even an advantage if you can't type pass arbitrary strings to a block only proper instances, e.g. it's harder to mistype a name due to a type checking.
This is starting to get confusing. You're supposed to pass an arbitrary string to the function and then _receive_ an instance in the block.
 What's so good about having to type all these name over and over again
 without keeping track of how many you inadvertently referenced?
You shouldn't have to repeat the name.
 Taking your example, what if I typed name2 instead of name here, what
 would be the tool actions:
 target "name" do |t|
 t.flags = "-L-lz"
 end
"target" works like this: 1. You call "target" passing in the name of the target and a block 2. "target" then call the block passing in an instance of a Target class (or similar) 3. In the block you then specify all the necessary settings you need for this particular target. You should only call "target" once for each target. So, if you pass in "name2" instead of "name" you would create a new target. I haven't figured out what should happen if you call "target" twice with the same name. Also note that this would be sufficient: target "name" do flags "-l-lz" end In that case you wouldn't even have to care about "t" or that it even exists an instance behind the since. It would just be syntax. You can have a look at how Rake and Rubgems do this: If you look at the Rake examples: http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would work the same as a Rake task. Have a look at the top example of: http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html
 Create new target and set it's flags? I can't see a reasonable error
 checking to disambiguate it at all.
 More then that now I'm not sure what it was supposed to do in the first
 place - update flags of existing Target instance with name "name" ?
 Right now I think it could be much better to initialize them in the
 first place.

 IMHO every time I create a build script I usually care about number of
 targets and their names.

 P.S. Also about D as config language : take into account version
 statements, here they make a lot of sense.
Yes, version statements will be available as well. -- /Jacob Carlborg
Jun 21 2011
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 21.06.2011 13:07, Jacob Carlborg wrote:
 On 2011-06-21 00:30, Dmitry Olshansky wrote:
 On 21.06.2011 1:36, Jacob Carlborg wrote:
 On 2011-06-20 22:45, Dmitry Olshansky wrote:
 On 20.06.2011 23:39, Nick Sabalausky wrote:
 "Dmitry Olshansky"<dmitry.olsh gmail.com> wrote in message
 news:itn2el$2t2v$1 digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given
 there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = "-L-lz";
 }

 I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it does, how would the tool get it then?
If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin...
Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc.
I really don't like that the users needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to the function. Then the tool can evaluate the block in the context of an instance. The user would never have to care about instances.
I'm not getting what's wrong with it. Your magical block is still getting some _name_ as string right? I suspect it's even an advantage if you can't type pass arbitrary strings to a block only proper instances, e.g. it's harder to mistype a name due to a type checking.
This is starting to get confusing. You're supposed to be passing an arbitrary strings to the function and then _receive_ an instance to the block.
 What's so good about having to type all these name over and over again
 without keeping track of how many you inadvertently referenced?
You shouldn't have to repeat the name.
 Taking your example, what if I typed name2 instead of name here, what
 would be the tool actions:
 target "name" do |t|
 t.flags = "-L-lz"
 end
"target" works like this: 1. You call "target" passing in the name of the target and a block 2. "target" then call the block passing in an instance of a Target class (or similar) 3. In the block you then specify all the necessary settings you need for this particular target. You should only call "target" once for each target. So, if you pass in "name2" instead of "name" you would create a new target. I haven't figured out what should happen if you call "target" twice with the same name. Also note that this would be sufficient: target "name" do flags "-l-lz" end
So it's a way to _create_ instances. I suspected there could be a need to add some extra options to an existing one. Imagine creating a special version of a package; IMO it's better when all this extra is packaged in one place, not in every block. BTW, this doesn't look any better than a possible D version:

spec = Gem::Specification.new do |s|
    s.name = 'example'
    s.version = '1.0'
    s.summary = 'Example gem specification'
    ...
end

In any case there is now an instance named spec, right? So the user still has to manage some variables...
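For comparison, one possible shape of the D version alluded to here (purely an assumption for illustration, not an actual Orbit/Dake API) could use a delegate in place of the Ruby block:

struct Spec
{
    string name;
    string version_;   // `version` is a keyword in D
    string summary;
}

Spec specification(void delegate(ref Spec) fill)
{
    Spec s;
    fill(s);            // the delegate plays the role of the Ruby block
    return s;
}

void example()
{
    auto spec = specification((ref Spec s) {
        s.name     = "example";
        s.version_ = "1.0";
        s.summary  = "Example gem specification";
    });
}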
 In that case you wouldn't even have to care about "t" or that it even 
 exists an instance behind the since. It would just be syntax.

 You can have a look at how Rake and Rubgems do this:

 If you look at the Rake examples: 
 http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would 
 work the same as a Rake task.
 Have a look at the top example of: 
 http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html

 Create new target and set it's flags? I can't see a reasonable error
 checking to disambiguate it at all.
 More then that now I'm not sure what it was supposed to do in the first
 place - update flags of existing Target instance with name "name" ?
 Right now I think it could be much better to initialize them in the
 first place.

 IMHO every time I create a build script I usually care about number of
 targets and their names.

 P.S. Also about D as config language : take into account version
 statements, here they make a lot of sense.
Yes, version statements will be available as well.
-- Dmitry Olshansky
Jun 21 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 12:04, Dmitry Olshansky wrote:
 On 21.06.2011 13:07, Jacob Carlborg wrote:
 "target" works like this:

 1. You call "target" passing in the name of the target and a block

 2. "target" then call the block passing in an instance of a Target
 class (or similar)

 3. In the block you then specify all the necessary settings you need
 for this particular target.

 You should only call "target" once for each target. So, if you pass in
 "name2" instead of "name" you would create a new target. I haven't
 figured out what should happen if you call "target" twice with the
 same name.

 Also note that this would be sufficient:

 target "name" do
 flags "-l-lz"
 end
So it's a way to _create_ instances. I suspected there could be need to add some extra options to existing. Imagine creating special version of package, IMO it's better when all this extra is packaged at one place not in every block. BTW this doesn't look any better then possible D version: spec = Gem::Specification.new do |s| s.name = 'example' s.version = '1.0' s.summary = 'Example gem specification' ... end In any case there is now instance named spec, right? So user still have to manage some variables...
No, no, no. Have you read my previous messages and the wiki? The above syntax is used by Rubygems; Rake uses a similar one, and Orbit and Dake will also use a similar syntax, though still slightly different. The concepts are the same: calling a method and passing along a block.

The syntax used by Orbit doesn't actually need blocks at all, because you can only have one package in one orbspec. The syntax will look like this:

name "example"
version "1.0"
summary "Example gem specification"

Dake will have blocks in the syntax for its config files; this is because multiple targets and tasks are supported within the same file. The syntax will look like this:

target "<name>" do
    flags "-L-l"
    product "foobar"
    type :executable
end

In this case, <name> would refer to a single D file or a directory with multiple D files. If you want to have settings for multiple targets then you just put that code outside any of the blocks, at the global scope (or pass a block to a method named "global", haven't decided yet).

And similar for tasks:

task "foo" do

end

A task is just some code you can run from the command line via the tool:

dake foo

As you can see, no variables and no instances for the user to keep track of. It seems that I actually do need to write down a complete specification for these config/spec files.

--
/Jacob Carlborg
Jun 21 2011
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 21.06.2011 15:53, Jacob Carlborg wrote:
 On 2011-06-21 12:04, Dmitry Olshansky wrote:
 On 21.06.2011 13:07, Jacob Carlborg wrote:
 "target" works like this:

 1. You call "target" passing in the name of the target and a block

 2. "target" then call the block passing in an instance of a Target
 class (or similar)

 3. In the block you then specify all the necessary settings you need
 for this particular target.

 You should only call "target" once for each target. So, if you pass in
 "name2" instead of "name" you would create a new target. I haven't
 figured out what should happen if you call "target" twice with the
 same name.

 Also note that this would be sufficient:

 target "name" do
 flags "-l-lz"
 end
So it's a way to _create_ instances. I suspected there could be need to add some extra options to existing. Imagine creating special version of package, IMO it's better when all this extra is packaged at one place not in every block. BTW this doesn't look any better then possible D version: spec = Gem::Specification.new do |s| s.name = 'example' s.version = '1.0' s.summary = 'Example gem specification' ... end In any case there is now instance named spec, right? So user still have to manage some variables...
No, no, no. Have you read my previous messages and the wiki? That above syntax is used by Rubygems, Rake uses a similar and Orbit and Dake will also use a similar syntax but will still be slightly different. The concepts are the same, with calling a method and passing along a block. The syntax used by Orbit doesn't actually need blocks at all because you can only have one package in one orbspec. The syntax will look like this: name "example" version "1.0" summary "Example gem specification"
Very sensible, no arguments.
 Dake will have blocks in the syntax for its config files, this is 
 because multiple targets and tasks are supported within the same file. 
 The syntax will look like this:

 target "<name>" do
     flags "-L-l"
     product "foobar"
     type :executable
 end
Yes, this is the one I'm not entirely happy with, admittedly because it's a Ruby script (and its respective magic). I just hope you can embed the Ruby interpreter inside dake so that the user need not care about having a proper Ruby installation (and its correct version, too).
 In this case, <name> would refer to a single D file or a directory 
 with multiple D files. If you want to have settings for multiple 
 targets then you just put that code outside any of the blocks, at the 
 global scope (or pass a block to a method name "global", haven't 
 decided yet).
Good so far.
 And similar for tasks:

 task "foo" do

 end

 A task is just some code you can run from the command line via the tool:

 dake foo

 As you can see, no variables and no instances for the user to keep 
 track of. Seems that I actually do need to write down a complete 
 specification for these config/spec files.
I'm still looking for a clean way to do version statements... but I'll just have to wait till you have the full spec, I guess.

--
Dmitry Olshansky
Jun 21 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 14:12, Dmitry Olshansky wrote:
 On 21.06.2011 15:53, Jacob Carlborg wrote:
 On 2011-06-21 12:04, Dmitry Olshansky wrote:
 On 21.06.2011 13:07, Jacob Carlborg wrote:
 "target" works like this:

 1. You call "target" passing in the name of the target and a block

 2. "target" then call the block passing in an instance of a Target
 class (or similar)

 3. In the block you then specify all the necessary settings you need
 for this particular target.

 You should only call "target" once for each target. So, if you pass in
 "name2" instead of "name" you would create a new target. I haven't
 figured out what should happen if you call "target" twice with the
 same name.

 Also note that this would be sufficient:

 target "name" do
 flags "-l-lz"
 end
So it's a way to _create_ instances. I suspected there could be need to add some extra options to existing. Imagine creating special version of package, IMO it's better when all this extra is packaged at one place not in every block. BTW this doesn't look any better then possible D version: spec = Gem::Specification.new do |s| s.name = 'example' s.version = '1.0' s.summary = 'Example gem specification' ... end In any case there is now instance named spec, right? So user still have to manage some variables...
No, no, no. Have you read my previous messages and the wiki? That above syntax is used by Rubygems, Rake uses a similar and Orbit and Dake will also use a similar syntax but will still be slightly different. The concepts are the same, with calling a method and passing along a block. The syntax used by Orbit doesn't actually need blocks at all because you can only have one package in one orbspec. The syntax will look like this: name "example" version "1.0" summary "Example gem specification"
Very sensible, no arguments.
 Dake will have blocks in the syntax for its config files, this is
 because multiple targets and tasks are supported within the same file.
 The syntax will look like this:

 target "<name>" do
 flags "-L-l"
 product "foobar"
 type :executable
 end
Yes, this is the one I'm not entirely happy with, admittedly because it's a ruby script (and it's respective magic). I just hope you can embed Ruby interpreter inside dake so that user need not to care about having proper ruby installation (and it's correct version too).
Of course, Ruby will be embedded. I already have this working in Orbit. I'm very careful when choosing the dependencies my applications/tools depend on. That's another reason I'm not happy about switching to D2: it would then depend on libcurl.
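For the curious, embedding the interpreter from D boils down to a handful of calls into the Ruby C API; roughly something like this (a sketch with assumed linkage details, not Orbit's actual code; link against libruby):

// Minimal C-API declarations; VALUE is approximated as size_t here.
extern (C)
{
    void ruby_init();
    void ruby_init_loadpath();
    size_t rb_eval_string(const(char)* script);
    int ruby_cleanup(int ex);
}

// Evaluate an orbspec/dakefile source string in the embedded interpreter.
void evalScript(string source)
{
    import std.string : toStringz;

    ruby_init();
    ruby_init_loadpath();
    rb_eval_string(toStringz(source));
    ruby_cleanup(0);
}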
 In this case, <name> would refer to a single D file or a directory
 with multiple D files. If you want to have settings for multiple
 targets then you just put that code outside any of the blocks, at the
 global scope (or pass a block to a method name "global", haven't
 decided yet).
Good so far.
 And similar for tasks:

 task "foo" do

 end

 A task is just some code you can run from the command line via the tool:

 dake foo

 As you can see, no variables and no instances for the user to keep
 track of. Seems that I actually do need to write down a complete
 specification for these config/spec files.
I'm still looking for a clean version statements... but I'll just have to wait till you have the full spec I guess.
if version.linux || version.osx
end

This allows versions to be used just like bool values.

--
/Jacob Carlborg
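If the config files were plain D instead, something similar could be had by exposing the compiler's version identifiers as booleans; a minimal sketch (an assumption for illustration, not Orbit's design):

struct Versions
{
    version (linux)   enum linux = true;   else enum linux = false;
    version (OSX)     enum osx = true;     else enum osx = false;
    version (Windows) enum windows = true; else enum windows = false;
}

void example()
{
    // Reads much like the snippet above.
    static if (Versions.linux || Versions.osx)
    {
        // e.g. add the "-L-lz" flag here
    }
}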
Jun 21 2011
parent reply KennyTM~ <kennytm gmail.com> writes:
On Jun 21, 11 21:03, Jacob Carlborg wrote:
 On 2011-06-21 14:12, Dmitry Olshansky wrote:
[snip]
 Of course, Ruby will be embedded. I already have this working in Orbit.
 I'm very careful when choosing the dependencies my application/tools
 depends on. Another reason I'm not happy about switching to D2, then it
 would depend on libcurl.
It doesn't. You need libcurl only if you need to use the etc.c.curl interface.
Jun 21 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-21 15:26, KennyTM~ wrote:
 On Jun 21, 11 21:03, Jacob Carlborg wrote:
 On 2011-06-21 14:12, Dmitry Olshansky wrote:
[snip]
 Of course, Ruby will be embedded. I already have this working in Orbit.
 I'm very careful when choosing the dependencies my application/tools
 depends on. Another reason I'm not happy about switching to D2, then it
 would depend on libcurl.
It doesn't. You need libcurl only if you need to use the etc.c.curl interface.
First, I need curl, or similar. Second, as I've said in a previous post I don't want to and will not use a low level socket. -- /Jacob Carlborg
Jun 21 2011
parent Adam D. Ruppe <destructionator gmail.com> writes:
Jacob Carlborg wrote:
 First, I need curl, or similar.
If you like, you're free to use the http implementation from my build2.d <http://arsdnet.net/dcode/build2.d> - look for HTTP Implementation near the bottom. The (commented out) Network Wrapper get() function a little above shows how to use it for basic stuff. Doesn't have https support or anything else fancy like that, but it works.
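For reference, the kind of basic, no-frills HTTP GET being talked about can be done in a few dozen lines directly on top of std.socket; a rough sketch (not the actual build2.d code; no HTTPS, no redirects):

import std.socket : TcpSocket, InternetAddress;
import std.string : indexOf;

// Fetch path from host over plain HTTP/1.0 and return the response body.
string httpGet(string host, string path, ushort port = 80)
{
    auto sock = new TcpSocket(new InternetAddress(host, port));
    scope (exit) sock.close();

    sock.send("GET " ~ path ~ " HTTP/1.0\r\nHost: " ~ host ~ "\r\n\r\n");

    char[4096] buf;
    string response;
    ptrdiff_t got;
    while ((got = sock.receive(buf[])) > 0)
        response ~= buf[0 .. got].idup;

    // Everything after the blank line separating the headers is the body.
    auto idx = indexOf(response, "\r\n\r\n");
    return idx == -1 ? response : response[idx + 4 .. $];
}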
Jun 21 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itpn8m$1c1i$1 digitalmars.com...
 "target" works like this:

 1. You call "target" passing in the name of the target and a block

 2. "target" then call the block passing in an instance of a Target class 
 (or similar)

 3. In the block you then specify all the necessary settings you need for 
 this particular target.

 You should only call "target" once for each target. So, if you pass in 
 "name2" instead of "name" you would create a new target. I haven't figured 
 out what should happen if you call "target" twice with the same name.

 Also note that this would be sufficient:

 target "name" do
     flags "-l-lz"
 end

 In that case you wouldn't even have to care about "t" or that it even 
 exists an instance behind the since. It would just be syntax.

 You can have a look at how Rake and Rubgems do this:

 If you look at the Rake examples: 
 http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would work 
 the same as a Rake task.

 Have a look at the top example of: 
 http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html
FWIW, I've been using Rake heavily on a non-D project for about a year or so, and the more I use it the more I keep wishing I could just use D instead of Ruby. That may have a lot to do with why I'm so interested in seeing Dake use D. Of course, I realize that Dake isn't Rake and isn't going to be exactly the same, but it's still Ruby instead of D, and that's proven to be a real drawback for me.
Jun 21 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-22 06:13, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itpn8m$1c1i$1 digitalmars.com...
 "target" works like this:

 1. You call "target" passing in the name of the target and a block

 2. "target" then call the block passing in an instance of a Target class
 (or similar)

 3. In the block you then specify all the necessary settings you need for
 this particular target.

 You should only call "target" once for each target. So, if you pass in
 "name2" instead of "name" you would create a new target. I haven't figured
 out what should happen if you call "target" twice with the same name.

 Also note that this would be sufficient:

 target "name" do
      flags "-l-lz"
 end

 In that case you wouldn't even have to care about "t" or that it even
 exists an instance behind the since. It would just be syntax.

 You can have a look at how Rake and Rubgems do this:

 If you look at the Rake examples:
 http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would work
 the same as a Rake task.

 Have a look at the top example of:
 http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html
FWIW, I've been using Rake heavily on a non-D project for about a year or so, and the more I use it the more I keep wishing I could just use D instead of of Ruby. That may have a lot to do with why I'm so interested in seeing Dake use D. Of course, I realize that Dake isn't Rake and isn't going to be exactly the same, but it's still Ruby instead of D and that's proven to be
Too bad you feel that way about Ruby; I think it's a great language. Maybe you don't have a choice about using Rake or not, but the reason I see for why anyone would choose Rake is that the rakefiles are in Ruby.

--
/Jacob Carlborg
Jun 22 2011
prev sibling parent Ary Manzana <ary esperanto.org.ar> writes:
On 6/18/11 6:38 PM, Jacob Carlborg wrote:
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com> wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com> wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files
Lol! I was going to write exactly the same answer...
Jun 19 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
You can just pretend it's not Ruby and think of it as a custom format instead :) -- /Jacob Carlborg
Jun 18 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:iti3bf$2r4r$4 digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itgamg$2ggr$4 digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg<doob me.com>   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others ideas (I'll probably do that as 
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?
Why ruby and not D with mixin? I am willing to volunteer some time to this if help is needed. -Jose
As I stated just below "The dakefile and the orbspec file is written in Ruby. Why?". D is too verbose for simple files like these. How would it even work? Wrap everything in a main method, compile and then run?
That would be better than forcing Ruby on people.
You can just pretend it's not Ruby and think of it as a custom format instead :)
That doesn't address the matter of needing to install Ruby. It also throws away this stated benefit: "When the files are written in a complete language you have great flexibility and can take full advantage of [a full-fledged programming language]".
Jun 18 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-18 21:38, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 That doesn't address the matter of needing to install Ruby.
No need for installing, it's embedded in the tool.
 It also throws away this stated benefit: "When the files are written in a
 complete language you have great flexibility and can take full advantage of
 [a full-fledged programming language]".
It's there if you need it, most people won't (I would guess). -- /Jacob Carlborg
Jun 19 2011
prev sibling next sibling parent Ary Manzana <ary esperanto.org.ar> writes:
On 6/17/11 11:15 PM, Jacob Carlborg wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Very nice :-) I hope this wins, hehe...
Jun 18 2011
prev sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
Jun 19 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-19 15:32, Lutger Blijdestijn wrote:
 Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
Oh yeah. I have the Ruby bindings on my computer only. I'll upload the bindings as well. The repository is not actually ready for the public yet. I just created the repository so I could easily access the code on all my computers and now I had use for the wiki as well. -- /Jacob Carlborg
Jun 19 2011
prev sibling parent reply Johannes Pfau <spam example.com> writes:
Lutger Blijdestijn wrote:
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
I personally think that Ruby is a good choice for the config format (Lua, Python, whatever would be fine too), as we definitely need a programming language for advanced use cases (Debian uses makefiles, which are a pita, but together with bash and external tools they still count as a programming language).

It should be noted, though, that replacing the config syntax later on will be difficult: even if it's factored out nicely in the code, we could have thousands of D packages using the old format. In order not to break those, we'd have to deprecate the old format but still leave it available for some time, which leads to more dependencies and problems...

--
Johannes Pfau
Jun 19 2011
next sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Johannes Pfau wrote:

 Lutger Blijdestijn wrote:
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
I personally think that ruby is a good choice for the config format (lua, python, whatever would be fine too), as we definitely need a programming language for advanced use cases (debian uses makefiles, which are a pita, but together with bash and external tools they still count as a programming language) It should be noted though that replacing the config syntax later on will be difficult: even if it's factored out nicely in the code, we could have thousands of d packages using the old format. In order not to break those, we'd have to deprecate the old format, but still leave it available for some time, which leads to more dependencies and problems...
For D programmers that need this kind of advanced functionality it means they have to learn Ruby as well, whereas it's pretty safe to assume they already know D :) Another advantage of D is that build-related scripts and extensions can be distributed easily with Orbit itself.

I'm thinking that maybe it is possible for dakefile.rb and dakefile.d to coexist in the same tool? I'm not sure if that creates problems, or if such extra complexity is worth it. However, those that really want to use D could try to convince Jacob Carlborg that D is a good alternative by implementing it, if he is open to that.
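If such coexistence were attempted, the tool-side dispatch could be as simple as this sketch (hypothetical, not an actual Orbit/Dake feature):

import std.file : exists;
import std.path : buildPath;

enum ConfigKind { dakefileD, dakefileRb, none }

// Prefer a D dakefile when both are present; fall back to the Ruby one.
ConfigKind detectConfig(string dir = ".")
{
    if (exists(buildPath(dir, "dakefile.d")))  return ConfigKind.dakefileD;
    if (exists(buildPath(dir, "dakefile.rb"))) return ConfigKind.dakefileRb;
    return ConfigKind.none;
}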
Jun 19 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-19 19:37, Lutger Blijdestijn wrote:
 Johannes Pfau wrote:

 Lutger Blijdestijn wrote:
 Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
I personally think that ruby is a good choice for the config format (lua, python, whatever would be fine too), as we definitely need a programming language for advanced use cases (debian uses makefiles, which are a pita, but together with bash and external tools they still count as a programming language) It should be noted though that replacing the config syntax later on will be difficult: even if it's factored out nicely in the code, we could have thousands of d packages using the old format. In order not to break those, we'd have to deprecate the old format, but still leave it available for some time, which leads to more dependencies and problems...
For D programmers that need this kind of advanced functionality it means they have to learn ruby as well. Whereas it's pretty safe to assume they already know D :) Another advantage of D is that built related scripts and extensions can be distributed easily with orbit itself.
That's true. It would be possible to write extensions in D even when the config language is Ruby, although it would be more complicated.
 I'm thinking that maybe it is possible for dakefile.rb and dakefile.d to
 coexist in the same tool? I'm not sure if that creates problems, or if such
 extra complexity is worth it.
I don't think it's worth it. It also depends on how much the complexity increases.
 However, those that really want to use D could try to convince Jacob
 Carlborg that D is a good alternative by implementing it, if he is open to
 that.
I'm always open to suggestions. -- /Jacob Carlborg
Jun 19 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-19 19:15, Johannes Pfau wrote:
 Lutger Blijdestijn wrote:
 Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
Instead of complaining about others' ideas (I'll probably do that as well :) ), here's my idea: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. The ideas for the package manager are heavily based on Rubygems.
Looks good, and has a cool name too! I love the reference to the mars / phobos theme. After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it possible to build it already? Or is it too early for that? If I'm not mistaken the dependency on ruby is nicely factored into a very small part of orbit and could easily be replaced if someone would be inclined to do so. I'd prefer this over ruby, but I prefer ruby over the dsss format. In the end, what matters is the value of the tool.
I personally think that ruby is a good choice for the config format (lua, python, whatever would be fine too), as we definitely need a programming language for advanced use cases (debian uses makefiles, which are a pita, but together with bash and external tools they still count as a programming language)
I completely agree. A key feature of why I chose Ruby is that it allows you to call a method without parentheses; I don't know about the other above-mentioned languages.
 It should be noted though that replacing the config syntax later on will
 be difficult: even if it's factored out nicely in the code, we
 could have thousands of d packages using the old format. In order not
 to break those, we'd have to deprecate the old format, but still leave
 it available for some time, which leads to more dependencies and
 problems...
Yes, that would be a big problem. But the advantage we have is that we can still change the language while developing the tool, if necessary; I mean, before we get any more packages than just test packages. -- /Jacob Carlborg
Jun 19 2011
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
One option that I see is to create a compiler plugin interface that lets a 
build tool/package manager hook the compiler's import resolution process.

A very (very) basic implementation: (windows only)
https://github.com/yebblies/dmd/tree/importresolve

For those who don't want to read the source code:
The user (or the build tool, or in sc.ini/dmd.conf) supplies a dll/so on the 
command line with:
dmd -ih=mylib.dll
Which exports a single function "_importhandler" that is called when a file 
is not found on the include path.  It passes the module name and the 
contents of the describing pragma, if any.
eg.
pragma(libver, "collection", "version", "hash")
import xpackage.xmodule;

calls
filename = importhandler("xpackage.xmodule", "collection", "version", 
"hash")

and lets the library download, update etc, and return the full filename of 
the required library. 
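
For illustration only, a rough sketch of what such a handler could look like written in D, assuming the exported function receives C strings and returns a zero-terminated path, and that returning null means "still unresolved"; the exact signature, the cache directory and its layout are made up for the example and are not part of the branch above:

module myhandler;

import std.array : replace;
import std.conv  : to;
import std.file  : exists;
import std.path  : buildPath, expandTilde;

// Keep a reference to the returned string so the GC does not free it
// while the compiler is still using the pointer.
private const(char)[] keepAlive;

extern (C) const(char)* _importhandler(const(char)* moduleName,
                                       const(char)* collection,
                                       const(char)* ver,
                                       const(char)* hash)
{
    // Map xpackage.xmodule -> <cache>/<collection>-<version>/xpackage/xmodule.d
    auto relative = to!string(moduleName).replace(".", "/") ~ ".d";
    auto dir      = to!string(collection) ~ "-" ~ to!string(ver);
    auto path     = buildPath(expandTilde("~/.pkgcache"), dir, relative);

    if (!path.exists)
    {
        // A real handler would download the package here and verify it
        // against `hash` before returning the path.
        return null; // tell the compiler the import is still unresolved
    }

    keepAlive = path ~ '\0';
    return keepAlive.ptr; // the compiler resumes import resolution with this file
}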
Jun 17 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-18 08:09, Daniel Murphy wrote:
 One option that I see is to create a compiler plugin interface that lets a
 build tool/package manager hook the compiler's import resolution process.

 A very (very) basic implementation: (windows only)
 https://github.com/yebblies/dmd/tree/importresolve

 For those who don't want to read the source code:
 The user (or the build tool, or in sc.ini/dmd.conf) supplies a dll/so on the
 command line with:
 dmd -ih=mylib.dll
 Which exports a single function "_importhandler" that is called when a file
 is not found on the include path.  It passes the module name and the
 contents of the describing pragma, if any.
 eg.
 pragma(libver, "collection", "version", "hash")
 import xpackage.xmodule;

 calls
 filename = importhandler("xpackage.xmodule", "collection", "version",
 "hash")

 and lets the library download, update etc, and return the full filename of
 the required library.
That seems cool. But, you would want to write the plugin in D and that's not possible yet on all platforms? Or should everything be done with extern(C), does that work? -- /Jacob Carlborg
Jun 18 2011
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:iti35g$2r4r$2 digitalmars.com...
 That seems cool. But, you would want to write the plugin in D and that's 
 not possible yet on all platforms? Or should everything be done with 
 extern(C), does that work?
Yeah, it won't be possible to do it all in D until we have .so's working on linux etc, which I think is a while off yet. Although this could be worked around by writing a small loader in c++ and using another process (written in D) to do the actual work. Maybe it would be easier to build dmd as a shared lib (or a static lib) and just provide a different front... My point is that the compiler can quite easily be modified to allow it to pass pretty much anything (missing imports, pragma(lib), etc) to a build tool, and it should be fairly straightforward for the build tool to pass things back in (adding objects to the linker etc). This could allow single pass full compilation even when the libraries need to be fetched off the internet. It could also allow separate compilation of several source files at once, without having to re-do parsing+semantic each time. Can dmd currently do this? Most importantly it keeps knowledge about urls and downloading files outside the compiler, where IMO it does not belong.
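
To make the shim-plus-process idea a bit more concrete: the in-process part could be reduced to a few lines that just hand the request to an external resolver and report its answer back, with all download logic living in that separate program. A rough sketch in D, assuming the -ih interface described earlier in the thread, today's std.process.execute, and a made-up helper executable name ("import-resolver"); the same few lines could equally be written in C++ on platforms where D shared libraries are not usable yet:

module shimhandler;

import std.conv    : to;
import std.process : execute;
import std.string  : strip;

private const(char)[] keepAlive; // keep the returned path alive for the caller

extern (C) const(char)* _importhandler(const(char)* moduleName,
                                       const(char)* collection,
                                       const(char)* ver,
                                       const(char)* hash)
{
    // Delegate the real work (download, verify, cache) to a separate process,
    // so only a file path ever crosses back into the compiler's address space.
    auto result = execute(["import-resolver",
                           to!string(moduleName), to!string(collection),
                           to!string(ver), to!string(hash)]);
    if (result.status != 0)
        return null; // the resolver could not provide the module

    keepAlive = result.output.strip() ~ '\0';
    return keepAlive.ptr;
}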
Jun 18 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 18 Jun 2011 23:33:29 -0400, Daniel Murphy  
<yebblies nospamgmail.com> wrote:

 "Jacob Carlborg" <doob me.com> wrote in message
 news:iti35g$2r4r$2 digitalmars.com...
 That seems cool. But, you would want to write the plugin in D and that's
 not possible yet on all platforms? Or should everything be done with
 extern(C), does that work?
Yeah, it won't be possible to do it all in D until we have .so's working on linux etc, which I think is a while off yet. Although this could be worked around by writing a small loader in c++ and using another process (written in D) to do the actual work. Maybe it would be easier to build dmd as a shared lib (or a static lib) and just provide a different front... My point is that the compiler can quite easily be modified to allow it to pass pretty much anything (missing imports, pragma(lib), etc) to a build tool, and it should be fairly straightforward for the build tool to pass things back in (adding objects to the linker etc). This could allow single pass full compilation even when the libraries need to be fetched off the internet. It could also allow separate compilation of several source files at once, without having to re-do parsing+semantic each time. Can dmd currently do this? Most importantly it keeps knowledge about urls and downloading files outside the compiler, where IMO it does not belong.
Note the current proposal does exactly what you are looking for, but does it via processes and command line instead of dlls. This opens up numerous avenues of implementation (including re-using already existing utilities), plus keeps it actually separated (i.e. a dll/so can easily corrupt the memory of the application, whereas a separate process cannot). -Steve
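
As a sketch of how the process-and-command-line route might look on the tool side, here is a hypothetical stand-alone resolver that is handed a module name and a base URL, fetches the file if it is not cached yet, and prints the local path for the caller to read from stdout. The argument layout, cache directory and use of std.net.curl are assumptions for illustration, not the exact protocol from DIP11:

module importresolver;

import std.array    : replace;
import std.file     : exists, mkdirRecurse;
import std.net.curl : download;
import std.path     : buildPath, dirName, expandTilde;
import std.stdio    : stderr, writeln;

int main(string[] args)
{
    if (args.length < 3)
    {
        stderr.writeln("usage: import-resolver <module> <base-url>");
        return 1;
    }

    auto moduleName = args[1];                      // e.g. xpackage.xmodule
    auto baseUrl    = args[2];                      // e.g. http://example.org/src
    auto relative   = moduleName.replace(".", "/") ~ ".d";
    auto cached     = buildPath(expandTilde("~/.import-cache"), relative);

    if (!cached.exists)
    {
        mkdirRecurse(cached.dirName);               // create the cache directory
        download(baseUrl ~ "/" ~ relative, cached); // fetch and cache once
    }

    writeln(cached);                                // caller reads this path from stdout
    return 0;
}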
Jun 20 2011