
digitalmars.D - The DUB package manager

reply Sönke Ludwig <sludwig outerproduct.org> writes:
With the recent talk about Orbit, I thought it is time to also announce
the package manager that we have been working out based on the simple
VPM system that has always been in vibe.d. I don't really like stepping
into competition with Jacob here (*), but the approach is different
enough that I think it should be put on the table.

Some may already have noticed it as it's mentioned already on the vibe.d
website and is currently hosted on the same domain as the old VPM registry:

http://registry.vibed.org/


DUB has two important development goals:

 - Simplicity:

   Making a DUB package, as well as using one as a dependency, should be
   as simple as possible to facilitate broad usage, especially among
   language newcomers. Procedural build scripts often scare people away,
   even though their added complexity hardly matters for bigger projects.
   I think they should be left as an option rather than the default.

   Turning a library/application into a DUB package can be as simple as
   adding a package.json file with the following content (mysql-native
   is automatically made available during the build in this example):

   {
        "name": "my-library",
        "dependencies": {"mysql-native": ">=0.0.7"}
   }

   If the project is hosted on GitHub, it can be directly registered on
   the registry site and is then available for anyone to use as a
   dependency. Alternatively, it is also possible to use a local
   directory as the source for a particular package (e.g. for closed
   source projects or when working on both the main project and the
   dependency at the same time).
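   For illustration, a dependency on such a local directory might be
   declared with a path instead of a version range, roughly like this
   (the package names are made up, and the exact syntax depends on the
   DUB version in use):

   ```json
   {
        "name": "my-app",
        "dependencies": {
             "my-library": {"path": "../my-library"}
        }
   }
   ```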

 - Full IDE support:

   Rather than focusing on performing the build by itself or tying a
   package to a particular build tool, DUB translates a general build
   recipe to any supported project format (it can also build by itself).
   Right now VisualD and MonoD are supported as targets, and rdmd is
   used for simple command line builds. The IDE support in particular is
   really important, so as not to lock out people who prefer working in
   an IDE.


Apart from that we have tried to be as flexible as possible regarding
the way people can organize their projects (although by default it
assumes source code to be in "source/" and string imports in "views/",
if those folders exist).

There are still a number of missing features, but apart from those it is
fully usable and tested on Windows, Linux, and Mac OS.


GitHub repository:
https://github.com/rejectedsoftware/dub
https://github.com/rejectedsoftware/dub-registry

Preliminary package format documentation:
http://registry.vibed.org/package-format


(*) Originally I looked into using Orbit as the package manager for
vibe.d packages, but it just wasn't far enough at the time and had some
traits that I wasn't fully comfortable with.
Feb 16 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-16 18:10, Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also announce
 the package manager that we have been working out based on the simple
 VPM system that has always been in vibe.d. I don't really like stepping
 into competition with Jacob here (*), but the approach is different
 enough that I think it should be put on the table.

 Some may already have noticed it as it's mentioned already on the vibe.d
 website and is currently hosted on the same domain as the old VPM registry:

 http://registry.vibed.org/


 DUB has two important development goals:

   - Simplicity:

     Making a DUB package, as well as using one as a dependency, should
     be as simple as possible to facilitate broad usage, especially
     among language newcomers. Procedural build scripts often scare
     people away, even though their added complexity hardly matters for
     bigger projects. I think they should be left as an option rather
     than the default.

     Turning a library/application into a DUB package can be as simple as
     adding a package.json file with the following content (mysql-native
     is automatically made available during the build in this example):

     {
          "name": "my-library",
          "dependencies": {"mysql-native": ">=0.0.7"}
     }

Using a full-blown language can look pretty declarative as well:

name = "my-library";
dependencies = ["mysql-native": ">=0.0.7"];

I don't see why something like the above would scare away people. It's even less code than the JSON above.
     If the project is hosted on GitHub, it can be directly registered on
     the registry site and is then available for anyone to use as a
     dependency. Alternatively, it is also possible to use a local
     directory as the source for a particular package (e.g. for closed
     source projects or when working on both the main project and the
     dependency at the same time).

   - Full IDE support:

     Rather than focusing on performing the build by itself or tying a
     package to a particular build tool, DUB translates a general build
     recipe to any supported project format (it can also build by
     itself). Right now VisualD and MonoD are supported as targets, and
     rdmd is used for simple command line builds. The IDE support in
     particular is really important, so as not to lock out people who
     prefer working in an IDE.

I think it looks like you're tying the user to a particular build tool. I don't think it's the business of the package manager to build software. That's the job of a build tool. The package manager should just invoke the build tool. You just need to support a few build tools, like rdmd and make, and then also support shell scripts.

Things like setting compiler flags really do not belong in a package manager.

I have not looked at the source code for DUB at all. In general, how is it organized? Can it be used as a library to easily build, say, a GUI, or to directly integrate into an IDE?

-- 
/Jacob Carlborg
Feb 16 2013
next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.02.2013 19:10, schrieb Jacob Carlborg:
 
 I think it looks like you're tying the user to a particular build tool.
 I don't think it's the business of the package manager to build
 software. That's the job of a build tool. The package manager should
 just invoke the build tool. You just need to support a few build tools,
 like rdmd, make and then also support shell script.

Not at all. The idea is to have a number of "build script" generators, so everyone can choose whatever fits best; otherwise a generic rdmd/dmd based build works out of the box with no need to install an additional tool. Invoking a generic external tool is easy enough to add and already planned, so this should not be a limiting factor in any way.
 
 Things like setting compiler flags does really not belong in a package
 manger.

What makes you think so? Just because of your definition of "package manager", or for a concrete reason? There are things like setting up import paths to dependent projects that only the package manager can do, because it knows their locations (and it can be desirable for many reasons to not install dependencies into a predefined place). Bringing package management and the build recipe together is convenient and natural here.

You could also say that it is a meta-build tool (along the lines of CMake) with package support, if you want.
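The import-path argument can be made concrete: since only the package manager knows where each dependency ended up, it is the natural place to emit the corresponding compiler flags. A minimal sketch in D (the function name and the "source/" layout assumption are mine, not DUB's actual API):

```d
import std.algorithm : map;
import std.array : join;

// Hypothetical helper: given the locations the package manager has
// resolved for each dependency, produce the -I flags that a compiler
// invocation (or a generated IDE project) would need.
string importFlags(string[string] depPaths)
{
    return depPaths.byValue
        .map!(p => "-I" ~ p ~ "/source")
        .join(" ");
}

void main()
{
    auto deps = ["mysql-native": "/tmp/pkgs/mysql-native"];
    assert(importFlags(deps) == "-I/tmp/pkgs/mysql-native/source");
}
```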
 
 I have not looked at the source code for DUB at all. In general how is
 it organized. Can it be used as a library to easily build, say a GUI or
 directly integrate into an IDE?
 

There should be no problem with that. The API will need to be cleaned up a bit, but generally it's a shallow "app.d" file that does command line processing and a number of modules that do the actual work (in a package "dub").
Feb 16 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-16 19:39, Sönke Ludwig wrote:

 Not at all, the idea is to have a number of "build script" generators so
 everyone can choose whatever fits best, otherwise a generic rdmd/dmd
 based build works out of the box with no need to install an additional
 tool. Invoking a generic external tool is easy enough to add and already
 planned, so this should not be a limiting factor in any way.

Ok, I see. But it feels wrong to me that it should generate a build script. I think the package should already contain a build script.
 What makes you think so? Just because of your definition of "package
 manager" or for a concrete reason? There are things like setting up
 import paths to dependent projects that only the package manager can do,
 because it knows their locations (and it can be desirable for many
 reasons to not install dependencies into a predefined place). Bringing
 package management and build recipe together is convenient and natural
 here.

I'm thinking that there should be a build tool that drives everything. The build script contains package dependencies, and the build tool asks the package manager for the linker/import paths and libraries of the dependencies. But I guess that's basically the same as how DUB works, except that the package manager drives everything instead of the build tool.
 You could also say that it is a meta-build tool (along the lines of
 cmake) with package support if you want.

 There should be no problem with that. The API will need to be cleaned up
 a bit, but generally it's a shallow "app.d" file that does command line
 processing and a number of modules that do the actual work (in a package
 "dub").

Good, that's how it's supposed to be organized.

-- 
/Jacob Carlborg
Feb 16 2013
next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.02.2013 21:02, schrieb Johannes Pfau:
 This way there are no second class build systems.

I actually think it is indeed _good_ to have a first class build system, for exactly the reason that H. S. Teoh gave. If other build systems really are on the same level as the standard one, it poses the risk of fragmentation among different packages, and users would possibly have to install a number of different build tools to build all dependencies.

That said, providing an interface to the dependency information to better support other build tools definitely is a good thing, no matter which way is taken. My idea for the things you mentioned (swig, c, etc.) was to have a set of hooks that can be used to run external tools (invoked before build/project file generation, before build, or after build). That together with your proposed interface should provide all the necessary flexibility while putting an emphasis on a standard way to describe the build process.
Feb 16 2013
next sibling parent Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.02.2013 23:49, schrieb Nick Sabalausky:
 On Sat, 16 Feb 2013 22:21:55 +0100
 Sönke Ludwig <sludwig outerproduct.org> wrote:
 My idea for the things you mentioned (swig, c, etc.) was to have a set
 of hooks that can be used to run external tools (invoked before
 build/project file generation, before build or after build). That
 together with your proposed interface should provide all the necessary
 flexibility while putting an emphasis on a standard way to describe
 the build process.

I like a lot of what you've said, but my concern about this part is, does it support things like:

- Dependencies between custom build steps (so that custom build steps aren't needlessly re-run when they're not out-of-date, and can possibly be parallelized)
- Multiple custom build targets
- Multiple custom build configurations

I think those are essential for a real general-purpose build tool.

I'd hope that all of this (except multiple custom build commands) can be pushed off to the external build tool. So in essence those build steps would be just dumb command lists that are always executed. It would basically need to be that way if the target builder doesn't support something like this anyway (e.g. the pre/post build command in a VisualD or Mono-D project). If this should turn out not to be enough, it could always be extended, but I think most of it should be easy to handle using "make" or something similar.
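Such a dumb command list is indeed trivial to implement. A sketch in D using std.process (a hypothetical helper, not DUB code): run each command in order, stop at the first failure, no dependency tracking:

```d
import std.exception : enforce;
import std.format : format;
import std.process : executeShell;

// Run a package's pre/post build commands as a plain list of shell
// commands, aborting on the first non-zero exit status. This is
// exactly the "always executed" semantics discussed above.
void runCommands(string[] commands)
{
    foreach (cmd; commands)
    {
        auto result = executeShell(cmd);
        enforce(result.status == 0,
            format("command failed with status %s: %s", result.status, cmd));
    }
}

void main()
{
    runCommands(["echo generating sources"]); // succeeds; output is discarded
}
```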
Feb 16 2013
prev sibling next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
 Another issue: I understand why you are using json but it is
 not the best suited format IMHO. D put some restriction on
 module names, thus the format can be simplified.
 
 (...)
 
 Exporting to other formats will probably needed anyway.
 
 Peter

I agree about the non-optimal syntax, but it comes down to nicer syntax (where JSON at least is not _too_ bad) vs. a standard format (familiarity, data exchange, proven design and implementation). And while data exchange could be worked around with export/import functions, considering that these files are usually just 4 to 20 lines long, I personally would put less weight on the syntax issue than on the others.

BTW, I think YAML, as a superset of JSON, is also a good contender with nice syntax features, but it is also much more complex.
Feb 17 2013
next sibling parent Sönke Ludwig <sludwig outerproduct.org> writes:
Am 17.02.2013 11:21, schrieb Peter Sommerfeld:
 Am 17.02.2013, 10:24 Uhr, schrieb Russel Winder:
 On Sun, 2013-02-17 at 09:12 +0100, Sönke Ludwig wrote:

 I agree about the non-optimal syntax, but it comes down to nicer syntax
 (where JSON at least is not _too_ bad) vs. standard format (familiarity,
 data exchange, proven design and implementation). And while data
 [...]

Peter's point is more important than the above response indicates.

I think so too. And I don't believe that it will stay at a few lines only, because the demands will increase over time. Think about including a C/C++ compiler, a project manager, etc.

If I were to vote for anything, it would be to never support C/C++ or any other foreign language builds. There are enough tools for that, and I don't see enough value for the huge complexity that this route would add (just consider autotools or CMake).

Not sure what you mean by project manager, but right now I see no reason why it shouldn't be possible to keep the package/build description very compact. If something gets complex anyway, maybe it should be broken up into multiple packages (e.g. one with the C/C++ stuff + bindings and one using the C++ stuff).
 
 With a syntax as I suggested or something similar everything is mapped
 direct onto an AA and is easily scanned by the build system or other
 tools.
 
 Regarding json: It is designed for data exchange between different
 sources, not to be edited by people. It is too annoying and error-
 prone to wrap each string into quotes. And people have to edit the
 configuration file. Better to start in a well suited format.

This is just wrong. JSON is just JavaScript, which is meant for human editing just as D or a custom DSL is. There definitely are nicer syntaxes, but this is putting it out of proportion, IMO. I'm surely not against using a different format, but all arguments must be weighed against each other here.
Feb 17 2013
prev sibling next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 17.02.2013 10:24, schrieb Russel Winder:
 On Sun, 2013-02-17 at 09:12 +0100, Sönke Ludwig wrote:
 
 I agree about the non-optimal syntax, but it comes down to nicer syntax
 (where JSON at least is not _too_ bad) vs. standard format (familiarity,
 data exchange, proven design and implementation). And while data
 exchange could be worked around with export/import functions, considering
 that these files are usually just 4 to 20 lines long, I personally would
 put less weight on the syntax issue than on the others.

 BTW, I think YAML as a superset of JSON is also a good contender with
 nice syntax features, but also much more complex.

Peter's point is more important than the above response indicates.

Using JSON (YAML, whatever) is a recipe for failure. The syntax is data-exchange facing, whereas writing and maintaining a build script (even if it is only 4 lines long) is a human activity. Do not underestimate the reaction "why do I have to write this data exchange format?" The rapid demise of XML, Ant, Maven, etc. is testament to people awakening to the fact that they were forced to manually use a language designed for another purpose.

I don't agree with this, as already said in my response to Peter. We are talking about rather small syntactic differences here (i.e. ',' vs. ';' and omitting quotes), and XML adds a lot more overhead than the quotes around key names. Ant and the like also have the awkward approach of not only using a verbose format such as XML, but also trying to express procedural build scripts in that language, which is insane. But my idea for dub is to completely avoid the need for procedural build steps, apart from invoking an external tool (which should be needed only for a minority of D projects).
 
 Where Gradle, and before it Gant, Buildr, etc., got it right was to use
 an internal DSL mentality to ask what is a written form that has
 affordance for the user that we can interpret in some way to deliver for
 computation the data needed. Groovy provides the infrastructure for
 Gradle, but people write Gradle scripts not Groovy. The build/packaging
 system does the hard work, not the user.
 
 The lesson from SBT is that by judicious cleverness, a trivially simple
 internal DSL can be derived that does all the work using Scala. The only
 downside to SBT scripts is the extra newline needed everywhere. Surely D
 is better than Scala for this?

If by internal DSL, you mean compiled as D source code directly or indirectly, please also consider the server side issues (i.e. danger of DoS and hijacking the system).
 
 So if D is to contribute to build and package management, it should be
 looking to replace Make, CMake, SCons, Waf, Gradle. It should be able to
 build D systems yes, and manage D packages, but also look to build C, C
 ++, Fortran, and interlink all of them. Clearly starting small is good:
 a successful large system always evolves from a successful small system;
 designed large systems always fail. (cf. Gall, Systemantics, 1975)

I think building foreign languages is better left to specialized tools. It should be possible to invoke those automatically, but adding their functionality directly to a D build system would blow up the complexity dramatically for an uncertain return of value (what would be the problem with invoking make to build a C dependency?).

My question of whether we can get by without procedural build scripts is still open. If we could, it would give great benefits in simplicity of the system and its usage. This may require a change in conventions for some projects, but I think that can be worth it. Maybe it would be good to have some concrete examples of complex build scenarios to better judge what is possible and what may be problematic.
Feb 17 2013
parent Sönke Ludwig <sludwig outerproduct.org> writes:
Am 23.02.2013 10:24, schrieb SomeDude:
 On Sunday, 17 February 2013 at 11:20:55 UTC, Sönke Ludwig wrote:
 
 My question if we can get by without procedural build scripts is still
 open. If we could, it would give great benefits in simplicity of the
 system and its usage. This may require a change in conventions for some
 projects, but I think that can be worth it.

 Maybe it would be good to have some concrete examples of complex build
 scenarios to better judge what is possible and what may be problematic.

I think embedding procedural scripts is always a plus. Given that you already have JSON, I would vote for *cough* JavaScript (V8 engine).

If anything, DMDScript, of course ;) (WRT procedural, I still think that non-procedural might actually be better in this case, see prev. reply)
Feb 23 2013
prev sibling next sibling parent Sönke Ludwig <sludwig outerproduct.org> writes:
Am 17.02.2013 12:47, schrieb Nick Sabalausky:
 I think SDL (Simple Declarative Language) hits the sweet spot:
 http://sdl.ikayzo.org/display/SDL/Language+Guide
 
 FWIW, I just resumed working on a D parser for SDL. Even if SDL
 doesn't get used for DUB or Orbit or dmd.conf, I think it'll still be a
 great thing to have available. It's such a simple grammar, I don't
 think it'll take long to reach a usable point.
 

I have to say that it looks almost exactly like the DSLs I've been using for all of my internal stuff, so I really like it (although I've never heard of it before). It would require a bit of a rework because it doesn't map to JSON 1-to-1, though.
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 10:24, Russel Winder wrote:
 On Sun, 2013-02-17 at 09:12 +0100, Sönke Ludwig wrote:

 I agree about the non-optimal syntax, but it comes down to nicer syntax
 (where JSON at least is not _too_ bad) vs. standard format (familiarity,
 data exchange, proven design and implementation). And while data
 exchange could be worked around with export/import functions, considering
 that these files are usually just 4 to 20 lines long, I personally would
 put less weight on the syntax issue than on the others.

 BTW, I think YAML as a superset of JSON is also a good contender with
 nice syntax features, but also much more complex.

 Peter's point is more important than the above response indicates.

 Using JSON (YAML, whatever) is a recipe for failure. The syntax is
 data exchange facing, whereas writing and maintaining a build script
 (even if it is only 4 lines long) is a human activity. Do not
 underestimate the reaction "why do I have to write this data exchange
 format?" The rapid demise of XML, Ant, Maven, etc. is testament to
 people awakening to the fact that they were forced to use manually a
 language designed for another purpose.

 Where Gradle, and before it Gant, Buildr, etc., got it right was to use
 an internal DSL mentality to ask what is a written form that has
 affordance for the user that we can interpret in some way to deliver for
 computation the data needed. Groovy provides the infrastructure for
 Gradle, but people write Gradle scripts not Groovy. The build/packaging
 system does the hard work, not the user.

 The lesson from SBT is that by judicious cleverness, a trivially simple
 internal DSL can be derived that does all the work using Scala. The only
 downside to SBT scripts is the extra newline needed everywhere. Surely D
 is better than Scala for this?

Unfortunately, I don't think D is better than Scala here. I would say that you can do everything, or most of it, that Scala/Groovy can, but not with as nice a syntax as Scala/Groovy.
 So if D is to contribute to build and package management, it should be
 looking to replace Make, CMake, SCons, Waf, Gradle. It should be able to
 build D systems yes, and manage D packages, but also look to build C, C
 ++, Fortran, and interlink all of them. Clearly starting small is good:
 a successful large system always evolves from a successful small system;
 designed large systems always fail. (cf. Gall, Systemantics, 1975)

I don't think it necessarily needs to be able to build non-D projects.

I'm looking at Gradle now. The first example they show is this:

task hello {
    doLast {
        println 'Hello world!'
    }
}

That translated to D would probably look something like this:

task("hello", {
    doLast({
        writeln("Hello World");
    });
});

Which in my opinion is a lot uglier than the original Groovy code. A couple of problems with D:

* Semicolons
* Parentheses when making function calls
* No block syntax. Not possible to pass a delegate after the regular arguments
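For what it's worth, the D translation above actually compiles as-is once suitable helpers exist, since `{ ... }` is an anonymous delegate literal in D. A self-contained sketch (`task` and `doLast` are made-up helpers for illustration, not an existing API):

```d
import std.stdio : writeln;

// Hypothetical DSL helpers: "doLast" just runs its delegate,
// "task" announces the task name and then runs its body.
void doLast(void delegate() action)
{
    action();
}

void task(string name, void delegate() work)
{
    writeln("running task: ", name);
    work();
}

void main()
{
    // The delegate-literal syntax discussed in the post:
    task("hello", {
        doLast({
            writeln("Hello World");
        });
    });
}
```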
 If all the energy around DUB and Orbit leads to the situation where D
 cannot be used to create an internal DSL for describing build and
 package/artefact management, then D has failed.

As I've said, I will change Orbit to use D for the DSL.

-- 
/Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 16:14, Johannes Pfau wrote:
 Am Sun, 17 Feb 2013 00:20:48 -0800
 schrieb Jonathan M Davis <jmdavisProg gmx.com>:

 On Sunday, February 17, 2013 09:12:00 Sönke Ludwig wrote:
 BTW, I think YAML as a superset of JSON is also a good contender
 with nice syntax features, but also much more complex.

It's also whitespace-sensitive, which is downright evil IMHO. I'd take JSON over YAML any day. - Jonathan M Davis

Are you sure? YAML 1.1 required whitespace after comma and in some more cases, but YAML 1.2 dropped that to be 100% compatible with JSON. If you write JSON you have valid YAML and you can write YAML that is valid JSON. http://en.wikipedia.org/wiki/YAML#JSON http://en.wikipedia.org/wiki/YAML#cite_note-9

Yes, this is valid YAML:

point:
  x: 1
  y: 2

-- 
/Jacob Carlborg
Feb 17 2013
prev sibling parent Martin Nowak <code dawg.eu> writes:
On 02/17/2013 12:47 PM, Nick Sabalausky wrote:
 Then on the flipside, we have the example of INI files: Definitely a
 purely data-language, definitely not an embedded DSL, and yet that's
 never been a hindrance for it: it's been a lasting success for many
 things. And the only time anyone complains about it is when more power
 is needed. (And no, I'm not suggesting DUB or Orbit use INI files.
 Again, specifically because more power is needed here.)

I also went with INI files for an even simpler build/package tool I wrote a year ago. https://github.com/dawgfoto/dpk
Feb 18 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-16 23:19, Peter Sommerfeld wrote:

 Another issue: I understand why you are using json but it is
 not the best suited format IMHO. D put some restriction on
 module names, thus the format can be simplified. Compare:

Inventing a new format is pointless. If you want it less verbose, YAML is an alternative.

-- 
/Jacob Carlborg
Feb 17 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 14:40, Nick Sabalausky wrote:

 Plus, some of it's syntax is rather non-intuitive:

 !!map {
    ? !!str "---"
    : !!str "foo",
    ? !!str "...",
    : !!str "bar"
 }

 WTF?

I have used YAML quite a lot, but I have never seen that. I think the below is very intuitive:

point:
  x: 1
  y: 2

-- 
/Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.02.2013 23:58, schrieb Rob T:
 
 I'm having good success using D itself as the build tool language, and
 I'm at the point now where I'm getting much better results than what I
 was getting out of using external build tools, so for me there's no
 looking back.
 
 I run rdmd on a .d "script" that I wrote that's setup to do exactly what
 I want.

I did that too, for automated and Linux builds. But this means that if you also want to work with an IDE, for debugging and such things, you have to keep additional project files in sync. The DUB approach just combines the two (+ the package management) and also has less overhead than doing it in plain D (main function, imports, possibly distributing helper modules, etc.).
 
 What is missing from D is a good library of functions that are generally
 useful for writing build scripts. Phobos already supplies a great deal
 of what is needed that can be used to construct what is missing.
 
 The benefit of using D is that I do not have to install and learn a
 secondary language or conform to a specific build format, or follow any
 weird restrictions. What I want to do is build my D code, not learn
 something new and perform acrobatics to get the job done. I can even use
 the same D process to build C/C++ code if I wanted to expand on it.
 

But you will have to learn the API of the build script helpers, too. I'm not sure if this is actually less to learn than the JSON alternative.
 What I'm saying here is that I see no reason to use a language other
 than D itself as the build tool. What D can use is an addition to Phobos
 that supplies the necessary generalized functions that all build tools
 should supply, and I don't think there's all that much that's missing.
 
 For a package manager, some standards may be required, but it too can be
 done completely with D.

Well, DUB is fully written in D, so the question is less about D or not, but more about whether an additional intermediate layer for the package description makes sense.
 
 Why use json (which is a subset of javascript), or ruby, or python, etc?
 Is there something fundamentally wrong with D that makes it unsuitable
 for this role?
 

I see the following points for the decision of data/JSON vs. D:

 - Meta information needs to be available to the package registry and
   for managing dependencies in general. Executing a script/program
   would imply a big performance risk and, worse, is a high security
   risk in the case of the registry. So a data-driven approach is
   needed at least for the meta data anyway.

 - JSON is a nice standard format with generally low risk of errors. I
   agree that it is not the prettiest language in the world (although
   IMHO much better than something XML based). Generally the same
   recent discussion on the beta list for the dmd.conf format applies
   here; I'm not sure what is the best approach, but JSON at least is
   not too bad.

To me the most interesting open question is this: Do we actually gain from programmatic support for the build description, or does it suffice to have a good purely descriptive system? If the former should be true for more than 1% of the cases, that would definitely be a good argument against pure data.
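The data-driven point can be illustrated with std.json: a few-line package description is machine-readable without executing any user code, which is exactly what the registry needs. A sketch (not DUB's actual implementation):

```d
import std.json : parseJSON;

void main()
{
    // The same four-line description from the original post,
    // parsed as plain data -- no script is ever run.
    auto meta = parseJSON(`{
        "name": "my-library",
        "dependencies": {"mysql-native": ">=0.0.7"}
    }`);

    assert(meta["name"].str == "my-library");
    assert(meta["dependencies"]["mysql-native"].str == ">=0.0.7");
}
```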
Feb 17 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-17 09:02, Sönke Ludwig wrote:

 But you will have to learn the API of the build script helpers, too. I'm
 not sure if this is actually less to learn than the JSON alternative.

You still have to learn this: http://registry.vibed.org/package-format
   - Meta information needs to be available to the package registry and
 for managing dependencies in general. Executing a script/program would
 imply a big performance risk, and worse, is a high security risk in case
 of the registry. So a data-driven approach is needed at least for the
 meta data anyway.

You can just serialize the D data structure to XML/JSON to make it safe for the registry.

-- 
/Jacob Carlborg
Feb 17 2013
parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 17.02.2013 13:47, schrieb Jacob Carlborg:
 On 2013-02-17 09:02, Sönke Ludwig wrote:
 
 But you will have to learn the API of the build script helpers, too. I'm
 not sure if this is actually less to learn than the JSON alternative.

You still have to learn this: http://registry.vibed.org/package-format

Yes, I meant that with the JSON alternative. JSON itself is so simple and widely known, that I think it can be neglected compared to the possible fields.
   - Meta information needs to be available to the package registry and
 for managing dependencies in general. Executing a script/program would
 imply a big performance risk, and worse, is a high security risk in case
 of the registry. So a data-driven approach is needed at least for the
 meta data anyway.

You can just serialize the D data structure to XML/JSON to make it safe for the registry.

But that would need to happen as a separate step and then there would be two redundant files in the repository, with the usual danger of inconsistencies between the two. Since a build script may behave differently on different systems, it could also happen that the contents cannot really be described as JSON/XML. For example someone might get the idea to search the system for some library and only add a corresponding dependency if it is found. There would be no way for the registry to represent that.
Feb 17 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-17 13:58, Sönke Ludwig wrote:

 But that would need to happen as a separate step and then there would be
 two redundant files in the repository, with the usual danger of
 inconsistencies between the two.

The tool generates the JSON/YAML/XML from the D script. The user never has to see the JSON file.
 Since a build script may behave differently on different systems, it
 could also happen that the contents cannot really be described as
 JSON/XML. For example someone might get the idea to search the system
 for some library and only add a corresponding dependency if it is found.
 There would be no way for the registry to represent that.

Sure, it's always possible to do stupid things. -- /Jacob Carlborg
Feb 17 2013
parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
On 17.02.2013 15:04, Jacob Carlborg wrote:
 On 2013-02-17 13:58, Sönke Ludwig wrote:
 
 But that would need to happen as a separate step and then there would be
 two redundant files in the repository, with the usual danger of
 inconsistencies between the two.

The tool generates the JSON/YAML/XML from the D script. The user never has to see the JSON file.

But the registry gets its information about a package directly from the github repository (this is quite a central idea), so it must also be committed there.
 Since a build script may behave differently on different systems, it
 could also happen that the contents cannot really be described as
 JSON/XML. For example someone might get the idea to search the system
 for some library and only add a corresponding dependency if it is found.
 There would be no way for the registry to represent that.

Sure, it's always possible to do stupid things.

But at least not this particular thing with the data-driven approach ;)
Feb 17 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 15:32, Sönke Ludwig wrote:

 But the registry gets its information about a package directly from the
 github repository (this is quite a central idea), so it must also be
 committed there.

Aha, that would be a problem. With Orbit you build a package out of an orbspec (the DSL) and the package will contain the meta info. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Sönke Ludwig <sludwig outerproduct.org> writes:
On 23.02.2013 10:20, SomeDude wrote:
 On Sunday, 17 February 2013 at 08:02:41 UTC, Sönke Ludwig wrote:
 To me the most interesting open question is this: Do we actually gain
 from programmatic support for the build description, or does it suffice
 to have a good purely descriptive system? If the former should be true
 for more than 1% of the cases, that would definitely be a good argument
 against pure data.

Well, in the Java world, there is ant. It does the trick, but it's quite ugly.

And it also does the really strange thing to actually build up a procedural build description in a declarative (+ugly) language. That's definitely not what I have in mind - what I want is really a pure /description/ of the system, from which the build steps (or anything else) can be inferred.

This has the big advantage that you can decouple the two concepts and do generic things with all packages, even if the package maintainer didn't explicitly write supporting code (e.g. generate documentation, generate a project file, generate an installer, build as a DLL etc.).

The question is if we need procedural statements to create that package description or not. So far I haven't really seen a compelling example that we do. And staying with a declarative approach has the advantage that the language is _not_ more powerful than the thing that it is supposed to model, i.e. no information is lost.

An example where it is easy to mess up with a procedural language and not have a real package description anymore is if the user uses the scripting language to make decisions based on outside conditions (OS, architecture, installed libraries, time of day, whatever). The result is a description that is not really a description of the package anymore, but of the package+environment, which makes it unpredictable and thus impossible to safely make decisions about other environments (build servers, cross compilation, platform independent list of dependencies etc. pp.).
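To make the contrast concrete, here is a purely hypothetical sketch of how such an environment-dependent dependency could instead be declared, so the registry still sees the full picture. The platform-suffixed field name and the "winreg" package are invented for illustration and are not actual DUB syntax:

```json
{
    "name": "my-library",
    "dependencies": {
        "mysql-native": ">=0.0.7"
    },
    "dependencies-windows": {
        "winreg": ">=0.1.0"
    }
}
```

The condition (target OS) becomes part of the data, so a registry or build server can evaluate it for any environment instead of having to execute a script.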
Feb 23 2013
prev sibling next sibling parent David Gileadi <gileadis NSPMgmail.com> writes:
On 2/23/13 7:02 AM, deadalnix wrote:
 On Saturday, 23 February 2013 at 11:21:06 UTC, Russel Winder wrote:
 Gradle makes no pretence as being either declarative or iterative, but
 embraces both. As much of a specification is as declarative as possible,
 but where imperative is needed it is available as Gradle specifications
 are Groovy scripts with the Gradle internal DSL.

Do you have some link I can read about this ? This sounds like a very nice project !

I love Gradle! Official site at http://www.gradle.org, with very good docs including getting started tutorials. In practice I've found it to be concise and readable compared to Ant/Maven, and (almost ridiculously) easily extended when necessary. IMO if you're doing Java builds it's the hands-down winner.
Feb 23 2013
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
23-Feb-2013 21:17, H. S. Teoh wrote:
 On Sat, Feb 23, 2013 at 05:57:23PM +0100, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky
 wrote:
 (...)
 Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started at a college which teaches Java. Even after using it for only a couple of thousand lines of code, I understand the hatred... Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!

That was my reaction too, when I first started learning Java. And that is still my reaction today.

Aye. To my chagrin I have had to work with Java quite a lot since late 2012. I have to say that no amount of nice out-of-the-box libraries and top-notch GC helps alleviate the dire need for plain value types and some kind of terseness (rows of "static final int xyz = blah;" to define a bunch of constants). With the Java version being 5 or 6 (as is usually the case in production ATM) there is basically not a single construct to avoid horrible duplication of information on each line. There are a few shortcuts in Java 7, and even more stuff in Java 8, but all of it is coming horribly too late and doesn't fix the "big picture". Plus it has to propagate into the mainstream.
 It's not a *bad* language per se. In fact, a lot of it is quite ideal.
 Or rather, idealistic, should I say. Unfortunately, that makes it a pain
 to map to messy real-world situations -- you end up with a truckload of
 wrappers and incrediblyLongAndVerboseIdentifiers just so the language
 can remain "pure". As for being a straitjacketed language, this IOCCC
 entry says it best:

 	http://www.ioccc.org/2005/chia/chia.c

-- Dmitry Olshansky
Feb 23 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
23-Feb-2013 22:40, Russel Winder wrote:
 On Sat, 2013-02-23 at 21:44 +0400, Dmitry Olshansky wrote:
 […]
 I have to say that no amount of provided out of the box nice libraries
 and top-notch GC help alleviate the dire need for plain value-types and
 some kind of terseness (rows of static final int xyz = blah; to define a
 bunch of constants). With Java version being 5 or 6 (as is usually the
 case in production ATM) there is basically not a single construct to
 avoid horrible duplication of information on each line.

Maybe in Java 1.4, eight years ago, a plethora of public static final int constants was the norm, but these days seeing that implies bad design, a complete lack of updating/refactoring of the software to use modern Java features, or simple laziness. Java 5 introduced more than generics.

Admittedly I'm no expert on Java. Hence "have to work". Would be nice to know if there is something that can represent this C snippet without the usual heaps of verbosity:

enum {
    STATUS_OK = 0,
    STATUS_FAIL_REASON_XYZ,
    ... ad infinitum (but presently dozens)
};

There are lots of other cases, but this one is prime. You might argue for using classes and inheritance + the visitor pattern instead of plain switch/state machines. The heaps of boilerplate to write/generate all of the "state"/visitor etc. classes argue the other way around (and I suspect the execution speed does too).
 The G1 GC is a significant improvement to Java.

I guess this depends on the platform you (have to) use? BTW I agree that Java's GC is nice, more so the automatic de-heapification of modern VMs.
 Java supports value types, albeit as objects on the heap. Give me an
 example of the problem so we can approach this with more than waffly
 statements.

A trivial example is storage of pairs or tuples. That plus using Java's containers makes the memory footprint explode. At 100K+ entries per hash map I notice quite a high factor of waste: double digits compared to plain C arrays. I probably shouldn't care that much, since servers have tons of RAM these days and customers throw money at equipment without a second thought, but for the moment I just can't help noticing. Thank god we are mostly I/O bound.

Another example: I had to sort some objects by rating for a bit of processing with ranking. Yet this rating is computed and really doesn't belong with them at all (and needs locking beforehand), and I can't reorder them in place. What I'd do is create an array of link-rating tuples and sort these. In Java I have to create a full-blown object and allocate each array slot separately, cool. Not to mention the complete lack of concise tuples.

Writing functors is a royal pain in the ass too. Anonymous classes don't quite cut it on the scale of verbosity/benefit. If there is any library that doesn't use reflection like crazy to achieve simple, concise functors, I'd be glad to use it, but so far I have found none. Sorry, I can't elaborate on exact details here; the work is not open-source by any measure.
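The pair-object verbosity described here can be sketched like this: a minimal, hypothetical illustration in pre-Java-8 style, where sorting by an externally computed rating forces a hand-written pair class and one allocation per array slot (the Rated class and the sample data are invented):

```java
import java.util.Arrays;
import java.util.Comparator;

public class RankDemo {
    // Boilerplate "tuple": Java has no built-in pair type.
    static final class Rated {
        final String item;
        final double rating;
        Rated(String item, double rating) { this.item = item; this.rating = rating; }
    }

    public static void main(String[] args) {
        // One heap object per slot, just to attach a rating.
        Rated[] ranked = {
            new Rated("b", 0.5), new Rated("a", 0.9), new Rated("c", 0.1)
        };
        // Anonymous-class functor: the pre-lambda way to pass a comparison.
        Arrays.sort(ranked, new Comparator<Rated>() {
            public int compare(Rated x, Rated y) {
                return Double.compare(y.rating, x.rating); // descending by rating
            }
        });
        for (Rated r : ranked) System.out.println(r.item); // a, b, c
    }
}
```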
 Far too few programmers use the final keyword properly/effectively.

In Java programs it can be used very often. I'd say too often since I find it rare to have a need to e.g. rebind an object. As for "a few programmers", well you have to type it everywhere, guess why it's not popular?!
 There are few shortcuts in Java 7, and even more stuff in Java 8 but all
 of it is coming horribly too late and doesn't fix the "big picture".
 Plus that has to propagate into the  mainstream.

This is Java's serious problem. There are still people using Java 1.4.2 because they can't be bothered to update to Java 5 let alone 7.

And even if e.g. I for one would go for Java 7 it really doesn't depend on my views at all.
 […]

 I have many problems with Java, it is very verbose, but it needs to be
 criticized in a fair manner not in a "slag off" way. Clearly everyone on
 this list is going to prefer D over Java, but that is not permission to
 use inappropriate argumentation.

Mmm. I report my experience, not arguing much of anything except that it's far from pleasant. If anything I might convince the team to switch to Scala someday. Were it not for the somewhat alien syntax, that would be far easier. -- Dmitry Olshansky
Feb 23 2013
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
24-Feb-2013 00:18, Nick Sabalausky wrote:
 On Sat, 23 Feb 2013 23:22:54 +0400
 Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 A trivial example is storage of pairs or tuples. That plus using
 Java's containers makes memory footprint explode. At 100K+ hash-map I
 notice quite high factor of waste. It's some double digits compared
 to plain C arrays. I probably shouldn't care that much since servers
 are having tons of RAM these days

That "tons of RAM" isn't always true. My server is a lower-end VPS, so I have a mere 512 MB RAM: one-eighth as much as my *budget-range laptop*.

Yup, I have only half of that, but at least it's KVM so I can hack on kernel drivers or at least try IPsec ;) But here I was talking of real metal beasts sitting in racks and doing *solely* one or two processing tasks (no matter how stupid that looks).
 But it's been fine for me so far, especially since I don't run
 JVM on it. (I know I could use a dedicated physical server and easily
 get far more RAM, but I *love* VPS hosting - all the many benefits of an
 external web host (ex: not having to pay for my own T1 or better), but
 without the near-total lack of control or the impossibility of finding
 a company that knows what they're doing.)

Agreed.
 Anyway, I'd love more RAM and I will certainly get it when I need to,
 but every double-ing of my server's RAM will double my hosting costs -
 and doing that just for the sake of downgrading from a great language
 like D to a mediocre one like Java (even if it's J8+) wouldn't make
 very good business sense ;) Higher costs *plus* lower productivity and
 morale...yea, not exactly a great deal ;)

 My main point, of course, being: "Tons of RAM" isn't *always* the case
 for servers. Thank goodness for D :)

+1 Hence my discomfort with Java ;) -- Dmitry Olshansky
Feb 23 2013
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
24-Feb-2013 14:34, Russel Winder wrote:
 On Sat, 2013-02-23 at 23:22 +0400, Dmitry Olshansky wrote:
 […]
 Would be nice to know if there is something that can represent this C
 snippet without the usual heaps of verbosity:
 enum {
 STATUS_OK = 0,
 STATUS_FAIL_REASON_XYZ,
 ... ad infinitum (but presently dozens)
 };

enum StatusCodes {
    STATUS_OK,
    STATUS_FAIL_REASON_XYZ,
}

is perfectly valid Java.

You missed the point that these have to be the *real* integer constants starting from 0. No frigging magic classes please. Citing Oracle: All enums implicitly extend java.lang.Enum. Since Java does not support multiple inheritance, an enum cannot extend anything else. My thoughts after reading this: "holy crap, they are class instances". Even then we seem to use enums where there is no need for integer constants specifically.
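For what it's worth, the standard (if verbose) Java idiom for such wire-level tags is an enum carrying an explicit int plus a decode table. A minimal sketch, with class and field names invented for illustration, which also shows exactly the boilerplate being complained about:

```java
import java.util.HashMap;
import java.util.Map;

public class TagDemo {
    enum Tag {
        STATUS_OK(0),
        STATUS_FAIL_REASON_XYZ(1);

        final int wire;                 // the "real" integer constant
        Tag(int wire) { this.wire = wire; }

        // Reverse lookup for decoding tags read from the stream.
        private static final Map<Integer, Tag> BY_WIRE = new HashMap<Integer, Tag>();
        static {
            for (Tag t : values()) BY_WIRE.put(t.wire, t);
        }
        static Tag fromWire(int w) { return BY_WIRE.get(w); }
    }

    public static void main(String[] args) {
        System.out.println(Tag.STATUS_OK.wire);    // 0
        System.out.println(Tag.fromWire(1));       // STATUS_FAIL_REASON_XYZ
    }
}
```

Each symbol is indeed a class instance under the hood; the int field is how the wire value is recovered.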
 There are lots of other cases, but this one is prime.

Job done then, Java is fine for your purposes!

Nice irony, but I meant the cases of rows of "final static int blah" specifically. If I had the power to fix a single thing in Java, I'd say "remove redundancy".
 No point in philosophizing about performance, measurement is the only
 valid way of making any form of performance comparison.

Right. It however doesn't prevent one from observing the potential bottlenecks in e.g. parsing binary packets. Measuring is good, but in a server/distributed context it is highly non-trivial to do correctly. Like I said earlier, the current project (the one I go by) is largely I/O bound (and that's not network but rather "device" I/O). I did synthetic benchmarks back then, and in short, the visitor sucked.
 The G1 GC is a significant improvement to Java.

I guess this depends on the platform you (have to) use? BTW I agree that Java's GC is nice, more so the automatic de-heapification of modern VMs.

No, G1 is just nice. I think it is standard in JavaSE and JavaEE from 7u6. Not sure about JavaSE Embedded. Although JavaME is due a re-launch, I think it is dead in the water before being launched.

Then it's in Java 7? Okay, now I have a solid argument to bring about the switch to Java 7 :) Seriously, of course, we tend to switch to whatever is perceived as stable in the "local market". Currently it was Java 6. Newer projects would be 7 I guess.
 Another example. I had to sort some objects by rating for a bit of
 processing with ranking. Yet this rating is computed and really doesn't
 belong with them at all (and needs locking beforehand) and I can't
 reorder them in place. What I'd do is create an array of tuple
 link-rating and sort these.

 In Java I have to create a full-blown object and allocate each array
 slot separately, cool. Not to mention the complete lack of concise tuples.

Without the code it is difficult to comment other than to say, are you using the right data structures for the job? Creating small short-lived objects is now highly efficient on the JVM, especially immutable ones. Support for tuples is interesting: Kotlin just removed them, Ceylon just added them. Java is nowhere!
 Writing functors is a royal pain in the ass too. Anonymous classes
 doesn't quite cut it on the scale of verbosity/benefit. If there any
 library that doesn't use reflection like crazy to achieve simple,
 concise functors, I'd be glad to use it but so far I found none.

Java 8. You'll love it. Lambda expressions aplenty. Interestingly, no need for anonymous classes since it is possible to do call site type inferencing and use method handles and invokedynamic to get it all in place without an object in sight.

From what I read about Java 8 it is tolerable (as a language). Still a far cry from D, but largely more usable than now.
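The Java 8 lambda style alluded to here can be sketched as follows - a hypothetical toy example (the list contents are invented) showing a functor without any anonymous class:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("dub", "orbit", "gradle");
        // A method reference stands in for what used to need a full
        // anonymous class implementing an interface.
        List<String> upper = names.stream()
                                  .map(String::toUpperCase)
                                  .collect(Collectors.toList());
        System.out.println(upper); // [DUB, ORBIT, GRADLE]
    }
}
```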
 Sorry can't elaborate on exact details here, the work is not open-source
 by any measure.

In which case if you want help I'm available for consultancy!

Thanks, I'd gladly do.
 Mmm. I report my experience not argument much of anything excpet that
 it's far from pleasant.

Different people like and hate different languages, this is only natural. I am surprised though how many people stay working in places using languages they do not and cannot begin to like.

Don't get me wrong here. Java as a language is awfully verbose (above all) and lacking some simple features; shortcut-like constructs are the prime ones, I've mentioned functors, and e.g. unsigned data types are very useful sometimes. Yet as a platform the JVM is perfectly fine for us at the moment due to it being:

a) mature and largely accepted (the key factor in favor of using it)
b) equipped with quite a nice selection of tools in the standard library, not to speak of reams of third-party code
c) a VM. Like it or not, corps love VMs and safe isolated environments. For us the fact that it's cross-platform also removes quite a bit of pain, plus no memory corruption bugs etc.
d) friendly to plugins and dynamic (re)loading of these

Also, given some constraints, it turns out to be either code the whole thing in plain C or split it into Java/C parts. C all the way down is definitely less pleasant.

D would probably fit perfectly were it not for:

a) not being widely recognized, though sometimes we may cheat and tell it's C++
b) no support for loading shared libraries (bye-bye sane plugin architecture)
c) lack of stable third-party libs for pretty much everything (this is getting better but is not there by miles)
 If anything I might convince the team to switch off to Scala, someday.
 If not the somewhat alien syntax it would be far easier.

Scala has many pluses over Java, but also some minuses, as do the other languages such as Kotlin and Ceylon.

 There is also Groovy which can be
 both a dynamic and a static language these days and is closely tied to
 Java (but without all the verbosity), unlike Scala, Kotlin and Ceylon
 which have significant impedance mismatch in places.

Yay, Groovy! I've pushed for using it recently for a web application, in favor of "evolving" some legacy JSP-based hell. BTW you are solely responsible for me ever knowing about it in the first place. Hmm, an OT question: how hard is it to convert a moderately sized app from Java to Groovy (just to see if it loses anything in terms of performance, in static mode obviously)? -- Dmitry Olshansky
Feb 24 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/13 12:12 AM, Russel Winder wrote:
 It is possible Java 9 or Java 10 will remove the primitive types
 completely so that all variables are reference types leaving it to the
 JVM to handle all boxing and unboxing internally thus making things a
 lot more efficient and faster.  Experiments are scheduled and underway,
 decisions made only once the results are in.

That's very interesting. Got any related links? Thanks, Andrei
Feb 26 2013
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
26-Feb-2013 09:12, Russel Winder wrote:
 On Sun, 2013-02-24 at 15:49 +0400, Dmitry Olshansky wrote:
 […]
 You missed the point that these have to be the *real* integer constants
 starting from 0. No frigging magic classes please.

I am not sure why they have to be hardware integers, this is a JVM-based system and hardware integers do not exist. I feel a contradiction between requirements and technology here!

'cause they are read from a network stream. Basically they are tags/IDs, and then there is a read-dispatch loop based on combinations of tags/IDs.

In a sense all of the execution state that is dispatched to can be encapsulated inside a class, one class per ID, called Command or Packet. The switch then translates to a visitor that visits packets as they are constructed from the stream. You can even chain visitors together etc., and it would be beautiful were it not for the slowness and the tons of boilerplate needed to create all of the classes.

Again, the above could easily be done with a table of functors once Java has lambdas. Everything I need seems to require easy-to-use lambdas... a disturbing thought.
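The "table of functors" dispatch described here might look like this in Java 8 style - a hedged sketch where the tag values, payloads, and handlers are all invented: one lambda per wire tag, no per-packet visitor class.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

public class DispatchDemo {
    public static void main(String[] args) {
        // Wire tag -> handler lambda, replacing a switch or a visitor hierarchy.
        Map<Integer, IntFunction<String>> handlers = new HashMap<>();
        handlers.put(0, payload -> "OK:" + payload);
        handlers.put(1, payload -> "FAIL:" + payload);

        // Simulated (tag, payload) pairs as read off a stream.
        int[][] stream = {{0, 42}, {1, 7}};
        for (int[] packet : stream)
            System.out.println(handlers.get(packet[0]).apply(packet[1]));
        // Prints OK:42 then FAIL:7.
    }
}
```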
 Citing Oracle:
    All enums implicitly extend java.lang.Enum. Since Java does not
 support multiple inheritance, an enum cannot extend anything else.

 My thoughts after reading this: "holy crap, they are class instances".
 Even then we seem to use enums where there is no need for integer
 constants specifically.

Safe Enum pattern/idiom is indeed all about the representation of the symbols being instances of a class. But small immutable objects are very cheap these days on the JVM.

M-hm. I'm not sure it would work when you actually store these as members of other instances, like Command or Packet as above. But maybe the JVM can cheat a bit even there, I dunno.
 It is possible Java 9 or Java 10 will remove the primitive types
 completely so that all variables are reference types leaving it to the
 JVM to handle all boxing and unboxing internally thus making things a
 lot more efficient and faster.  Experiments are scheduled and underway,
 decisions made only once the results are in.

Till then. But e.g. Scala claims to do just fine with this approach.
 […]
   From what I read about Java 8 it is tolerable (as the language). Still
 a far cry from D but largely more usable then now.

Java 8 and Groovy (also Scala, possibly Clojure, Kotlin and Ceylon) will make it hard for any organization with a JVM heritage to even contemplate switching to native. No matter how good D is compared to C++ or Java, if there is no easy route for take-up, D will remain in the traction position it is currently in. […]

Mmm, I'm wondering what Go has to do with the topic at hand, since it doesn't have a direct C call mechanism and is thus worse than Java; I for one would call Go's GC quite pedestrian and the ecosystem way too young. The other thing is that "it's from Google" won't sell it where I work at all (it would probably work against it), since the people are all technically skilled and have no respect for argumentum ad populum. Again, even Java was chosen given constraints and practical need. It wasn't my decision and I don't quite like it (I'm a newcomer there). Still, I see the logic behind the choice and certain advantages it brings.
 D would probably fit perfectly were it not for:
 a) Not widely recognized but sometimes we may cheat and tell it's C++

With the current marketing strategy, this is how D is going to remain. D does not have an organization such as Google pushing it as Go has had. Yet there is a strong analogy: Go is C with warts removed and modern stuff added. D is C++ with warts removed and modern stuff added. Go has a small hardcore that is active and outward-looking, trying to create replacements for all the C and Python codes that people actually use. Go sells itself on the purity of its concurrency model and its object model, not to mention GC. D, on the other hand, has a small hardcore.

 b) No support for loading shared libraries (bye-bye sane plugin
 architecture)

Go has eschewed all dynamic linking and is making this a feature. But it has a mechanism for being able to call C from libraries. Python has a mechanism for calling C from shared libraries. D is at a disadvantage.

D is twice better than Go, then: it calls C directly and has (will have) shared library support. For now it just calls into C's shared libs; we need both ways, and that's all. And Go is still worse than Python ;)
 c) Lack of stable 3-rd party libs for pretty much everything
 (this is getting better but is not there by miles)

Go has managed to attract volunteer labour to write in Go new versions of everything previously written in C other than actual OSs. But even there people are beginning to write OSs in Go.

There are moments when I think that, were it not for Google's backing (and the large cult of Google world-wide), the language would never have gotten any kind of traction. [...]
 Hmm. An OT question - how hard it is to convert some moderately sized
 app to Grovvy from Java (just to see if it looses anything in terms of
 performance, in static mode obviously)?

No tools for this as yet, so down to manual transform. First step though is to run the Java through the Groovy compiler and see what happens. Once you have something compiling you refactor the code replacing Java verbosity with Groovy terseness until there is nothing left to refactor, you have a "pure Groovy" codebase.

Okay, thanks for the data point; I think I've seen some claims that Java can be compiled as Groovy (almost) as-is... Well, back to the old painful way. -- Dmitry Olshansky
Feb 27 2013
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
23-Feb-2013 22:08, deadalnix wrote:
 On Saturday, 23 February 2013 at 16:57:24 UTC, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky wrote:
 (...)
 Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started at a college which teaches Java. Even after using it for only a couple of thousand lines of code, I understand the hatred... Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!

The whole trick with Java is that your IDE writes most of the verbosity for you. That's a whole new set of programming techniques to master.

It doesn't help *reading* this verbosity. -- Dmitry Olshansky
Feb 23 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-16 20:35, Russel Winder wrote:

 Thus I suggest that it is not that the build tool is embedded in the
 package manager but that package and dependency management is part of
 the build system.

In general yes, but I think there should be two separate tools, each handling what it's designed to handle. On top of that they should have great integration with each other. I'm not talking about processes invoking each other; I'm talking about both tools being built as libraries that can be easily integrated with each other. Say we have build tool A and package manager B. I don't want to force anyone using A to also use B, or the opposite. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-16 23:58, Rob T wrote:

 Why use json (which is a subset of javascript), or ruby, or python, etc?
 Is there something fundamentally wrong with D that makes it unsuitable
 for this role?

You generally will get a more verbose syntax using D. It's not really made to execute statements on the top level. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 02:49, Nick Sabalausky wrote:

 I largely agree, except:

 1. For simple projects with trivial build system requirements, D is
 overkill compared to a purely data-only language.

It might be, but the syntax is not that different:

Yaml:

flags: "-l-L. -release"
foo: "bar"

Json:

{
    "flags": "-l-L. -release",
    "foo": "bar"
}

D:

flags = "-l-L. -release";
foo = "bar";

In this example D has less syntax than Json. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-16 20:37, H. S. Teoh wrote:

 I think Sönke's idea is actually very good. I know we all have our own
 preferences for build systems (I know I do -- for example, I abhor
 anything to do with makefiles), but having a standardized way to specify
 a build has many advantages. Imagine the user-unfriendliness of
 downloading a bunch of packages from the D package manager, only to
 discover that one requires make, another requires cmake, another
 requires SCons, another requires Ant, pretty soon, what should be just a
 simple automatic download turns into a nightmare of installing 20
 different build systems just so you can use a bunch of packages from the
 standard D package manager.

 Having a standardized way of generating build scripts is good, because
 then the D package manager can target the *end user*'s preferred build
 system, rather than whatever build system the package writers chose. The
 package writers can just specify how to build the stuff, then let the D
 packager generate makefiles for one user, Ant files for another user,
 etc.. This makes it much more friendly to use, and therefore, more
 likely people will actually use it.

The build system doesn't need to be embedded in the package manager just to have a standardized build system. See this: http://forum.dlang.org/thread/kfoei9$bmd$1 digitalmars.com?page=4#post-kfqium:24unf:241:40digitalmars.com -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-16 21:02, Johannes Pfau wrote:

 Having a common standard build tool is always good. But some kind of
 projects require custom build scripts (calling swig, generating
 interface files for ipc stuff, executing the C compiler to check if a
 function is available to disable / enable additional features, calling
 pkgconfig for some reason, compiling additional c/c++ files, assembling
 additional files, ...).


 I think splitting DUB into a package manager and build tool would be a
 good idea. Ship them as one package, so that everyone using the package
 manager also has the build tool installed. And integrate them as much
 as possible, the default setup can still work exactly the same as if
 they were integrated.

 The benefit of splitting them in this way: You're forced to provide
 interfaces for the build tool to communicate with the package manager
 and every other build tool can use those interfaces as well. This way
 there are no second class build systems.

 As an example:

 package.json
 {
 	"name": "myproject",
 	"description": "A little web service of mine.",
 	"authors": ["Peter Parker"],
 	"homepage": "http://myproject.com",
 	"license": "GPL v2",
 	"dependencies": {
 		"vibe-d": ">=0.7.11"
 	},
          "build": "DUB"
 }

 build.json
 {
 	"configurations": {
 		"metro-app": {
 			"versions": ["MetroApp"],
 			"libs": ["d3d11"]
 		},
 		"desktop-app": {
 			"versions": ["DesktopApp"],
 			"libs": ["d3d9"]
 		}
 	}
 }

 doing a "dub-pkg install myproject" should fetch the sources, then call
 "dub-build build.json". dub-build will have to ask the package manager
 for some information: "dub-pkg package.json --query dependencies",
 "dub-pkg package.json --query --package=vibe.d --link-path". Or it
 might require some additional actions: "dub-pkg --install-dependency
 d3d9"

Exactly. But I see no reason for communicating between processes. Just make them both into libraries and call plain functions. -- /Jacob Carlborg
Feb 17 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 10:01, Russel Winder wrote:

 Comment from a build veteran last Tuesday: "I will not use any build
 system that cannot be used as a library."

 This is the modern way.

Agree. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 23, 2013 at 05:57:23PM +0100, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky
 wrote:
 (...)
Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started at a college which teaches Java. Even after using it only for a couple of thousand lines of code, I understand the hatred... Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!

That was my reaction too, when I first started learning Java. And that is still my reaction today. It's not a *bad* language per se. In fact, a lot of it is quite ideal. Or rather, idealistic, should I say. Unfortunately, that makes it a pain to map to messy real-world situations -- you end up with a truckload of wrappers and incrediblyLongAndVerboseIdentifiers just so the language can remain "pure". As for being a straitjacketed language, this IOCCC entry says it best: http://www.ioccc.org/2005/chia/chia.c ;-) T -- He who sacrifices functionality for ease of use, loses both and deserves neither. -- Slashdotter
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 21:44 +0400, Dmitry Olshansky wrote:
[…]
 I have to say that no amount of provided out of the box nice libraries
 and top-notch GC help alleviate the dire need for plain value-types and
 some kind of terseness (rows of static final int xyz = blah; to define
 bunch of constants). With Java version being 5 or 6 (as is usually the
 case in production ATM) there is basically not a single construct to
 avoid horrible duplication of information on each line.

Maybe in Java 1.4, 8 years ago, a plethora of public static final int constants was the norm, but these days seeing that implies a bad design, a complete lack of updating/refactoring of the software to modern Java features, or simple laziness. Java 5 introduced more than generics. The G1 GC is a significant improvement to Java. Java supports value types, albeit as objects on the heap. Give me an example of the problem so we can approach this with more than waffly statements. Far too few programmers use the final keyword properly/effectively.
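The modern alternative being alluded to, a type-safe enum in place of rows of public static final int constants, might look like this (a minimal sketch; the status names and messages are illustrative, not taken from the thread):

```java
public class StatusDemo {
    // Pre-Java-5 style, now considered a design smell:
    //   public static final int STATUS_OK   = 0;
    //   public static final int STATUS_FAIL = 1;

    // Java 5+ style: one type-checked declaration.
    public enum Status { OK, FAIL }

    public static String describe(Status s) {
        // A switch over an enum is checked by the compiler,
        // unlike a switch over bare int constants.
        switch (s) {
            case OK:   return "all good";
            case FAIL: return "failed";
            default:   return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(Status.OK));
    }
}
```

Passing an arbitrary int where a Status is expected is now a compile-time error, which is the point of the pattern.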
 There are few shortcuts in Java 7, and even more stuff in Java 8 but all
 of it is coming horribly too late and doesn't fix the "big picture".
 Plus that has to propagate into the mainstream.

This is Java's serious problem. There are still people using Java 1.4.2 because they can't be bothered to update to Java 5 let alone 7. […] I have many problems with Java, it is very verbose, but it needs to be criticized in a fair manner not in a "slag off" way. Clearly everyone on this list is going to prefer D over Java, but that is not permission to use inappropriate argumentation. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 23:22 +0400, Dmitry Olshansky wrote:
[…]
 Would be nice to know if there is something that can represent this C
 snippet without the usual heaps of verbosity:
 enum {
 	STATUS_OK = 0,
 	STATUS_FAIL_REASON_XYZ,
 	... ad infinitum (but presently dozens)
 };

enum StatusCodes {
	STATUS_OK,
	STATUS_FAIL_REASON_XYZ,
}

is perfectly valid Java.
 There are lots of other cases, but this one is prime.

Job done then, Java is fine for your purposes!
 You might argue for using classes and inheritance + visitor pattern
 instead of plain switch/state machines. The heaps of boilerplate to
 write/generate all of "state"/visitor etc. classes argues the other way
 around.

That used to be the right way of doing it in Java, but as you say too much boilerplate, so the syntax was added to make it all work correctly. There is much more to the Safe Enum pattern implementation in Java. For example see the Planets enum example in the Java documentation.
 (and I suspect the execution speed too)

No point in philosophizing about performance, measurement is the only valid way of making any form of performance comparison.
 The G1 GC is a significant improvement to Java.

I guess this depends on the platform you (have to) use? BTW I agree that Java's GC is nice, more so the automatic de-heapification of modern VMs.

No G1 is just nice. I think it is standard in JavaSE and JavaEE from 7u6. Not sure about JavaSE Embedded. Although JavaME is due a re-launch, I think it is dead in the water before being launched.
 A trivial example is storage of pairs or tuples. That plus using Java's
 containers makes memory footprint explode. At 100K+ hash-map I notice
 quite high factor of waste. It's some double digits compared to plain C
 arrays. I probably shouldn't care that much since servers are having
 tons of RAM these days, and customers throw money on equipment without
 second thought. Thus for the moment I just can't help but notice. Thank
 god we are mostly I/O bound.

Java is a big memory user, anyone trying to claim it isn't is living in cloud cuckoo land. But the memory does get used efficiently in modern JVMs especially with the G1 GC. I can't see D ever really competing with Java or any other JVM-based language as JVM-based shops won't even consider D (or C or C++), except in very few circumstances.
 Another example. I had to sort some objects by rating for a bit of
 processing with ranking. Yet this rating is computed and really doesn't
 belong with them at all (and needs locking beforehand) and I can't
 reorder them in place. What I'd do is create an array of tuple
 link-rating and sort these.

 In Java I have to create a full-blown object and allocate each array
 slot separately, cool. Not to mention the complete lack of concise tuples

Without the code it is difficult to comment other than to say, are you using the right data structures for the job? Creating small short-lived objects is now highly efficient on the JVM, especially immutable ones. Support for tuples is interesting: Kotlin just removed them, Ceylon just added them. Java is nowhere!
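For what it's worth, the closest stock-JDK stand-in for a concise pair is java.util.AbstractMap.SimpleEntry. A rough sketch of the sort-by-computed-rating scenario described above (the item names and ratings are invented for illustration):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;

public class RatingSort {
    // Pair each item with an externally computed rating and sort by it,
    // leaving the items themselves untouched. Note each SimpleEntry is
    // still a separate heap object -- the overhead complained about above.
    public static String lowestRated(List<SimpleEntry<String, Double>> ranked) {
        ranked.sort((a, b) -> Double.compare(a.getValue(), b.getValue()));
        return ranked.get(0).getKey();
    }

    public static void main(String[] args) {
        List<SimpleEntry<String, Double>> ranked = new ArrayList<>();
        ranked.add(new SimpleEntry<>("itemA", 0.9));
        ranked.add(new SimpleEntry<>("itemB", 0.1));
        // Prints the item whose computed rating is smallest.
        System.out.println(lowestRated(ranked));
    }
}
```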
 Writing functors is a royal pain in the ass too. Anonymous classes
 don't quite cut it on the scale of verbosity/benefit. If there is any
 library that doesn't use reflection like crazy to achieve simple,
 concise functors, I'd be glad to use it but so far I found none.

Java 8. You'll love it. Lambda expressions aplenty. Interestingly, no need for anonymous classes since it is possible to do call site type inferencing and use method handles and invokedynamic to get it all in place without an object in sight.
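A sketch of what that buys: the same string-length functor written as a pre-Java-8 anonymous class and as a Java 8 lambda (the predicate itself is invented for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class FunctorDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("ant", "elephant", "bee");

        // Before Java 8: a full anonymous class for a one-line functor.
        Predicate<String> verbose = new Predicate<String>() {
            @Override
            public boolean test(String s) {
                return s.length() > 3;
            }
        };

        // Java 8: the same functor as a lambda expression.
        Predicate<String> terse = s -> s.length() > 3;

        // Both behave identically when filtering.
        long count = words.stream().filter(terse).count();
        System.out.println(count);
    }
}
```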
 Sorry can't elaborate on exact details here, the work is not open-source
 by any measure.

In which case if you want help I'm available for consultancy!
 Far too few programmers use the final keyword properly/effectively.

In Java programs it can be used very often. I'd say too often since I find it rare to have a need to e.g. rebind an object. As for "a few programmers", well you have to type it everywhere, guess why it's not popular?!

Indeed, we all agree on verbose. Java variables should be single assignment by default and variable only by annotation, but that isn't going to happen now. So final should be typed first and only removed if rebinding is needed.

 Mmm. I report my experience, not argue much of anything except that
 it's far from pleasant.

Different people like and hate different languages, this is only natural. I am surprised though how many people stay working in places using languages they do not and cannot begin to like.
 If anything I might convince the team to switch off to Scala, someday.
 Were it not for the somewhat alien syntax it would be far easier.

Scala has many pluses over Java, but also some minuses, as do the other languages such as Kotlin and Ceylon. There is also Groovy which can be both a dynamic and a static language these days and is closely tied to Java (but without all the verbosity), unlike Scala, Kotlin and Ceylon which have significant impedance mismatch in places. -- Russel.
Feb 24 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sun, 2013-02-24 at 15:49 +0400, Dmitry Olshansky wrote:
[…]
 You missed the point that these have to be the *real* integer constants
 starting from 0. No frigging magic classes please.

I am not sure why they have to be hardware integers, this is a JVM-based system and hardware integers do not exist. I feel a contradiction between requirements and technology here!
 Citing Oracle:
   All enums implicitly extend java.lang.Enum. Since Java does not
 support multiple inheritance, an enum cannot extend anything else.

 My thoughts after reading this: "holy crap, they are class instances".

 Even then we seem to use enums where there is no need for integer
 constants specifically.

Safe Enum pattern/idiom is indeed all about the representation of the symbols being instances of a class. But small immutable objects are very cheap these days on the JVM. It is possible Java 9 or Java 10 will remove the primitive types completely so that all variables are reference types leaving it to the JVM to handle all boxing and unboxing internally thus making things a lot more efficient and faster. Experiments are scheduled and underway, decisions made only once the results are in. […]
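The earlier objection about needing *real* integer constants starting from 0 can still be met inside the Safe Enum pattern by giving the enum an explicit int field, in the spirit of the Planets example mentioned above (a hedged sketch; the names are mine, not from the thread):

```java
public class WireStatus {
    public enum Status {
        OK(0),
        FAIL_REASON_XYZ(1);

        private final int code;

        Status(int code) { this.code = code; }

        // The concrete wire-protocol integer, decoupled from ordinal()
        // so that reordering the declarations cannot silently break it.
        public int code() { return code; }
    }

    public static void main(String[] args) {
        System.out.println(Status.OK.code());
    }
}
```

Unlike ordinal(), the explicit field survives refactoring and can encode gaps or repeated values if a protocol requires them.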
 It however doesn't prevent one from observing the potential bottlenecks
 in e.g. parsing binary packets. Measuring is good but in a
 server/distributed context is highly non-trivial to do correctly.

 Like I said earlier the current project (the one I go by) is largely I/O
 bound (and that's not network but rather the "device" I/O). I did
 synthetic benchmarks back then and in short visitor sucked.

I'm not sure where "visitor" fits into this, I may have missed something… […]
 Then it's in Java 7? Okay, now I have a solid argument to bring about
 the switch to Java 7 :)

http://www.oracle.com/technetwork/java/javase/tech/g1-intro-jsp-135488.html http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/G1GettingStarted/index.html G1 is the default in Java 8, but you have to switch to it in Java 7 if I remember correctly.
 Seriously, of course, we tend to switch to whatever is perceived as
 stable in the "local market". Currently it was Java 6. Newer projects
 would be 7 I guess.

Any Java user still planning to stay with Java 6 or earlier and not planning to switch asap to Java 7 will be on their own very quickly and seen as just another legacy problem. […]
  From what I read about Java 8 it is tolerable (as the language). Still
 a far cry from D but largely more usable than now.

Java 8 and Groovy (also Scala, possibly Clojure, Kotlin and Ceylon) will make it hard for any organization with a JVM heritage to even contemplate switching to native. No matter how good D is compared to C++ or Java, if there is no easy route for take-up, D will remain in the position it currently is. […]
 D would probably fit perfectly were it not for:
 a) Not widely recognized but sometimes we may cheat and tell it's C++

With the current marketing strategy, this is how D is going to remain. D does not have an organization such as Google pushing it as Go has had. Yet there is a strong analogy: Go is C with warts removed and modern stuff added. D is C++ with warts removed and modern stuff added. Go has a small hardcore that is active and outward looking trying to create replacements for all the C and Python codes that people actually use. Go sells itself on the purity of the concurrency model and its object model, not to mention GC. D on the other hand has a small hardcore.
 b) No support for loading shared libraries (bye-bye sane plugin
 architecture)

Go has eschewed all dynamic linking and is making this a feature. But it has a mechanism for being able to call C from libraries. Python has a mechanism for calling C from shared libraries. D is at a disadvantage.
 c) Lack of stable 3-rd party libs for pretty much everything
 (this is getting better but is not there by miles)

Go has managed to attract volunteer labour to write in Go new versions of everything previously written in C other than actual OSs. But even there people are beginning to write OSs in Go. […]
 Yay, Groovy! I've pushed for using it recently for web application in
 favor of "evolving" some legacy JSP-based hell. BTW you are solely
 responsible for me ever knowing about it in the first place.

Sponditious :-)
 Hmm. An OT question - how hard is it to convert some moderately sized
 app to Groovy from Java (just to see if it loses anything in terms of
 performance, in static mode obviously)?

No tools for this as yet, so down to manual transform. First step though is to run the Java through the Groovy compiler and see what happens. Once you have something compiling you refactor the code replacing Java verbosity with Groovy terseness until there is nothing left to refactor, you have a "pure Groovy" codebase. -- Russel.
Feb 25 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sun, 2013-02-24 at 16:32 -0500, Nick Sabalausky wrote:
[…]
 Luckily, modern server hardware should support hardware virtualization,
 and most languages/libs are pretty good at cross-platform, so this
 one shouldn't be much of a "reason for JVM" anymore like it might have
 been ten or so years ago.

But this is where "virtual != virtual": hardware virtualization is a different thing from virtual machines. The reason for JVM and PVM remains even in a world of server virtualization. Cross platform is not the application developer's problem using a virtual machine as it is with native codes. This has not changed. -- Russel.
Feb 25 2013
prev sibling next sibling parent "pjmlp" <pjmlp progtools.org> writes:
On Tuesday, 26 February 2013 at 05:13:10 UTC, Russel Winder wrote:
 On Sun, 2013-02-24 at 15:49 +0400, Dmitry Olshansky wrote:
 […]

 Any Java user still planning to stay with Java 6 or earlier and 
 not
 planning to switch asap to Java 7 will be on their own very 
 quickly and
 seen and just another legacy problem.

My employer is still getting requests for proposals with Java 1.4! :(
 […]
 With the current marketing strategy, this is how D is going to 
 remain. D
 does not have an organization such as Google pushing it as Go 
 has had.

Even with my continuous complaints about the lack of generics on gonuts, I do realize it is easier to push Go at the workplace using the "it is from Google" excuse. That is how I have been pushing F# lately (being a Microsoft language).
 b) No support for loading shared libraries (bye-bye sane 
 plugin architecture)

Go has eschewed all dynamic linking and is making this a feature.

They might eventually be forced to reconsider it. Even Plan9 had dynamic linking support.
 c) Lack of stable 3-rd party libs for pretty much everything
 (this is getting better but is not there by miles)

Go has managed to attract volunteer labour to write in Go new versions of everything previously written in C other than actual OSs. But even there people are beginning to write OSs in Go.

This I find great, and is a reason I always give information about the Oberon language family and Modula-3 based OSes. More people need to be aware it is possible to write a proper OS in GC-enabled system programming languages without a single line of C or C++. -- Paulo
Feb 25 2013
prev sibling parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 26 February 2013 at 05:13:10 UTC, Russel Winder wrote:
 Go has eschewed all dynamic linking and is making this a 
 feature. But it
 has a mechanism for being able to call C from libraries. Python 
 has a
 mechanism for calling C from shared libraries. D is at a 
 disadvantage.

I fail to see how D is at a disadvantage here.
Feb 26 2013
prev sibling next sibling parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
On 16.02.2013 19:10, Jacob Carlborg wrote:
 
 Using a full blow language can look pretty declarative as well:
 
 name = "my-library";
 dependencies = ["mysql-native": ">=0.0.7"];
 
 I don't see why something like the above would scare away people. It's
 even less code than the JSON above.
 

I was thinking more along the lines of make, cmake, D based build scripts in the form as proposed in the past here and others. Such procedural build files often tend to get bloated over time and hard to work with. While this is not necessarily a problem with the tools itself, it can still make people look for alternatives. That said, if a scripting language is used (almost) purely to provide a declarative system as in your example, it doesn't have to be bad at all. The question that arguably remains is just if the added flexibility is actually useful or necessary and if such a thing "pulls its own weight" (WRT implementation and API complexity).
Feb 16 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-16 20:07, Sönke Ludwig wrote:

 I was thinking more along the lines of make, cmake, D based build
 scripts in the form as proposed in the past here and others. Such
 procedural build files often tend to get bloated over time and hard to
 work with. While this is not necessarily a problem with the tools
 itself, it can still make people look for alternatives.

 That said, if a scripting language is used (almost) purely to provide a
 declarative system as in your example, it doesn't have to be bad at all.
 The question that arguably remains is just if the added flexibility is
 actually useful or necessary and if such a thing "pulls its own weight"
 (WRT implementation and API complexity).

I think so. -- /Jacob Carlborg
Feb 17 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
On 17.02.2013 14:51, Jacob Carlborg wrote:
 On 2013-02-16 20:07, Sönke Ludwig wrote:
 
 I was thinking more along the lines of make, cmake, D based build
 scripts in the form as proposed in the past here and others. Such
 procedural build files often tend to get bloated over time and hard to
 work with. While this is not necessarily a problem with the tools
 itself, it can still make people look for alternatives.

 That said, if a scripting language is used (almost) purely to provide a
 declarative system as in your example, it doesn't have to be bad at all.
 The question that arguably remains is just if the added flexibility is
 actually useful or necessary and if such a thing "pulls its own weight"
 (WRT implementation and API complexity).

I think so.

Any examples?
Feb 17 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-17 14:53, Sönke Ludwig wrote:

 Any examples?

The DWT build script is fairly complex: D: https://github.com/d-widget-toolkit/dwt/blob/master/build.d Rakefile: https://github.com/d-widget-toolkit/dwt/blob/master/rakefile -- /Jacob Carlborg
Feb 17 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
On 17.02.2013 15:09, Jacob Carlborg wrote:
 On 2013-02-17 14:53, Sönke Ludwig wrote:
 
 Any examples?

The DWT build script is fairly complex: D: https://github.com/d-widget-toolkit/dwt/blob/master/build.d Rakefile: https://github.com/d-widget-toolkit/dwt/blob/master/rakefile

I see some things like windows/non-windows specific flags and a lot of path handling and infrastructure stuff (which the build tool would do itself). Anything in particular that you think is not easily doable with the data driven approach? BTW, I think build files like these are the perfect example for what I mean with complexity and scaring away people. I'm sure it all makes sense, but my vision of a build system is that it goes out of the developers way instead of forcing him to maintain a whole sister project alongside the actual code.
Feb 17 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 15:28, Sönke Ludwig wrote:

 I see some things like windows/non-windows specific flags and a lot of
 path handling and infrastructure stuff (which the build tool would do
 itself).

The D script contains a lot of utility functions that would be available in the build tool. It emulates the rakefile to stay backwards compatible.
 Anything in particular that you think is not easily doable with
 the data driven approach?

I don't know the build scripts very well since I have not written them. It has several targets. Base, swt, snippets and so on.
 BTW, I think build files like these are the perfect example for what I
 mean with complexity and scaring away people. I'm sure it all makes
 sense, but my vision of a build system is that it goes out of the
 developers way instead of forcing him to maintain a whole sister project
 alongside the actual code.

I really hate that build script. I really hope that a proper build system will handle DWT with just a simple build script. -- /Jacob Carlborg
Feb 17 2013
prev sibling next sibling parent reply Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-16 at 20:15 +0100, Jacob Carlborg wrote:
[…]
 I'm thinking that there should be a build tool that drives everything.
 The build script contains package dependencies. The build tool will ask
 the package manager to get linker/import paths and libraries for the
 dependencies.

First a plug for people to think of SCons and Waf, not to mention Gradle, when starting to rattle off build tools. The SBT folks are using Scala in quite interesting ways to enable Scala to define project specifications with all the non-inferable dependencies. The Gradle folks have already done almost everything the SBT folks are doing, but using Groovy rather than Scala. The Go folk have done something interesting in that they have merged the whole concept of configuration and build by doing everything over DVCS. You put your sources in Git, Mercurial, Bazaar (they really should include Fossil as well but…) and these can be Got and the modules created within the standard structure local hierarchy. They have a single command that performs all activity. D has rdmd but compared to Go's go command it cannot do a lot. I would suggest now is the time to think outside the box, analogous to the ways the Gradle and SBT folk have on the JVM and the way the Go folk have for native builds of statically linked code. Instead of thinking in terms of compile, link, modules, dependencies, what is the workflow that makes D the compelling language for building Fortran/C/C++/D systems? This is suggesting that the milieu to attack is the one currently being won by Python in the computationally intensive areas of bioinformatics and scientific computing. Can D be the project specification language using convention over configuration? Project directories are in a standard form (with exceptions describable) with dependencies either inferred by scanning the project sources or specified in a trivial way in the project specification file. Thus I suggest that it is not that the build tool is embedded in the package manager but that package and dependency management is part of the build system. -- Russel.
Feb 16 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 23, 2013 at 06:42:43PM +0000, Russel Winder wrote:
 On Sat, 2013-02-23 at 22:24 +0400, Dmitry Olshansky wrote:
 […]
 It doesn't help *reading* this verbosity.

Very, very true. Sadly, D has some arcane bits that make it equally difficult to read D code. For example: example.filter!isLongEnough().array() Why ! in one place and . in the others, it is all just calling a method on something.

One of the worst offenders in D is the is-expression. Not only are the meanings of the arguments mysterious and manifold, they also have non-obvious intents:

void myGenericFunction(T,U,V)(T t, U u, V v)
	if (is(T) && is(typeof(T.a)) && is(U : int) &&
		is(V _ : W[X], W, X) &&
		is(typeof(T.expectedMember)) &&
		is(typeof(T.expectedMember())) &&
		is(typeof(T.expectedMember() : V))
		// ^ Completely opaque unless you stare at it
		// long enough.
	)
{
	// Lisp fans would love the level of parentheses in this
	// next line:
	static if (isInputRange!(typeof(T.expectedMember())))
		dotDotDotMagic(t,u,v);
	else
		moreDotDotDotMagic(t,u,v);
}

T -- "Holy war is an oxymoron." -- Lazarus Long
Feb 23 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 16, 2013 at 08:15:13PM +0100, Jacob Carlborg wrote:
On 2013-02-16 19:39, Sönke Ludwig wrote:
 
Not at all, the idea is to have a number of "build script" generators
so everyone can choose whatever fits best, otherwise a generic
rdmd/dmd based build works out of the box with no need to install an
additional tool. Invoking a generic external tool is easy enough to
add and already planned, so this should not be a limiting factor in
any way.

Ok, I see. But it feels wrong to me that it should generate a build script. I think the package should already contain a build script.

I think Sönke's idea is actually very good. I know we all have our own preferences for build systems (I know I do -- for example, I abhor anything to do with makefiles), but having a standardized way to specify a build has many advantages. Imagine the user-unfriendliness of downloading a bunch of packages from the D package manager, only to discover that one requires make, another requires cmake, another requires SCons, another requires Ant, pretty soon, what should be just a simple automatic download turns into a nightmare of installing 20 different build systems just so you can use a bunch of packages from the standard D package manager. Having a standardized way of generating build scripts is good, because then the D package manager can target the *end user*'s preferred build system, rather than whatever build system the package writers chose. The package writers can just specify how to build the stuff, then let the D packager generate makefiles for one user, Ant files for another user, etc.. This makes it much more friendly to use, and therefore, more likely people will actually use it. T -- Guns don't kill people. Bullets do.
Feb 16 2013
prev sibling next sibling parent reply "Peter Sommerfeld" <noreply rubrica.at> writes:
On 16.02.2013 22:21, Sönke Ludwig wrote:
 I actually think it is indeed _good_ to have a first class build system
 for exactly the reason that H. S. Teoh gave. If other build systems
 really are on the same level as the standard one, it poses the risk of
 fragmentation among different packages and users would possibly have to
 install a number of different build tools to build all dependencies.

++1

Another issue: I understand why you are using json but it is not the best suited format IMHO. D puts some restrictions on module names, thus the format can be simplified. Compare:

{
	"name": "myproject",
	"description": "A little web service of mine.",
	"authors": ["Peter Parker"],
	"homepage": "http://myproject.com",
	"license": "GPL v2",
	"dependencies": {
		"vibe-d": ">=0.7.11"
	}
}

name: myproject;
description: A little web service of mine.;
authors: [Peter Parker, Fritz Walter];
homepage: "http://myproject.com";
license: GPL v2;
dependencies: [
	vibe: >= 0.7.11 # a comment;
];

Using ';' as end-char and omitting all these double quotes makes it much cleaner and less error-prone for the user as well as automatic generation. Double quotes are needed only for strings with non-alphanumeric characters. Adding '#' or '//' for comments would also be a good idea. I also think that something like json objects, which are key-value pairs anyway, are dispensable. Hence everything can be key-value pairs. Exporting to other formats will probably be needed anyway.

Peter
Feb 16 2013
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 17, 2013 09:12:00 Sönke Ludwig wrote:
 BTW, I think YAML as a superset of JSON is also a good contender with
 nice syntax features, but also much more complex.

It's also whitespace-sensitive, which is downright evil IMHO. I'd take JSON over YAML any day. - Jonathan M Davis
Feb 17 2013
prev sibling next sibling parent "Rob T" <alanb ucora.com> writes:
On Saturday, 16 February 2013 at 19:35:47 UTC, Russel Winder 
wrote:
 On Sat, 2013-02-16 at 20:15 +0100, Jacob Carlborg wrote:
 […]
 I'm thinking that there should be a build tool that drives 
 everything. The build script contains package dependencies. 
 The build tool will ask the package manager to get 
 linker/import paths and libraries for the dependencies.

First a plug for people to think of SCons and Waf, not to mention Gradle, when starting to rattle of build tools. The SBT folks are using Scala in quite interesting ways to enable Scala to define project specifications with all the non-inferable dependencies. The Gradle folks have already done almost everything the SBT folks are doing, but using Groovy rather than Scala. The Go folk have done something interesting in that they have merged the whole concept of configuration and build by doing everything over DVCS. You put your sources in Git, Mercurial, Bazaar (they really should include Fossil as well but…) and these can be Got and the modules created within the standard structure local hierarchy. The have a single command that performs all activity. D has rdmd but compared to Go's go command it cannot do a lot. I would suggest now is the time to think outside the box, analogous to the ways the Gradle and SBT folk have on the JVM and the way the Go folk have for native builds of statically linked code. Instead of thinking in terms of compile, link, modules, dependencies, what is the workflow that makes D the compelling language for building Fortran/C/C++/D systems. This is suggesting that the milieu to attack is the one currently being won by Python in the computationally intensive areas of bioinformatics and scientific computing. Can D be the project specification language using convention over configuration. Project directories are in a standard form (with exceptions describable) with dependencies either inferred by scanning the project sources or specified in a trivial way in the project specification file. Thus I suggest that it is not that the build tool is embedded in the package manager but that package and dependency management is part of the build system.

I'm having good success using D itself as the build tool language, and I'm at the point now where I'm getting much better results than what I was getting out of using external build tools, so for me there's no looking back. I run rdmd on a .d "script" that I wrote that's set up to do exactly what I want.

What is missing from D is a good library of functions that are generally useful for writing build scripts. Phobos already supplies a great deal of what is needed that can be used to construct what is missing.

The benefit of using D is that I do not have to install and learn a secondary language or conform to a specific build format, or follow any weird restrictions. What I want to do is build my D code, not learn something new and perform acrobatics to get the job done. I can even use the same D process to build C/C++ code if I wanted to expand on it.

What I'm saying here is that I see no reason to use a language other than D itself as the build tool. What D could use is an addition to Phobos that supplies the necessary generalized functions that all build tools should supply, and I don't think there's all that much that's missing.

For a package manager, some standards may be required, but it too can be done completely with D. Why use JSON (which is a subset of JavaScript), or Ruby, or Python, etc.? Is there something fundamentally wrong with D that makes it unsuitable for this role?

--rt
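A minimal sketch of the kind of rdmd-driven build script described above, in modern D. The source/ layout, the output name app, and the plain dmd invocation are assumptions for illustration, not Rob T's actual script:

```d
#!/usr/bin/env rdmd
import std.algorithm : filter, map;
import std.array : array;
import std.file : dirEntries, SpanMode;
import std.process : spawnProcess, wait;
import std.stdio : writeln;
import std.string : endsWith;

void main()
{
    // Gather every .d file under source/ (layout assumed here).
    auto sources = dirEntries("source", SpanMode.depth)
        .filter!(e => e.name.endsWith(".d"))
        .map!(e => e.name)
        .array();

    // Invoke the compiler; "app" is a made-up output name.
    auto cmd = ["dmd", "-ofapp"] ~ sources;
    writeln("Running: ", cmd);
    auto status = spawnProcess(cmd).wait();
    if (status != 0)
        writeln("Build failed with status ", status);
}
```

The whole thing runs with a single `rdmd build.d`, which is the workflow the post describes: Phobos already covers file traversal and process spawning; what it lacks is the higher-level dependency-tracking helpers.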
Feb 16 2013
prev sibling next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 16 February 2013 at 22:58:42 UTC, Rob T wrote:
 I'm having good success using D itself as the build tool 
 language, and I'm at the point now where I'm getting much 
 better results than what I was getting out of using external 
 build tools, so for me there's no looking back.

 I run rdmd on a .d "script" that I wrote that's setup to do 
 exactly what I want.

 What is missing from D is a good library of functions that are 
 generally useful for writing build scripts. Phobos already 
 supplies a great deal of what is needed that can be used to 
 construct what is missing.

 The benefit of using D is that I do not have to install and 
 learn a secondary language or conform to a specific build 
 format, or follow any weird restrictions. What I want to do is 
 build my D code, not learn something new and perform acrobatics 
 to get the job done. I can even use the same D process to build 
 C/C++ code if I wanted to expand on it.

 What I'm saying here is that I see no reason to use a language 
 other than D itself as the build tool. What D can use is an 
 addition to Phobos that supplies the necessary generalized 
 functions that all build tools should supply, and I don't think 
 there's all that much that's missing.

 For a package manager, some standards may be required, but it 
 too can be done completely with D.

 Why use json (which is a subset of javascript), or ruby, or 
 python, etc? Is there something fundamentally wrong with D that 
 makes it unsuitable for this role?

 --rt

Indeed, it does sound like a sweet idea to provide a complete library of D functions to help build a project and use D itself as the build tool. Maybe dub could offer the possibility to call D as an integrated scripting language?
Feb 16 2013
prev sibling next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 16 February 2013 at 22:49:27 UTC, Nick Sabalausky 
wrote:

 I like a lot of what you've said, but my concern about this 
 part is,
 does it support things like:

 - Dependencies between custom build steps (so that custom build 
 steps
 aren't needlessly re-run when they're not out-of-date, and can
 possibly be parallelized)

One problem with parallelizing builds right now is that the risk is high they fail with out-of-memory errors before they finish, unless it's possible to parallelize them on different build boxes, or one owns a build machine with 64 GB of RAM.
Feb 16 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-16 at 21:02 +0100, Johannes Pfau wrote:
[…]

 The benefit of splitting them in this way: You're forced to provide
 interfaces for the build tool to communicate with the package manager
 and every other build tool can use those interfaces as well. This way
 there are no second class build systems.

Comment from a build veteran last Tuesday: "I will not use any build system that cannot be used as a library." This is the modern way.

--
Russel.
===========================================================================
Dr Russel Winder          t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road        m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK       w: www.russel.org.uk  skype: russel_winder
Feb 17 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sun, 2013-02-17 at 09:12 +0100, Sönke Ludwig wrote:

 I agree about the non-optimal syntax, but it comes down to nicer syntax
 (where JSON at least is not _too_ bad) vs. standard format (familiarity,
 data exchange, proven design and implementation). And while data
 exchange could worked around with export/import functions, considering
 that these files are usually just 4 to 20 lines long, I personally would
 put less weight on the syntax issue than on the others.
 BTW, I think YAML as a superset of JSON is also a good contender with
 nice syntax features, but also much more complex.

Peter's point is more important than the above response indicates. Using JSON (YAML, whatever) is a recipe for failure. The syntax is data-exchange facing, whereas writing and maintaining a build script (even if it is only 4 lines long) is a human activity. Do not underestimate the reaction "why do I have to write this data exchange format?" The rapid demise of XML, Ant, Maven, etc. is testament to people awakening to the fact that they were forced to manually use a language designed for another purpose.

Where Gradle, and before it Gant, Buildr, etc., got it right was to use an internal DSL mentality: ask what written form has affordance for the user, which we can interpret in some way to deliver the data needed for computation. Groovy provides the infrastructure for Gradle, but people write Gradle scripts, not Groovy. The build/packaging system does the hard work, not the user. The lesson from SBT is that, by judicious cleverness, a trivially simple internal DSL can be derived that does all the work using Scala. The only downside to SBT scripts is the extra newline needed everywhere. Surely D is better than Scala for this?

So if D is to contribute to build and package management, it should be looking to replace Make, CMake, SCons, Waf, Gradle. It should be able to build D systems, yes, and manage D packages, but also look to build C, C++, Fortran, and interlink all of them. Clearly starting small is good: a successful large system always evolves from a successful small system; designed large systems always fail (cf. Gall, Systemantics, 1975).

If all the energy around DUB and Orbit leads to the situation where D cannot be used to create an internal DSL for describing build and package/artefact management, then D has failed.

--
Russel.
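As an illustration of the internal-DSL idea above, here is a purely hypothetical sketch of what a Gradle/SBT-style project specification could look like as plain D code. None of these names (project, depends) exist in dub or any real tool:

```d
import std.stdio : writeln;

// Purely hypothetical DSL types, for illustration only.
struct Project
{
    string name;
    string[string] deps;

    // Returns a copy; the associative array is shared by reference,
    // so chained calls accumulate dependencies.
    Project depends(string pkg, string ver)
    {
        deps[pkg] = ver;
        return this;
    }
}

Project project(string name)
{
    return Project(name);
}

void main()
{
    // A "build script" that is just D code but reads declaratively.
    auto spec = project("my-library")
        .depends("mysql-native", ">=0.0.7");
    writeln(spec.name, ": ", spec.deps);
}
```

The point of the sketch is that the user writes something that looks like a declaration, while the host language remains available when imperative logic is genuinely needed.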
Feb 17 2013
prev sibling next sibling parent "Peter Sommerfeld" <noreply rubrica.at> writes:
Am 17.02.2013, 10:24 Uhr, schrieb Russel Winder:
 On Sun, 2013-02-17 at 09:12 +0100, Sönke Ludwig wrote:

 I agree about the non-optimal syntax, but it comes down to nicer syntax
 (where JSON at least is not _too_ bad) vs. standard format (familiarity,
 data exchange, proven design and implementation). And while data
 [...]

Peter's point is more important than the above response indicates.

I think so too. And I don't believe that it will stay at a few lines only, because the demands will increase in the course of time. Think about including a C/C++ compiler, a project manager, etc. With a syntax as I suggested, or something similar, everything is mapped directly onto an AA and is easily scanned by the build system or other tools.

Regarding JSON: it is designed for data exchange between different sources, not to be edited by people. It is too annoying and error-prone to wrap each string into quotes. And people have to edit the configuration file. Better to start with a well-suited format.

Peter
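For illustration only, a quote-free description in the spirit Peter suggests might look like the following. This is a made-up syntax, not a format dub understands; the fields mirror the JSON example from the announcement:

```
name         my-library
dependencies {
    mysql-native >=0.0.7
}
```

Every key maps directly onto an associative-array entry, with nesting for sub-tables, which is the "mapped onto an AA" property the post asks for.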
Feb 17 2013
prev sibling next sibling parent "Peter Sommerfeld" <noreply rubrica.at> writes:
Am 17.02.2013, 11:58 Uhr, schrieb Sönke Ludwig:
 Am 17.02.2013 11:21, schrieb Peter Sommerfeld:
 Not sure what you mean by project manager, but right now, I see no
 reason why it shouldn't be possible to keep the package/build
 description very compact.

If a good configuration format for D is established, it will (or should) be used everywhere.
 Regarding json: It is designed ...

editing just as D or a custom DSL is.

OK, that is not important. The point is: config files have to be edited by humans, and all the quotes are simply superfluous, annoying and error-prone. The appropriate file format has low priority and may be discussed once a general decision about using dub as the build tool has been made. I am for it! So let us continue with more important issues for now.

Peter
Feb 17 2013
prev sibling next sibling parent "Peter Sommerfeld" <noreply rubrica.at> writes:
Am 17.02.2013, 14:03 Uhr schrieb Jacob Carlborg:

 On 2013-02-16 23:19, Peter Sommerfeld wrote:

 Another issue: I understand why you are using json but it is
 not the best suited format IMHO. D put some restriction on
 module names, thus the format can be simplified. Compare:

Inventing a new format is pointless. If you want it less verbose, YAML is an alternative.

If you prefer indentation. I would never touch it. Peter
Feb 17 2013
prev sibling next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 17 February 2013 at 08:02:41 UTC, Sönke Ludwig wrote:
 To me the most interesting open question is this: Do we 
 actually gain
 from programmatic support for the build description, or does it 
 suffice
 to have a good purely descriptive system? If the former should 
 be true
 for more than 1% of the cases, that would definitely be a good 
 argument
 against pure data.

Well, in the Java world, there is ant. It does the trick, but it's quite ugly.
Feb 23 2013
prev sibling next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 17 February 2013 at 11:20:55 UTC, Sönke Ludwig wrote:

 My question if we can get by without procedural build scripts 
 is still
 open. If we could, it would give great benefits in simplicity 
 of the
 system and its usage. This may require a change in conventions 
 for some
 projects, but I think that can be worth it.

 Maybe it would be good to have some concrete examples of 
 complex build
 scenarios to better judge what is possible and what may be 
 problematic.

I think embedding procedural scripts is always a plus. Given that you already have JSON, I would vote for *cough* Javascript (V8 engine).
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 10:20 +0100, SomeDude wrote:
[…]
 Well, in the Java world, there is ant. It does the trick, but=20
 it's quite ugly.

Anyone in the Java world still using Ant is just so last decade ;-)

Maven attempts to be wholly declarative, and succeeds in that all the hard work is done via plugins coded in Java or Groovy code. Gradle makes no pretence at being either declarative or imperative, but embraces both: as much of a specification is as declarative as possible, but where imperative is needed it is available, as Gradle specifications are Groovy scripts with the Gradle internal DSL.

--
Russel.
Feb 23 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 23 February 2013 at 10:21:59 UTC, Sönke Ludwig wrote:
 Am 23.02.2013 10:20, schrieb SomeDude:
 On Sunday, 17 February 2013 at 08:02:41 UTC, Sönke Ludwig 
 wrote:
 To me the most interesting open question is this: Do we 
 actually gain
 from programmatic support for the build description, or does 
 it suffice
 to have a good purely descriptive system? If the former 
 should be true
 for more than 1% of the cases, that would definitely be a 
 good argument
 against pure data.

Well, in the Java world, there is ant. It does the trick, but it's quite ugly.

And it also does the really strange thing to actually build up a procedural build description in a declarative (+ugly) language. That's definitely not what I have in mind - what I want is really a pure /description/ of the system, from which the build steps (or anything else) can be inferred.

In my experience, this ends up with an explosion of plugins or special cases to handle some tricks in the build. In the end, you end up with some kind of programming language, but a horribly designed one.

I don't think the two contradict each other, as you can provide a descriptive definition via several declarations with known names. You can also provide hooks or ways to create plugins from within the script definition. That is where they belong if you don't want users to download ten bazillion plugins in addition to the build system.
Feb 23 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 23 February 2013 at 11:21:06 UTC, Russel Winder 
wrote:
 On Sat, 2013-02-23 at 10:20 +0100, SomeDude wrote:
 […]
 Well, in the Java world, there is ant. It does the trick, but 
 it's quite ugly.

Anyone in the Java world still using Ant is just so last decade ;-) Maven attempts to be wholly declarative and succeeds in that all the hard work is done via plugins coded in Java or Groovy code.

Comparing Ant and Maven is not appropriate here, as Maven is a build system plus a package manager while Ant only builds.

The plugin system of Maven is notoriously hellish. This shows two things:
1/ Extending the build system is really required; if it is not permitted from the build files themselves, then plugins are needed to extend the capabilities of the descriptive language.
2/ The benefit of having the build tool and the package manager working together is big, as people favour the combination even when the system is way worse in many other aspects.
Feb 23 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 23 February 2013 at 11:21:06 UTC, Russel Winder 
wrote:
 Gradle makes no pretence at being either declarative or 
 imperative, but
 embraces both. As much of a specification is as declarative as 
 possible,
 but where imperative is needed it is available as Gradle 
 specifications
 are Groovy scripts with the Gradle internal DSL.
 

Do you have some link I can read about this? This sounds like a very nice project!
Feb 23 2013
prev sibling next sibling parent "simendsjo" <simendsjo gmail.com> writes:
On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky 
wrote:
(...)
 Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started at a college which teaches Java. Even after using it only for a couple of thousand lines of code, I understand the hatred... Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 17:57 +0100, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky
 wrote:
 (...)
 Anyone still using Java is just so last decade ;)


:-)
 I've managed to dodge Java all these years, but I just started a
 college which teach Java. Even after using it only for a couple
 of thousand lines of code, I understand the hatred.. Feels like
 I'm in a straitjacket. Yes, it might be easy to learn, but damn
 it's verbose!

Java 8 will be a revolution in Java much, much bigger than Java 5. It will either make Java a renewed and refreshed language, a joy to use, or it will be the end of the road. Javas 9, 10, 11, and 12 are planned but may be irrelevant. In the meantime, Groovy already has everything that Java is seeking to include.

--
Russel.
Feb 23 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 23 February 2013 at 16:57:24 UTC, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky 
 wrote:
 (...)
 Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started a college which teach Java. Even after using it only for a couple of thousand lines of code, I understand the hatred.. Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!

The whole trick with Java is that your IDE writes most of the verbosity for you. That's a whole new set of programming techniques to master.
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 22:24 +0400, Dmitry Olshansky wrote:
[…]
 It doesn't help *reading* this verbosity.

Very, very true. Sadly, D has some arcane bits that make it equally difficult to read D code. For example:

example.filter!isLongEnough().array()

Why ! in one place and . in the others? It is all just calling a method on something.

--
Russel.
Feb 23 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sat, 23 Feb 2013 09:17:37 -0800
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:

 On Sat, Feb 23, 2013 at 05:57:23PM +0100, simendsjo wrote:
 On Saturday, 23 February 2013 at 16:44:59 UTC, Nick Sabalausky
 wrote:
 (...)
Anyone still using Java is just so last decade ;)

I've managed to dodge Java all these years, but I just started a college which teach Java. Even after using it only for a couple of thousand lines of code, I understand the hatred.. Feels like I'm in a straitjacket. Yes, it might be easy to learn, but damn it's verbose!

That was my reaction too, when I first starting learning Java. And that is still my reaction today.

When I used it, it was back when v2 was new. To its credit, it *did* teach me to hate C++'s module system and classes. But, yea, "straitjacket" and "verbose" are the right words. (And if an advanced IDE is *required* to make a language usable, then the language sucks.)

I know there have been improvements from v5 on, but by then I had already switched to the [at least at the time] far superior C# (which I've since gotten fed up with too, and abandoned completely in favor of D). Any modern Java improvements are just far too little, far too late. They could fix all its problems tomorrow, but it won't matter because the damage to its reputation has already been done. Plus, does any serious coder really trust Oracle?
 It's not a *bad* language per se. In fact, a lot of it is quite ideal.
 Or rather, idealistic, should I say. Unfortunately, that makes it a
 pain to map to messy real-world situations -- you end up with a
 truckload of wrappers and incrediblyLongAndVerboseIdentifiers just so
 the language can remain "pure". As for being a straitjacketed
 language, this IOCCC entry says it best:
 
 	http://www.ioccc.org/2005/chia/chia.c
 
 ;-)
 

Nice :) It's interesting how much in line that is with the quote from the old D homepage that was a big part of what made D win me over from day one:

"It seems to me that most of the "new" programming languages fall into one of two categories: Those from academia with radical new paradigms and those from large corporations with a focus on RAD and the web. Maybe it's time for a new language born out of practical experience implementing compilers."
Feb 23 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sat, 23 Feb 2013 23:22:54 +0400
Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 
 A trivial example is storage of pairs or tuples. That plus using
 Java's containers makes memory footprint explode. At 100K+ hash-map I
 notice quite high factor of waste. It's some double digits compared
 to plain C arrays. I probably shouldn't care that much since servers
 are having tons of RAM these days

That "tons of RAM" isn't always true. My server is a lower-end VPS, so I have a mere 512 MB RAM: one-eighth as much as my *budget-range laptop*. But it's been fine for me so far, especially since I don't run a JVM on it.

(I know I could use a dedicated physical server and easily get far more RAM, but I *love* VPS hosting - all the many benefits of an external web host (ex: not having to pay for my own T1 or better), but without the near-total lack of control or the impossibility of finding a company that knows what they're doing.)

Anyway, I'd love more RAM and I will certainly get it when I need to, but every doubling of my server's RAM will double my hosting costs - and doing that just for the sake of downgrading from a great language like D to a mediocre one like Java (even if it's J8+) wouldn't make very good business sense ;) Higher costs *plus* lower productivity and morale... yea, not exactly a great deal ;)

My main point, of course, being: "tons of RAM" isn't *always* the case for servers. Thank goodness for D :)
Feb 23 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 14:58 +0100, deadalnix wrote:
[…]
 Comparing ant and maven is not appropriate here as maven is a
 build system + a package manager when ant only builds.

Comparing Ant and Maven is perfectly valid since the goal of both is to build software from source.
 The plugin system of maven is notoriously hellish. This shows 2
 things :
 1/ That extending the build system is really required, and if not
 permitted from the build file themselves, by plugins to extends
 the descriptive capabilities of the descriptive language.
 3/ That the benefice of having the build tool and the package
 manager working together is big as people plebiscite it even when
 the system is way worse on many other aspects.

Thank goodness Gradle is available and showing that whilst Maven Central is a wonderful resource, Maven itself is second rate. Ant is fourth rate. We should note that Gradle and Maven have dependency management, including of artefacts. In the Java-verse an artefact may contain many packages.

--
Russel.
Feb 24 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sat, 2013-02-23 at 10:08 -0700, David Gileadi wrote:
[…]
 I love Gradle!  Official site at http://www.gradle.org, with very good
 docs including getting started tutorials.

Excellent. So do I, but I am biased. But less so than I was three or four years ago.
 In practice I've found it to be concise and readable compared to
 Ant/Maven, and (almost ridiculously) easily extended when necessary.
 IMO if you're doing Java builds it's the hands-down winner.

Also, Gradle handles dependencies properly where Maven doesn't and Ant doesn't even try. Also, Gradle handles multi-module projects very well where Maven doesn't and Ant doesn't even try.

--
Russel.
Feb 24 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 24 Feb 2013 15:49:52 +0400
Dmitry Olshansky <dmitry.olsh gmail.com> wrote:

 c) a VM. Like it or not but corps love VMs and safe isolated 
 environments. For us the fact that it's cross-platform also removes 
 quite a bit of pain, plus no memory corruption bugs etc.

Luckily, modern server hardware should support hardware virtualization, and most languages/libs are pretty good at cross-platform, so this one shouldn't be much of a "reason for JVM" anymore like it might have been ten or so years ago.
Feb 24 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 26 Feb 2013 05:16:29 +0000
Russel Winder <russel winder.org.uk> wrote:

 On Sun, 2013-02-24 at 16:32 -0500, Nick Sabalausky wrote:
 […]
 Luckily, modern server hardware should support hardware
 virtualization, and most languages/libs are pretty good at
 cross-platform, so this one shouldn't be much of a "reason for JVM"
 anymore like it might have been ten or so years ago.

But this is where "virtual != virtual": hardware virtualization is a different thing from virtual machines. The reason for JVM and PVM remains even in a world of server virtualization.

How so? VM is about two things: sandboxing and cross-platform. Hardware virtualization is sandboxing without the overhead of bytecode. As for cross-platform:
 Cross platform is
 not the application developers problem using a virtual machine as it
 is with native codes. This has not changed.

Anytime you actually *need* to take a platform difference into account, a VM will not help you. If anything it might get in your way. In all other cases, forcing handling of platform differences onto the application developer is a failure of library API design - using native does not change that.
Feb 26 2013
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 26 Feb 2013 05:12:59 +0000
Russel Winder <russel winder.org.uk> wrote:

 On Sun, 2013-02-24 at 15:49 +0400, Dmitry Olshansky wrote:
 […]
 You missed the point that these have to be the *real* integer
 constants starting from 0. No frigging magic classes please.

I am not sure why they have to be hardware integers, this is a JVM-based system and hardware integers do not exist. I feel a contradiction between requirements and technology here!

The JVM runs on hardware, therefore hardware integers clearly do exist, whether the JVM chooses to expose them or block them.
 Safe Enum pattern/idiom is indeed all about the representation of the
 symbols being instances of a class. But small immutable objects are
 very cheap these days on the JVM.

As cheap as a real native primitive?
 It is possible Java 9 or Java 10 will remove the primitive types
 completely so that all variables are reference types leaving it to the
 JVM to handle all boxing and unboxing internally thus making things a
 lot more efficient and faster.

How could that possibly be *more* efficient and faster?
 Go has eschewed all dynamic linking and is making this a feature. But
 it has a mechanism for being able to call C from libraries. Python
 has a mechanism for calling C from shared libraries. D is at a
 disadvantage.

D is also able to call C. And it doesn't pretend that missing dynamic lib support is a "feature". D is certainly not at any disadvantage here.
 Go has managed to attract volunteer labour to write in Go new versions
 of everything previously written in C other than actual OSs. But even
 there people are beginning to write OSs in Go.

FWIW, people have already written OSs in D.
Feb 26 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-16 18:10, Sönke Ludwig wrote:

 Preliminary package format documentation:
 http://registry.vibed.org/package-format

BTW, "x64" is not a predefined version identifier. -- /Jacob Carlborg
Feb 16 2013
parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 16.02.2013 19:11, schrieb Jacob Carlborg:
 On 2013-02-16 18:10, Sönke Ludwig wrote:
 
 Preliminary package format documentation:
 http://registry.vibed.org/package-format

BTW, "x64" is not a predefined version identifier.

Thanks, you are right, that was a typo.
Feb 16 2013
prev sibling next sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
 Apart from that we have tried to be as flexible as possible 
 regarding
 the way people can organize their projects (although by default 
 it
 assumes source code to be in "source/" and string imports in 
 "views/",
 if those folders exist).

Many projects put their source code in src/ instead of source/. Can DUB support this?
Feb 16 2013
parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 16.02.2013 20:07, schrieb SomeDude:
 On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
 Apart from that we have tried to be as flexible as possible regarding
 the way people can organize their projects (although by default it
 assumes source code to be in "source/" and string imports in "views/",
 if those folders exist).

Many projects put their source code in src/ instead of source/. Can DUB support this?

Yes, it supports a "sourcePath" field to customize that.
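
A quick sketch of what that could look like (the field name is from the answer above; the values here are hypothetical, and the exact schema is described in the package format documentation):

    {
        "name": "my-library",
        "sourcePath": "src/"
    }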
Feb 16 2013
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Sat, 16 Feb 2013 11:37:00 -0800
schrieb "H. S. Teoh" <hsteoh quickfur.ath.cx>:

 
 Having a standardized way of generating build scripts is good, because
 then the D package manager can target the *end user*'s preferred build
 system, rather than whatever build system the package writers chose.
 The package writers can just specify how to build the stuff, then let
 the D packager generate makefiles for one user, Ant files for another
 user, etc.. This makes it much more friendly to use, and therefore,
 more likely people will actually use it.
 

Having a common standard build tool is always good. But some kinds of projects require custom build scripts (calling swig, generating interface files for IPC stuff, executing the C compiler to check if a function is available to disable/enable additional features, calling pkg-config for some reason, compiling additional C/C++ files, assembling additional files, ...).

I think splitting DUB into a package manager and a build tool would be a good idea. Ship them as one package, so that everyone using the package manager also has the build tool installed, and integrate them as much as possible; the default setup can still work exactly the same as if they were integrated. The benefit of splitting them in this way: you're forced to provide interfaces for the build tool to communicate with the package manager, and every other build tool can use those interfaces as well. This way there are no second-class build systems.

As an example:

package.json:

    {
        "name": "myproject",
        "description": "A little web service of mine.",
        "authors": ["Peter Parker"],
        "homepage": "http://myproject.com",
        "license": "GPL v2",
        "dependencies": {
            "vibe-d": ">=0.7.11"
        },
        "build": "DUB"
    }

build.json:

    {
        "configurations": {
            "metro-app": {
                "versions": ["MetroApp"],
                "libs": ["d3d11"]
            },
            "desktop-app": {
                "versions": ["DesktopApp"],
                "libs": ["d3d9"]
            }
        }
    }

Doing a "dub-pkg install myproject" should fetch the sources, then call "dub-build build.json". dub-build will have to ask the package manager for some information: "dub-pkg package.json --query dependencies", "dub-pkg package.json --query --package=vibe.d --link-path". Or it might require some additional actions: "dub-pkg --install-dependency d3d9".
Feb 16 2013
prev sibling next sibling parent "alex" <info alexanderbothe.com> writes:
On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
  - Full IDE support:

    Rather than focusing on performing the build by itself or 
 tying a
    package to a particular build tool, DUB translates a general
    build receipt to any supported project format (it can also 
 build
    by itself). Right now VisualD and MonoD are supported as 
 targets and
    rdmd is used for simple command line builds. Especially the 
 IDE
    support is really important to not simply lock out people 
 who prefer
    them.

Cool, I guess creating built-in import/export functionality for DUB scripts in Mono-D would be amazing. Will note this down on my todo list.
Feb 16 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sat, 16 Feb 2013 22:21:55 +0100
Sönke Ludwig <sludwig outerproduct.org> wrote:
 My idea for the things you mentioned (swig, c, etc.) was to have a set
 of hooks that can be used to run external tools (invoked before
 build/project file generation, before build or after build). That
 together with your proposed interface should provide all the necessary
 flexibility while putting an emphasis on a standard way to describe
 the build process.

I like a lot of what you've said, but my concern about this part is, does it support things like:

- Dependencies between custom build steps (so that custom build steps aren't needlessly re-run when they're not out-of-date, and can possibly be parallelized)
- Multiple custom build targets
- Multiple custom build configurations

I think those are essential for a real general-purpose build tool.
Feb 16 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/16/13, Sönke Ludwig <sludwig outerproduct.org> wrote:
 Some may already have noticed it as it's mentioned already on the vibe.d
 website and is currently hosted on the same domain as the old VPM registry:

 http://registry.vibed.org/

So can we start using this already or is this just a preview? It says here[1] it's precompiled for win32, but where do we get the executable from? I thought a --recursive clone would fetch all dependencies, but app.d won't compile without vibe. [1] https://github.com/rejectedsoftware/dub Anyway it looks interesting!
Feb 16 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/16/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 So can we start using this already or is this just a preview? It says
 here[1] it's precompiled for win32, but where do we get the executable
 from? I thought a --recursive clone would fetch all dependencies, but
 app.d won't compile without vibe.

 [1] https://github.com/rejectedsoftware/dub

 Anyway it looks interesting!

Ah I didn't spot the download link: http://registry.vibed.org/download

I guess this could be made more visible by adding a link to the download page from the github repository, and maybe putting the { * Using DUB * Download * Publishing packages * Helping developme } section at the top instead of the bottom. Well, the latter is a stylistic issue so I don't mind it being this way. On typical websites I find important stuff at the top, and copyright and privacy policies at the bottom.
Feb 16 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/16/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Ah I didn't spot the download link http://registry.vibed.org/download

I get an error running dub.exe:

    ---------------------------
    dub.exe - Unable To Locate Component
    ---------------------------
    This application has failed to start because libevent.dll was not
    found. Re-installing the application may fix this problem.
    ---------------------------

You might want to list all the dependencies needed for dub or distribute them in a zip.
Feb 16 2013
parent reply =?ISO-8859-1?Q?S=F6nke_Ludwig?= <sludwig outerproduct.org> writes:
 You might want to list all the dependencies needed for dub or
 distribute them in a zip.
 

They are in the .zip now and I listed the dependencies on the download page. Sorry, the distribution stuff is still very much ad-hoc ATM. I'll make some installers once the build process is automated.
 Ah I didn't spot the download link http://registry.vibed.org/download
 
 I guess this could be made more visible by adding a link to the
 download page from the github repository, and maybe putting the { *
 Using DUB * Download * Publishing packages * Helping developme }
 section at the top instead of the bottom.

There now is a link on the github page, plus a note for non-Windows users that libevent/libssl are needed. I also added a short sentence on how to build by hand. The dependencies will also likely change to just libcurl at some point, with a makefile or something to make bootstrapping as simple as possible.

I also agree regarding the navigation links. The page layout will have to be extended a bit to handle more packages anyway (pages, search function, possibly categories), so I'll keep it as a TODO item for a few days and then do both at once.
Feb 16 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 21.02.2013 22:06, schrieb Graham Fawcett:
 On Sunday, 17 February 2013 at 07:23:22 UTC, Sönke Ludwig wrote:
 You might want to list all the dependencies needed for dub or
 distribute them in a zip.

They are in the .zip now and I listed the dependencies on the download page. Sorry, the distribution stuff is still very much ad-hoc ATM. I'll make some installers once the build process is automated.
 Ah I didn't spot the download link http://registry.vibed.org/download

 I guess this could be made more visible by adding a link to the
 download page from the github repository, and maybe putting the { *
 Using DUB * Download * Publishing packages * Helping developme }
 section at the top instead of the bottom.

There now is a link on the github page + a note for non-Windows that libevent/libssl are needed. I also added a short sentence how to build by hand. The dependencies will also likely change to just libcurl at some point with a make file or something to make bootstrapping as simple as possible.

Personally, I think that a libcurl-only dependency is an important goal. Dub's third-party dependencies are far too "modern". For example, I have an older Ubuntu instance I use for testing (10.10), where libevent 2.x simply isn't available (can't run your binary, and can't compile your source). For Vibe, these may be acceptable requirements, but not for a general packaging tool.

I would hope that a future version of Dub wouldn't have any dependencies on Vibe, either. That's an odd bootstrapping arrangement.

Best, Graham

Fully agree, this is planned as the next step (since DUB was a part of vibe.d in the beginning, using vibe.d was just the most natural choice back then).
Feb 21 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 22.02.2013 07:56, schrieb Sönke Ludwig:
 I would hope that a future version of Dub wouldn't have any dependencies
 on Vibe, either. That's an odd bootstrapping arrangement.


Done now on master. Does anyone know which curl package needs to be installed on Ubuntu so that std.net.curl is happy? I tried libcurl4-openssl-dev but get a large list of unresolved symbols.
Feb 22 2013
parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 22.02.2013 10:40, schrieb Sönke Ludwig:
 Am 22.02.2013 07:56, schrieb Sönke Ludwig:
 I would hope that a future version of Dub wouldn't have any dependencies
 on Vibe, either. That's an odd bootstrapping arrangement.


Done now on master. Does anyone know which curl package needs to be installed on Ubuntu so that std.net.curl is happy? I tried libcurl4-openssl-dev but get a large list of unresolved symbols.

On Debian it worked with that same package. Both Ubuntu 11.10 and 12.04 just generate linker errors...

Well, new binaries with a libcurl dependency and without the libevent dependency are available: http://registry.vibed.org/download (no 32-bit Linux version for now)
Feb 22 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sat, 16 Feb 2013 23:58:41 +0100
"Rob T" <alanb ucora.com> wrote:
 
 Why use json (which is a subset of javascript), or ruby, or 
 python, etc? Is there something fundamentally wrong with D that 
 makes it unsuitable for this role?
 

I largely agree, except:

1. For simple projects with trivial build system requirements, D is overkill compared to a purely data-only language.

2. If it's in D, it'll tend to end up tied to a particular range of compiler versions (at least for now). Granted, the same will be true of whatever D-based project it's building, but a script written in a more stable language (as data-only languages generally tend to be) could conceivably be able to actually detect the installed compiler and react accordingly.

Of course, #2 could be easily mitigated if something like DVM were totally standard and then (at least for Linux, maybe not Windows) your D-based buildscript started with something like:

    #!dmd-2.058
Feb 16 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 03:25:04 +0100
"SomeDude" <lovelydear mailmetrash.com> wrote:

 On Saturday, 16 February 2013 at 22:49:27 UTC, Nick Sabalausky 
 wrote:
 
 I like a lot of what you've said, but my concern about this 
 part is,
 does it support things like:

 - Dependencies between custom build steps (so that custom build 
 steps
 aren't needlessly re-run when they're not out-of-date, and can
 possibly be parallelized)

One problem with parallelizing builds right now is that the risk is high they fail with out-of-memory errors before they finish. Unless it's possible to parallelize them on different build boxes, or one owns a build machine with 64 GB of RAM.

For compiling D sources yea, but not necessarily for other custom build steps.
Feb 16 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 09:24:08 +0000
Russel Winder <russel winder.org.uk> wrote:
 
 Using JSON (YAML, whatever) is a recipe for failure. The syntax is
 data exchange facing, whereas writing and maintaining a build script
 (even if it is only 4 lines long), is a human activity. Do not
 underestimate the reaction "why do I have to write this data exchange
 format?" The rapid demise of XML, Ant, Maven, etc. is testament to
 people awakening to the fact that they were forced to use manually a
 language designed for another purpose.
 

 
 If all the energy around DUB and Orbit leads to the situation where D
 cannot be used to create an internal DSL for describing build and
 package/artefact management, then D has failed.
 

I half-agree with you. Where I disagree is with the idea that it's an issue of "embedded DSL (good) vs data language (bad)". I think the real issue is simply having good clean syntax for what you need to accomplish.

Note that your examples of "rapid demise" (XML, Ant, Maven) are all XML, which is notoriously over-verbose. JSON has much the same problem, just to a lesser degree: it's a subset of human-intended JavaScript, yes, but it happens to be an unfortunately verbose subset (although not to the same extreme degree as XML).

Then on the flipside, we have the example of INI files: definitely a purely data language, definitely not an embedded DSL, and yet that's never been a hindrance for it: it's been a lasting success for many things. And the only time anyone complains about it is when more power is needed. (And no, I'm not suggesting DUB or Orbit use INI files. Again, specifically because more power is needed here.)

So I agree JSON/YAML carries an unfortunate risk of turning people off, but because of excess syntax, not because it's a data language or because it's not embedded in D. I think SDL (Simple Declarative Language) hits the sweet spot:

http://sdl.ikayzo.org/display/SDL/Language+Guide

FWIW, I just resumed working on a D parser for SDL. Even if SDL doesn't get used for DUB or Orbit or dmd.conf, I think it'll still be a great thing to have available. It's such a simple grammar, I don't think it'll take long to reach a usable point.
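
For a concrete feel (purely my own sketch, not any official DUB format), the package example from the start of this thread might look like this in SDL:

    name "my-library"
    description "A little web service of mine."
    dependencies {
        mysql-native ">=0.0.7"
    }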
Feb 17 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 14:28:14 +0100
"Peter Sommerfeld" <noreply rubrica.at> wrote:

 Am 17.02.2013, 14:03 Uhr schrieb Jacob Carlborg:
 
 On 2013-02-16 23:19, Peter Sommerfeld wrote:

 Another issue: I understand why you are using json but it is
 not the best suited format IMHO. D put some restriction on
 module names, thus the format can be simplified. Compare:

Inventing a new format is pointless. If you want it less verbose, YAML is an alternative.

If you prefer indentation. I would never touch it.

Plus, some of its syntax is rather non-intuitive:

    !!map {
      ? !!str "---" : !!str "foo",
      ? !!str "..." : !!str "bar"
    }

WTF?
Feb 17 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
Don't really have a vision how it will look like in practice 
based on released data, but looking at current discussion I'd 
like to state that I am most interested in a centralized 
library/app list, dependency tracker and an easy way to get 
sources. Support for popular build systems may be fine as a 
bonus, but only as an added feature, with no coupling. I'd hate 
to see a ruby gem (or similar) hell pushed as a "D way". 
Packaging is best done (and should be) by the OS package manager, not 
hundreds of language-specific managers. A good language package 
manager in my opinion is just an information source for OS 
package builders.
Feb 17 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-17 15:40, Dicebot wrote:
 Don't really have a vision how it will look like in practice based on
 released data, but looking at current discussion I'd like to state that
 I am most interested in a centralized library/app list, dependency
 tracker and an easy way to get sources. Support for a popular build
 systems may be fine as a bonus, but only as an added feature, with no
 coupling. I'd hate to see a ruby gem (or similar) hell pushed as a "D
 way". Packaging is best done (and should be) by OS package manager, not
 hundreds of languages-specific managers. Good language package manager
 in my opinion is just an information source for OS package builders.

There are no package managers out of the box for Mac OS X or Windows. -- /Jacob Carlborg
Feb 17 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 16:13, Dicebot wrote:

 Thus I admit that adding _possibility_ of get&build in one box may be
 useful. But it makes no sense to make linux users suffer (by making it
 The Way to Do Things)  because of design mistakes of other OSes. After
 all, as far as I am aware, Windows gets a package manager in form of
 application store in last released version, doesn't it?

1. I highly doubt that you can put libraries there
2. So suddenly D will require Windows 8 to use? Not going to happen

-- /Jacob Carlborg
Feb 17 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-17 22:43, Dicebot wrote:

 What are the limitations then? I am only run-by Windows user, so it was
 actually a question.

As far as I'm aware it's only for applications. I highly doubt you can put libraries and tools (command line) there. Same goes for the App Store on Mac OS X. -- /Jacob Carlborg
Feb 18 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-18 00:51, David Nadlinger wrote:

 I think any D package management tool needs to be able to handle
 multiple coexisting compiler configurations, or at least allow being
 used for multiple installation (similar to RubyGems and rbenv/rvm).

Orbit and DVM :) -- /Jacob Carlborg
Feb 18 2013
prev sibling next sibling parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 18.02.2013 00:51, schrieb David Nadlinger:
 On Sunday, 17 February 2013 at 14:40:26 UTC, Dicebot wrote:
 Packaging is best done (and should be) by OS package manager, not
 hundreds of languages-specific managers. Good language package manager
 in my opinion is just an information source for OS package builders.

D does not lend itself well to OS-level packaging, because different compilers as well as different versions of the same compiler will not be binary-compatible in the foreseeable future. I think any D package management tool needs to be able to handle multiple coexisting compiler configurations, or at least allow being used for multiple installation (similar to RubyGems and rbenv/rvm). David

On top of that, there are also certain use cases which usually are not possible with the typical OS package managers:

- Installing different versions of the same library for different applications (in theory the right versioning scheme would solve that, but in practice this only works for mature/stable libraries)
- Installation by unprivileged users
- Working on a library and a dependent application at the same time (DUB supports "local" package sources for that) - normally this would require uploading a new version of the library after each change and then reinstalling, or completely leaving the package system and manually setting different import paths during development
- Automatic dependency installation not just for published packages, but also for local/closed/WIP ones

Then there are also a lot of other D-specific things to consider, such as compiler flags/version identifiers that may be defined in a single package, but need to be applied throughout all dependencies and the application itself to avoid compiler/linker errors in the end.

But on the other hand, adding support for automatic OS package generation may be a good option. All the necessary information would be there.
Feb 18 2013
prev sibling next sibling parent Marco Nembrini <marco.nembrini.co gmail.com> writes:
On 18.02.2013 08:32, Nick Sabalausky wrote:
 On Sun, 17 Feb 2013 15:40:25 +0100
 "Dicebot" <m.strashun gmail.com> wrote:
 Packaging is best done (and should be) by OS package manager, not
 hundreds of languages-specific managers. Good language package
 manager in my opinion is just an information source for OS
 package builders.

I'm not real big on the idea of OS package managers. Not when Unix is in the picture anyway. I'm getting really fed up with software that has a "download / install" webpage populated with totally different instructions for an endless, yet always incomplete, list of Linux variants. And *maybe* BSD.

And then on top of that, the poor *project* maintainers have to maintain all of that distro-specific cruft. Unless they're lucky and the project is big enough that the distro maintainers are willing to waste *their* time converting the package into something that only works on their own distro.

I believe I can sum up my thoughts with: "Fuck that shit."

Are you aware of the 0install project (http://zero-install.sourceforge.net/)? It seems to me that it solves most packaging problems while still being able to collaborate with the OS package manager if needed.

From the project page: "Zero Install is a decentralised cross-distribution software installation system. Other features include full support for shared libraries, sharing between users, and integration with native platform package managers. It supports both binary and source packages, and works on Linux, Mac OS X, Unix and Windows systems. It is fully Open Source."

-- Marco Nembrini
Feb 18 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-19 08:19, Dicebot wrote:

 You can. Debian is weird though because it is done via dpkg, not apt-get.
 In Arch it is as simple to "pacman -U package-file" vs "pacman -S
 name-in-repo".

Isn't apt-get built on top of dpkg or something like that? -- /Jacob Carlborg
Feb 19 2013
parent Matt Soucy <msoucy csh.rit.edu> writes:
On 02/19/2013 04:18 AM, Dicebot wrote:
 On Tuesday, 19 February 2013 at 08:34:13 UTC, Jacob Carlborg wrote:
 On 2013-02-19 08:19, Dicebot wrote:

 You can. Debian is weird though because it is done via dpkg, not
 apt-get.
 In Arch it is as simple to "pacman -U package-file" vs "pacman -S
 name-in-repo".

Isn't apt-get built on top of dpkg or something like that?

It is. AFAIK apt-get is dpkg plus getting packages via repo sources. But the last time I searched, I found no way to proxy a dpkg call to install a local package via apt-get, which felt a bit weird. Probably my failure at searching though.

In Red Hat & Fedora land, it's a somewhat similar situation. RPMs are (usually) installed with the rpm command:

    rpm -i dmd-2.062-0.fedora.x86_64.rpm

but yum (also, I believe, dnf for Fedora 18 users) can also install them, and do dependency checking:

    yum install dmd-2.062-0.fedora.x86_64.rpm
Feb 19 2013
prev sibling parent FG <home fgda.pl> writes:
On 2013-02-20 11:32, Moritz Maxeiner wrote:
 As for the X11 stuff, that's still more manual than I'd like when it
 comes to X11. (Like I said, I've had *BIG* problems dealing directly
 with X11 in the past.) But I may give it a try. I'm sure it's improved
 since the nightmares I had with it back around 2001/2002, but I
 still worry *how* much improved... Heck, I've even had X11 problems as
 recently as Ubuntu 10.

Ah, okay, that's strange but I can understand that. The only problem I ever had with X was that I had to add an InputClass to the evdev file, because evdev otherwise kept refusing to enable USB mice.

Diving deeper into the OT... Not strange at all. I had similar experiences around 2001 when I bought a new imitation-of-ATI GFX card -- first there were no drivers for it and, when they finally showed up (proprietary and others), after weeks of configuring Xorg I still couldn't make 3D acceleration work and ended up without it for the next few years.

Xorg isn't the only problem. Even today I can't fire up the newest Ubuntu install CDs without the screen going blank. That's how bad things are with X and even the framebuffer console. So I am not surprised hearing about problems in this domain.

As for package managers, I'm fine with using the OS ones for almost everything and Python's own system for its extra modules (only because I consider it an ecosystem of its own). Still, I compile some programs and libs myself (when their most current version is required), but only when they aren't a dependency for something I wouldn't want to compile on my own.

I am still not convinced why D would need a package manager. Why not use a standardized build script with dependency checks, or just use CMake like everybody else does?
Feb 20 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 17 February 2013 at 15:08:08 UTC, Jacob Carlborg wrote:
 There are no package managers out of the box for Mac OS X or 
 Windows.

Thus I admit that adding the _possibility_ of get&build in one box may be useful. But it makes no sense to make Linux users suffer (by making it The Way to Do Things) because of design mistakes of other OSes. After all, as far as I am aware, Windows got a package manager in the form of an application store in the last released version, didn't it?
Feb 17 2013
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Sun, 17 Feb 2013 00:20:48 -0800
schrieb Jonathan M Davis <jmdavisProg gmx.com>:

 On Sunday, February 17, 2013 09:12:00 S=C3=B6nke Ludwig wrote:
 BTW, I think YAML as a superset of JSON is also a good contender
 with nice syntax features, but also much more complex.

It's also whitespace-sensitive, which is downright evil IMHO. I'd take JSON over YAML any day.

- Jonathan M Davis

Are you sure? YAML 1.1 required whitespace after comma and in some more cases, but YAML 1.2 dropped that to be 100% compatible with JSON. If you write JSON you have valid YAML and you can write YAML that is valid JSON. http://en.wikipedia.org/wiki/YAML#JSON http://en.wikipedia.org/wiki/YAML#cite_note-9
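
To illustrate (a sketch based on the package example from earlier in the thread): the JSON flow form is itself valid YAML 1.2, and the indentation-based block form below it represents the same data:

    {"name": "my-library", "dependencies": {"mysql-native": ">=0.0.7"}}

    name: my-library
    dependencies:
      mysql-native: ">=0.0.7"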
Feb 17 2013
prev sibling next sibling parent Robert <jfanatiker gmx.at> writes:
 There are no package managers out of the box for Mac OS X or Windows.
 

That's right, so we have to provide a custom way. Which is also necessary for non-root installs and for experimenting with and trying out (your own) packages.

Nevertheless, I think in the long run it should not be a problem to integrate with http://openbuildservice.org/ in order to also provide packages for distributions. Ideally the build system's configuration contains everything needed to automatically create a spec file for Fedora, for example.
Feb 17 2013
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Sun, 17 Feb 2013 14:02:02 +0100
schrieb Jacob Carlborg <doob me.com>:

 On 2013-02-16 21:02, Johannes Pfau wrote:
 
 I think splitting DUB into a package manger and build tool would be
 a good idea.

Exactly. But I see no reason for communicating with processes. Just make them both into libraries and call plain functions.

As long as the build script is written in D (or probably even C/C++) a library is really the best solution. If you want to support other build scripts (python/waf, scons, ...) providing a command line tool is nice.
Feb 17 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sun, 2013-02-17 at 16:08 +0100, Jacob Carlborg wrote:
[…]
 There are no package managers out of the box for Mac OS X or Windows.

The MacPorts, Fink, and Brew folks almost certainly dispute the first of those claims. ;-)

--
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 17 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 16:14:25 +0100
Johannes Pfau <nospam example.com> wrote:

 Am Sun, 17 Feb 2013 00:20:48 -0800
 schrieb Jonathan M Davis <jmdavisProg gmx.com>:
 
 It's also whitespace-sensitive, which is downright evil IMHO. I'd
 take JSON over YAML any day.
 

Are you sure? YAML 1.1 required whitespace after comma and in some more cases, but YAML 1.2 dropped that to be 100% compatible with JSON. If you write JSON you have valid YAML and you can write YAML that is valid JSON.

The JSON-compatible subset of YAML is whitespace-insensitive, but indent-scoping is one of the key features YAML adds on top of that.
Feb 17 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 16:13:47 +0100
"Dicebot" <m.strashun gmail.com> wrote:
 After all, as far as I am aware, Windows gets a 
 package manager in form of application store in last released 
 version, doesn't it?

That doesn't remotely count as a real package manager. (Then again, there isn't much in Win8 that counts for half a damn... And that's coming from a Windows user.)
Feb 17 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sun, 17 Feb 2013 15:40:25 +0100
"Dicebot" <m.strashun gmail.com> wrote:
 Packaging is best done (and should be) by OS package manager, not 
 hundreds of languages-specific managers. Good language package 
 manager in my opinion is just an information source for OS 
 package builders.

I'm not real big on the idea of OS package managers. Not when Unix is in the picture anyway. I'm getting really fed up with software that has a "download / install" webpage populated with totally different instructions for an endless, yet always incomplete, list of Linux variants. And *maybe* BSD.

And then on top of that, the poor *project* maintainers have to maintain all of that distro-specific cruft. Unless they're lucky and the project is big enough that the distro maintainers are willing to waste *their* time converting the package into something that only works on their own distro.

I believe I can sum up my thoughts with: "Fuck that shit."
Feb 17 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 17 February 2013 at 21:32:15 UTC, Nick Sabalausky 
wrote:
 On Sun, 17 Feb 2013 15:40:25 +0100
 "Dicebot" <m.strashun gmail.com> wrote:
 Packaging is best done (and should be) by OS package manager, 
 not hundreds of languages-specific managers. Good language 
 package manager in my opinion is just an information source 
 for OS package builders.

I'm not real big on the idea of OS package managers. Not when Unix is in the picture anyway. I'm getting really fed up with software that has a "download / install" webpage populated with totally different instructions for an endless, yet always incomplete, list of Linux variants. And *maybe* BSD. And then on top of that, the poor *project* maintainers have to maintain all of that distro-specific cruft. Unless they're lucky and the project is big enough that the distro maintainers are willing to waste *their* time converting the package into something that only works on their own distro. I believe I can sum up my thoughts with: "Fuck that shit."

In a perfect world, that software should have only one download link - to the sources. The habit of getting stuff from some (probably official) web page is a Windows habit. I have no idea why .deb and .rpm files are provided so often; I have never used a single one. Probably habit again. Then, if your project is small, it is in your interest to maintain packages for the distros you want (minimal effort compared to the software maintenance itself). If it is big, someone will be willing to do it for you. Simple, and it naturally works better with a bigger user base. In return you get one single way to get software from people you may somewhat trust, plus sane dependency tracking. Beats anything for me, and the recent move towards various repo-like "stores" only proves it.
Feb 17 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 17 February 2013 at 21:19:42 UTC, Nick Sabalausky 
wrote:
 On Sun, 17 Feb 2013 16:13:47 +0100
 "Dicebot" <m.strashun gmail.com> wrote:
 After all, as far as I am aware, Windows gets a package 
 manager in form of application store in last released version, 
 doesn't it?

That doesn't remotely count as a real package manager. (Then again, there isn't much in Win8 that counts for half a damn...And that's coming from a Windows user.)

What are the limitations then? I am only a run-by Windows user, so it was actually a question.
Feb 17 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Sunday, 17 February 2013 at 14:40:26 UTC, Dicebot wrote:
 Packaging is best done (and should be) by OS package manager, 
 not hundreds of languages-specific managers. Good language 
 package manager in my opinion is just an information source for 
 OS package builders.

D does not lend itself well to OS-level packaging, because different compilers, as well as different versions of the same compiler, will not be binary-compatible in the foreseeable future. I think any D package management tool needs to be able to handle multiple coexisting compiler configurations, or at least allow multiple installations (similar to RubyGems and rbenv/rvm).

David
Feb 17 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:

On Sun, 2013-02-17 at 16:32 -0500, Nick Sabalausky wrote:
[…]
 I'm not real big on the idea of OS package managers. Not when Unix is
 in the picture anyway. I'm getting really fed up with software that has
 a "download / install" webpage populated with totally different
 instructions for an endless, yet always incomplete, list of Linux
 variants. And *maybe* BSD. And then on top of that, the poor *project*
 maintainers have to maintain all of that distro-specific cruft. Unless
 they're lucky and the project is big enough that the ditro maintainers
 are willing to waste *their* time converting the package into something
 that only works on their own distro.
 I believe I can sum up my thoughts with: "Fuck that shit."

Generally I am of the opposite view: using the distribution's package management is by far the best way (*). When a language decides it has to ignore platforms and provide its own, I generally think "Another introverted, self-obsessed language, ignoring the platform's management structures."

Trying to cover every platform is, though, a real pain in the proverbials, I totally agree with that. But let's look at the need for coverage: Windows, OS X, Debian (hence Ubuntu, Mint), Fedora (hence RHEL, CentOS). After that the user base is currently so low that people probably expect to fend for themselves. Windows and OS X require handling because of the very nature of the infrastructure. Debian and Fedora need a good relationship with a recognized packager to get stuff into the distributions and repackaged for each version. This also goes for MacPorts and Brew (I am guessing Fink is dying?).

(*) Debian has GCC and Python management taped nicely: multiple separate versions installable side-by-side and usable. On the other hand, Debian totally sucks at handling anything related to Java, because they insist on one and only one version of an artefact. It seems they believe the WORA fiction.

--
Russel.
Feb 17 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 06:59:39 +0000
Russel Winder <russel winder.org.uk> wrote:

 On Sun, 2013-02-17 at 16:32 -0500, Nick Sabalausky wrote:
[…]
 I'm not real big on the idea of OS package managers. Not when Unix
 is in the picture anyway. I'm getting really fed up with software
 that has a "download / install" webpage populated with totally
 different instructions for an endless, yet always incomplete, list
 of Linux variants. And *maybe* BSD. And then on top of that, the
 poor *project* maintainers have to maintain all of that
 distro-specific cruft. Unless they're lucky and the project is big
 enough that the distro maintainers are willing to waste *their* time
 converting the package into something that only works on their own
 distro.
 I believe I can sum up my thoughts with: "Fuck that shit."

Generally I am of the opposite view: using the distribution's package management is by far the best way (*). When a language decides it has to ignore platforms and provide its own, I generally think "Another introverted, self-obsessed language, ignoring the platform's management structures." Trying to cover every platform is, though, a real pain in the proverbials, I totally agree with that. But let's look at the need for coverage: Windows, OS X, Debian (hence Ubuntu, Mint), Fedora (hence RHEL, CentOS). After that the user base is currently so low that people probably expect to fend for themselves. Windows and OS X require handling because of the very nature of the infrastructure. Debian and Fedora need a good relationship with a recognized packager to get stuff into the distributions and repackaged for each version. This also goes for MacPorts and Brew (I am guessing Fink is dying?). (*) Debian has GCC and Python management taped nicely: multiple separate versions installable side-by-side and usable. On the other hand, Debian totally sucks at handling anything related to Java, because they insist on one and only one version of an artefact. It seems they believe the WORA fiction.

I do like when I can just "apt-get install whatever" and be done. (Assuming I happen to know the package name used, and they're not always entirely intuitive). But then you always come across stuff that isn't in the repos or, almost as bad, requires jerking around with sources.list and then installing a key. May as well just download & unzip a file and be done with it (if Linux actually had real binary compatibility).

But even that problem isn't because of anyone doing anything wrong, or app/lib developers failing to "see the light" of OS package managers. It's because of all the roadblocks in the way of making "proper" packages and getting them into the official repos. And keeping them properly up-to-date. And *those* roadblocks *aren't* the fault of the OS people or the OS package manager people. There isn't really anything they *can* do about it - it's all natural consequences of the whole OS-level package manager system. The fault lies with the model itself.

In other words, having OS package managers be the be-all-end-all of package management is a wonderful idea in theory, but it's a pie-in-the-sky dream. It's just not realistically feasible, because the model just doesn't work well enough at scale: getting stuff into the right package formats, knowing how to even do that, getting it into the official repos, getting it *past* whatever testing/staging repos there may be, and then actually *to* people, and then getting updates promptly handled. And that's just public OSS stuff; there's a whole different set of issues for anything privately-distributed.

It's all far too much to expect from the lib/app devs who already have more than enough to do. So they don't do it. Why deal with all that red tape when you can just toss up a zip, while also giving your users the benefit of not having to wait for all those middlemen, or deal with a totally different set of install instructions for each system, or creating extra hurdles for less-major OSes, etc.? It's a win-win for devs and their users. So ultimately, OS-level package managers *encourage* developers not to use them.

So yea, I like "apt-get install foobar" when it works, but it *can't* always work. The whole thing is just a big broken dynamic that only works well for the big-name packages.
Feb 18 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Monday, 18 February 2013 at 11:51:14 UTC, Nick Sabalausky 
wrote:
 ...

Ugh, have you ever tried to do it in practice? Because I have been maintaining a few packages, primarily for Arch Linux, and it is not even remotely close to what you say. There may be some bureaucratic headache to get stuff into the official Debian repos, but you can always create your own mirror, like it was done here for D stuff: http://code.google.com/p/d-apt/wiki/APT_Repository . Packaging itself is always simple and requires close to zero effort. And saying you don't want to learn the OS package manager to distribute stuff for it is like saying you don't want to learn the OS kernel API to write drivers for it. Sure, it is so much better to be forced to learn dozens of language-specific package & build managers just to get a single application working. Software ecosystems are evil.
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 14:02:16 +0100
"Dicebot" <m.strashun gmail.com> wrote:

 On Monday, 18 February 2013 at 11:51:14 UTC, Nick Sabalausky 
 wrote:
 ...

Ugh, have you ever tried to do it in practice?

'Course not, because why should I? With OS-independent language package managers, all I have to do is toss into a trivial ./build script the few commands (if even that much) needed to grab the dependencies (via DUB/Orbit/Gems/whatever) and launch my buildscript, with only minor tweaks for the build.bat variant (which most Win users won't even need to use at all, since a prebuilt .exe is pretty much guaranteed to work out-of-the-box). That's *all* I need for it to work for *everyone*. *And* nobody needs to deal with a long list of "If you're on OS A do this, if you're on OS B do this, OS C do that, etc."

Or I can use the OS-based stuff and have it only work for *some* people on *some* OSes. Yea, that sounds really worthwhile. Even if it *is* super-simple, as a lib or app developer I still have no reason to even do so at all in the first place.
 Because I have 
 been maintaining few packages, primarily for Arch Linux, and it 
 is not even remotely close to what you say. There may be some 
 bureaucratic headache to get stuff into official Debian repos, 
 but you always can create your own mirror, like it was done here 
 for D stuff: http://code.google.com/p/d-apt/wiki/APT_Repository . 
 Packaging itself is always simple and requires close to zero 
 efforts.
 

Yea, I'm sure it is a lot simpler if you're primarily targeting just one linux distro and little else. ;) Not simpler for your users, though. :/
 And saying you don't want to learn OS package manager to 
 distribute stuff for it is like saying you don't want to learn OS 
 kernel API to write drivers to it. Sure it is so better to be 
 forced to learn dozens of language-specific package & build 
 managers to just get a single application working. 

If you're dealing with a handful of different languages just to write one program, you're already doing it wrong to begin with. (That's the generalized "you", I don't mean you specifically). And even if that weren't the case, how is needing to deal with a variety of different OS-specific package managers just to release one program any better?
Feb 18 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Monday, 18 February 2013 at 13:42:49 UTC, Nick Sabalausky 
wrote:
 ...

You are mixing together programmer needs and end-user needs. As the package manager takes care of dependencies, it naturally leaks through as a mandatory tool for anyone who wants to install your application. And guess what? Those freaking ruby gems almost never "just work" because of some forgotten requirements. I stumbled upon this just a week ago, trying to install Redmine on Arch Linux: the ImageMagick version in the repo was too new, and the whole build/install system just died in pain with some obscure error. I was forced to study the rake file to tweak dependencies. A lot of pain, instead of just having a clean, full dependency list that I can take care of myself and some generic, widely adopted build system.

If you want to target a specific OS or distro, you'll need to learn a good part of its design and environment anyway, at least to adjust your code. If you have not done that, better not pretend your software actually targets it, and let enthusiasts take care of it. I have done PKGBUILDs and .debs so far, and those are damn simple compared to writing proper cross-platform code.
Feb 18 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Monday, 18 February 2013 at 14:14:30 UTC, Dicebot wrote:
 ...

end-user who has a single OS and is forced to deal with myriads of different package systems or, even better, the binary bloat of programs that try to ship every single dependency with them.
Feb 18 2013
prev sibling next sibling parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Just added a new version with support for GDC and LDC (e.g.
--compiler=gdc or --compiler=ldc2) and some fixes for the VisualD and
Mono-D project file generation. It builds now directly using the
specified compiler by default - rdmd can still be used by passing --rdmd
to the command line.

"dub init" now also creates example source code that actually compiles
(without vibe.d as a dependency) and invoking dub outside of a package
directory now gives a nicer error message.

(Binaries: <http://registry.vibed.org/download>)
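Based on the options named in this post, invocations of the new version might look like the following sketch (assuming a dub binary is on the PATH; option spellings are as given above):

```shell
# Build/run with an explicitly specified compiler (direct invocation is now the default):
dub --compiler=gdc
dub --compiler=ldc2

# Fall back to building through rdmd instead:
dub --rdmd

# Create a new package skeleton with example source that compiles on its own:
dub init
```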
Feb 18 2013
prev sibling next sibling parent Martin Nowak <code dawg.eu> writes:
On 02/16/2013 06:10 PM, Sönke Ludwig wrote:
 Some may already have noticed it as it's mentioned already on the vibe.d
 website and is currently hosted on the same domain as the old VPM registry:

 http://registry.vibed.org/

Thanks for finally tackling this important necessity in a pragmatic manner.
 http://registry.vibed.org/package-format

Meets what you'd expect from a package tool, in a two-page spec.
 https://github.com/rejectedsoftware/dub

The code looks modular/extensible enough to add whatever specialized needs will come up.
Feb 18 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/16/13, Sönke Ludwig <sludwig outerproduct.org> wrote:
 http://registry.vibed.org/

Why does dub automatically try to build an executable? I wanted to create a library package, but I don't see from the docs how to do this. I also think it's counter-intuitive that running dub with no arguments will autobuild the project; this should be a separate 'build' command.
Feb 18 2013
parent =?ISO-8859-1?Q?S=F6nke_Ludwig?= <sludwig outerproduct.org> writes:
Am 18.02.2013 21:49, schrieb Andrej Mitrovic:
 On 2/16/13, Sönke Ludwig <sludwig outerproduct.org> wrote:
 http://registry.vibed.org/

Why does dub automatically try to build an executable? I wanted to create a library package but I don't see from the docs how to do this.

The current concept is to view all libraries as source libraries, for various reasons:

- No ABI compatibility between different compiler vendors/versions
- Different version statements/compiler flags of the main project or any dependency may require a recompile anyway
- Compilation times are so much faster than C++ that the question is whether there is any advantage in most cases

Closed source projects are different in that regard, though, and it probably makes sense to add library builds anyway. That said, "dub generate visuald" currently creates a project per dependency and builds them as a static library, but that's just a special case.
 I also think it's counter-intuitive that running dub with no arguments
 will autobuild the project; this should be a separate 'build'
 command.
 

Just running "dub" implies "dub run" so that it is quick to run a project during development. Of course it could also do nothing, but that would waste an opportunity to do something useful with only four keystrokes. But there is also "dub build" and "dub run" to make that explicit (it should be mentioned in the help screen that "run" is the default, though).
Feb 18 2013
prev sibling next sibling parent reply "Moritz Maxeiner" <moritz ucworks.org> writes:
On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also 
 announce
 the package manager that we have been working out based on the 
 simple
 VPM system that has always been in vibe.d. I don't really like 
 stepping
 into competition with Jacob here (*), but the approach is 
 different
 enough that I think it should be put on the table.

Great work, thank you! I've taken the liberty of creating Archlinux packages in the AUR for DUB, in case anyone is interested: Release version: https://aur.archlinux.org/packages/dub/ Trunk version: https://aur.archlinux.org/packages/dub-git/
Feb 18 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 18.02.2013 22:25, schrieb Moritz Maxeiner:
 On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also announce
 the package manager that we have been working out based on the simple
 VPM system that has always been in vibe.d. I don't really like stepping
 into competition with Jacob here (*), but the approach is different
 enough that I think it should be put on the table.

Great work, thank you! I've taken the liberty of creating Archlinux packages in the AUR for DUB, in case anyone is interested: Release version: https://aur.archlinux.org/packages/dub/ Trunk version: https://aur.archlinux.org/packages/dub-git/

Thanks! I've listed it on the github page: https://github.com/rejectedsoftware/dub#arch-linux BTW, the build process has been simplified now - dependencies are just DMD+libcurl and building works using "./build.sh" instead of using "vibe build".
Feb 22 2013
parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 22.02.2013 14:31, schrieb Moritz Maxeiner:
 On Friday, 22 February 2013 at 11:01:12 UTC, Sönke Ludwig wrote:
 Thanks! I've listed it on the github page:
 https://github.com/rejectedsoftware/dub#arch-linux

 BTW, the build process has been simplified now - dependencies are just
 DMD+libcurl and building works using "./build.sh" instead of using
 "vibe build".

Thanks for the news, I've updated both packages with the correct dependencies and bumped the release to 0.9.7. Btw., is there some way on GitHub to be notified (only) about new tags of a project (so I can update the AUR release package ASAP)?

I think there is none, just a commit feed that doesn't include tags. But once I have set up automatic builds, I could add a small script that sends a notification or simply uploads the new package file.
Feb 23 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 15:17:05 +0100
"Dicebot" <m.strashun gmail.com> wrote:

 On Monday, 18 February 2013 at 14:14:30 UTC, Dicebot wrote:
 ...

end-user who has a single OS and is forced to deal with myriads of different package systems

First of all, I *am* thinking about the end-user. OS-based package managers frequently *do* suck for the end-user: http://cran.r-project.org/bin/linux/ubuntu/README.html

Look at all that idiotic bullshit the users have to deal with just for something that *could* have been a trivial download/extract/run (or an even simpler "wget ... -O fetch-and-build.sh && ./fetch-and-build.sh"). And that page is *just* for Debian/Ubuntu. And then there's stuff like this, which isn't much better: http://www.claws-mail.org/downloads.php?section=downloads

Secondly, where do you get that crazy idea that all end-users only ever have one OS to deal with? ATM, I've got a couple Windows machines, a Kubuntu desktop (old), and a Debian 6 server. And that's not counting VMs. Other people have even more than that, and it doesn't help anyone to have a totally different set of instructions for doing the same damn thing on each one. *I* can install any version of DMD I want on any of my systems by doing this:

dvm install 2.0xx

Same damn task, same damn command, character-for-character, on freaking everything. You're seriously going to try to tell me that's *worse* for me than having to do it totally differently on each system?

And finally, there's two types of users here, lib users and app users:

Libs: If they're interested in your lib, then they're already familiar with the language's package manager.

Apps: If your user has to deal directly with any of the language-based package managers involved, then your buildscript sucks. But that's just for actually compiling. If your user has to deal with any language's package manager merely to *run* your app, then, well again, you're doing something wrong. (Such as maybe using Python: I'll agree that Gem is fucking shit - more than half the times I've tried to install something it would just vomit out a Traceback.)

Language-based package managers are a developer thing, not an end-user thing. Even if your app is using a language-based package manager, that doesn't mean the end-user even needs to touch it directly.
or, even better, binary bloat of 
 programs that try to ship every single dependency with it.

Right, binary bloat in this >1GB HDD age is sooo much worse than running into unexpected version incompatibilities and conflicts.
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 18:16:00 -0500
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:
[...]

Let me put it this way: My issues with OS-specific package managers
mainly boil down to:

- They're OS-specific.

- Anything that isn't part of the official public repo(s) is a
second-class citizen. Ex: AFAIK, You can't really do anything like
"apt-get install http://example.com/foo/bar-2.7" or "apt-get install
./private-package-that-joe-sent-me-via-email".

- No private, non-systemwide, restricted-user installations (AFAIK).

- [This one might be Debian-specific:] Totally different repos and
version availability depending on which OS version.

- <rant> [Definitely Debian-specific:] They can't even name the damn
multiple repos sanely: "woody", "squeeze", "sarge", are you fucking
kidding me? They're not even alphabetical, for fuck's sake! Just
give me ".../repos/debian-6" please, and keep your idiotic versioning
pet names to yourselves. Also, backports should be enabled by
default, especially for an OS like Debian with infrequent official
releases.</rant>

If those problems are fixed, then *great*, I'll jump right on board
both feet first. But until then, there will always be legitimate
reasons to side-step the OS-based package managers, for the sakes of
*both* user and developer.

FWIW: I'll take the OS-level package managers *anyway* over the bad-old
days of ~2000/2001 when we had all that fun of dealing with individual
rpms/debs and such manually. Or autotools-based src-only releases that
barely did any dependency management at all and just barfed out
compiler errors if *everything* wasn't already set up perfectly.
Feb 18 2013
prev sibling next sibling parent "Moritz Maxeiner" <moritz ucworks.org> writes:
On Tuesday, 19 February 2013 at 00:08:40 UTC, Nick Sabalausky 
wrote:
 On Mon, 18 Feb 2013 18:16:00 -0500
 Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:
 [...]

 - Anything that isn't part of the official public repo(s) is a
 second-class citizen. Ex: AFAIK, You can't really do anything 
 like
 "apt-get install http://example.com/foo/bar-2.7" or "apt-get 
 install
 ./private-package-that-joe-sent-me-via-email".

I agree with you in general, but you do represent this one point as if it were the case in every OS. It is in every Debian derivative I know (Debian, Ubuntu, Mint, etc.), and I don't intend to argue about that, but there are others, mainly Arch Linux, who don't do it that way. E.g. everything in Arch is built via PKGBUILDs: the packages in the main repos as well as the packages in the AUR (which is a place *anyone* can contribute PKGBUILDs to in an orderly fashion). Writing a PKGBUILD from the skeleton file is usually less than 2 minutes' work, and then you in fact can send your friend that package via email: send the PKGBUILD and the source tarball, and your friend then only has to do "makepkg -s" and "sudo pacman -U package-created-by-makepkg". There are no second-class citizens (packages) in Arch Linux.

I don't want to say that it's your job to write a PKGBUILD file, or any OS-specific package stuff, and I do agree with you on your other points - especially since I do use multiple native OSs and several VMs. I'm just saying don't hate all OS package managers just because apt is (imho) a piece of ****.
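The workflow above can be sketched with a minimal, hypothetical PKGBUILD (the package name, version, and file names are placeholders, not taken from the thread):

```shell
# Hypothetical PKGBUILD sketch -- all names and paths are placeholders.
pkgname=my-tool
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
license=('MIT')
source=("my-tool-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$srcdir/my-tool-$pkgver"
  make
}

package() {
  cd "$srcdir/my-tool-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

With this file next to the source tarball, "makepkg -s" builds the package (resolving dependencies), and "sudo pacman -U" installs the resulting file, exactly as described above.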
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 19 Feb 2013 01:38:14 +0100
"Moritz Maxeiner" <moritz ucworks.org> wrote:

 On Tuesday, 19 February 2013 at 00:08:40 UTC, Nick Sabalausky 
 wrote:
 On Mon, 18 Feb 2013 18:16:00 -0500
 Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:
 [...]

 - Anything that isn't part of the official public repo(s) is a
 second-class citizen. Ex: AFAIK, You can't really do anything 
 like
 "apt-get install http://example.com/foo/bar-2.7" or "apt-get 
 install
 ./private-package-that-joe-sent-me-via-email".

I agree with you in general, but you do represent this one point as if it were the case in every OS. It is in every Debian derivative I know (Debian, Ubuntu, Mint, etc.), and I don't intend to argue about that,

Admittedly, most of my Linux experience (and Unix in general) is with Debian-derived stuff. (And a little bit of Mandrake from way back when it was still called Mandrake, but that's not exactly relevant experience anymore ;) )
 but there are others, mainly Archlinux, who don't do 
 it that way.
 E.g. everything in Arch is build via PKGBUILD's. The packages in 
 the main repos and the packages in the AUR (which is a place 
 *anyone* can contribute PKGBUILD's to in an orderly fashion).
 Writing a PKGBUILD from the skeleton file is usually less than 2 
 minutes work and then you in fact, can send your friend that 
 package via email: Send the PKGBUILD and the source tarball, your 
 friend then only has to do "makepkg -s" and "sudo pacman -U 
 package-created-by-makepkg".
 There are no second-class citizens (packages) in Archlinux.

Ahh, that's actually good to hear. I may have to try Arch sometime (there have been other good things said about it here before, too, which grabbed my interest). Although I'll probably wait until the rumblings I've heard about efforts to make it easier to set up start bearing fruit - I've been pretty much scarred for life on any sort of manual configuring of X11. ;)

In any case, though, there still remains the problem that OS-level package managers are more or less OS-specific. Something like 0install sounds great, although I admit that I've been aware of it for years and still have yet to actually try it.
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 19:52:58 -0500
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:
 
 In any case though, there still remains the problem that OS-level
 package managers are more or less OS-specific. Something like 0install
 sounds great, although I admit that I've been aware of it for years
 and still have yet to actually try it.
 

Ie, IMO, the ideal package manager is both OS-agnostic and language-agnostic. Without a good popular one like that, there's good practical reasons for *both* OS-based and language-based package managers to co-exist.
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 18 Feb 2013 19:52:58 -0500
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:

 Something like 0install [...]

Oops, forgot link: http://0install.net/injector.html
Feb 18 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 19 Feb 2013 11:48:41 +1100
Marco Nembrini <marco.nembrini.co gmail.com> wrote:

 On 18.02.2013 08:32, Nick Sabalausky wrote:
 On Sun, 17 Feb 2013 15:40:25 +0100
 "Dicebot" <m.strashun gmail.com> wrote:
 Packaging is best done (and should be) by OS package manager, not
 hundreds of languages-specific managers. Good language package
 manager in my opinion is just an information source for OS
 package builders.

I'm not real big on the idea of OS package managers. Not when Unix is in the picture anyway. I'm getting really fed up with software that has a "download / install" webpage populated with totally different instructions for an endless, yet always incomplete, list of Linux variants. And *maybe* BSD. And then on top of that, the poor *project* maintainers have to maintain all of that distro-specific cruft. Unless they're lucky and the project is big enough that the distro maintainers are willing to waste *their* time converting the package into something that only works on their own distro. I believe I can sum up my thoughts with: "Fuck that shit."

Are you aware of the 0install project (http://zero-install.sourceforge.net/) ? It seems to me that it solves most packaging problems while still being able to collaborate with the OS package manager if needed. From the project page: "Zero Install is a decentralised cross-distribution software installation system. Other features include full support for shared libraries, sharing between users, and integration with native platform package managers. It supports both binary and source packages, and works on Linux, Mac OS X, Unix and Windows systems. It is fully Open Source."

Heh, coincidentally, I just mentioned that in a reply to Moritz *just* before reading your post here ;) In summary, yea, I heard about it years ago. It *does* sound exactly like what I want to see, and I've been wanting to see it widely succeed... And yet I still haven't gotten around to actually trying it :P
Feb 18 2013
"jerro" <a a.com> writes:
 - Anything that isn't part of the official public repo(s) is a
 second-class citizen. Ex: AFAIK, You can't really do anything 
 like
 "apt-get install http://example.com/foo/bar-2.7" or "apt-get 
 install
 ./private-package-that-joe-sent-me-via-email".

You can do "dpkg -i ./private-package-that-joe-sent-me-via-email".
Feb 18 2013
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 19 Feb 2013 02:23:07 +0100
"jerro" <a a.com> wrote:

 - Anything that isn't part of the official public repo(s) is a
 second-class citizen. Ex: AFAIK, You can't really do anything 
 like
 "apt-get install http://example.com/foo/bar-2.7" or "apt-get 
 install
 ./private-package-that-joe-sent-me-via-email".

You can do "dpkg -i ./private-package-that-joe-sent-me-via-email".

Yea, but if apt-get can't be used for non-repo packages, and packages from different sources have to be installed in a completely different way, then apt-get is [partial] FAIL. I get the whole "do one thing, blah blah blah" stuff, but if a Debian user can't have *ONE* tool that just *installs a freaking package* with proper dependency handling *regardless* of source, then something's gone wrong - "Install a package with dependencies" would be doing one thing well. "Install a package with dependencies, but only if the package happens to be from a certain source, and if it isn't then force the user to use some other tool" is doing half a thing poorly.
Feb 18 2013
"Dicebot" <m.strashun gmail.com> writes:
On Monday, 18 February 2013 at 23:16:11 UTC, Nick Sabalausky 
wrote:
 http://cran.r-project.org/bin/linux/ubuntu/README.html

 Look at all that idiotic bullshit the users have to deal with 
 just for something that *could* have been a trivial 
 download/extract/run

It is simple, rational and I'll take it any day over the download/extract/run idiom. Actually, I stopped installing anything without making a package a long time ago. Other than on Windows, of course, but, oh well, it is Windows. You sound biased.
 Secondly, where do you get that crazy idea that all end-users 
 only
 ever have one OS to deal with? ATM, I've got a couple windows 
 machines,
 a kubuntu desktop (old), and a debian 6 server. And that's not 
 counting
 VMs.

It is possible, but if you have a single language to deal with and a lot of OSes, your case is probably a minority; a few OSes with a lot of languages is the more relevant situation. I use 4 OSes in my daily workflow too and I honestly can't imagine how you can use one without learning its package manager in detail anyway. Sorry, but it sounds completely ignorant.
 Other people have even more than that, and it doesn't help 
 anyone to
 have a totally different set of instructions for doing the same
 damn thing each one. *I* can install any version of DMD I want 
 on any
 of my systems by doing this:

 dvm install 2.0xx

And it is one more command to know, as you're supposed to know your package manager _anyway_. It is the first damn thing to learn about your distro.
 And finally, there's two types of users here, lib users and app 
 users:

I am speaking about dependencies here. They naturally leak from the build system into the distribution package. And if you think that large HDDs are a reason to package the boost libs hundreds of times over, then I have that very same fucked up logic to thank for having a Core i7 sometimes behave as slow as a 10 year old Celeron on trivial applications. P.S. gems are Ruby, not Python
Feb 18 2013
"Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 19 February 2013 at 00:08:40 UTC, Nick Sabalausky 
wrote:
 You can't really do anything like
 "apt-get install http://example.com/foo/bar-2.7" or "apt-get 
 install
 ./private-package-that-joe-sent-me-via-email".

You can. Debian is weird though because it is done via dpkg, not apt-get. In Arch it is as simple as "pacman -U package-file" vs "pacman -S name-in-repo".
Feb 18 2013
"Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 19 February 2013 at 08:34:13 UTC, Jacob Carlborg 
wrote:
 On 2013-02-19 08:19, Dicebot wrote:

 You can. Debian is weird though because it is done via dpkg, 
 not apt-get.
 In Arch it is as simple to "pacman -U package-file" vs "pacman 
 -S
 name-in-repo".

Isn't apt-get built on top of dpkg or something like that?

It is. AFAIK apt-get is dpkg + getting packages via repo sources. But the last time I searched, I found no way to proxy a dpkg call to install a local package via apt-get, which felt a bit weird. Probably my failure at searching though.
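For what it's worth, the usual Debian-era two-step for exactly this case is the following sketch (the filename is the hypothetical one from the earlier quote, with a .deb extension added for clarity):

```shell
# Install the local .deb directly with dpkg; dpkg itself does not
# fetch dependencies, so this can leave the package half-configured.
sudo dpkg -i ./private-package-that-joe-sent-me-via-email.deb

# Then ask apt to fetch and configure whatever dpkg left unresolved.
sudo apt-get -f install
```

So the dependency handling is available, just split awkwardly across two tools rather than proxied through one apt-get call.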
Feb 19 2013
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 19 Feb 2013 08:15:16 +0100
"Dicebot" <m.strashun gmail.com> wrote:

 On Monday, 18 February 2013 at 23:16:11 UTC, Nick Sabalausky 
 wrote:
 
 You sound biased.


So do you. There, that was constructive ;)
 It is possible, but if you have a single language to deal with 
 and a lot of OSes, your cases is probably a minority and few OSes 
 with lot of languages are more relevant. I use 4 OSes in daily 
 workflow too and I honestly can't imagine how can you use one 
 without learning package manager in details anyway. Sorry, but is 
 sounds completely ignorant.
 
 Other people have even more than that, and it doesn't help 
 anyone to
 have a totally different set of instructions for doing the same
 damn thing each one. *I* can install any version of DMD I want 
 on any
 of my systems by doing this:

 dvm install 2.0xx

And it is one more command to know, as you're supposed to know your package manager _anyway_. It is the first damn thing to learn about your distro.

Don't twist my words around. I never said anything about not learning the OS package manager. The issue is, if I'm going to do the same thing on multiple systems, there's no reason it can't be doable the same way, and there's no benefit to having it be completely different. So yea, I could install DMD, for example, a totally different way on different systems, but why should I when I can just do "dvm install xxxxx" on *all* the systems? And to top it off, imagine trying to do that as part of a bigger script. I could write totally different scripts for each system just to do the same stupid thing, or one big script with tons of platform-specific branches, or instead, I could use language-oriented stuff like DVM/Dub/etc, avoid OS-specific stuff, and use the same single script everywhere. Yea, how horrible. Why do you prefer making extra work for yourself? Some puritanical ideal of "If I'm on X OS I *have* to use the stuff that only works there"? And don't tell me it's because you don't want to have to learn a few extra trivial commands, because you're doing *plenty* of complaining here about how completely ridiculous you think it is to avoid learning a few more easy commands. (Nevermind that you're also the only one who's actually objected to having to learn commands, in the same post no less.)
 And if you think that large 
 HDDs is a reason to package boost libs for hundreds of times, 
 than I need to thank very same fucked up logic for having Core i7 
 sometimes behave as slow as 10 year old Celeron on trivial 
 applications.

If you're just going to resort to obvious hyperbole, there's no point in dealing with you.
 
 P.S. gems are Ruby, not Python

Whatever Python's package manager is called. It's been a while since I touched it, or Python or Ruby.
Feb 19 2013
"Moritz Maxeiner" <moritz ucworks.org> writes:
On Tuesday, 19 February 2013 at 00:53:07 UTC, Nick Sabalausky 
wrote:
 Admittedly, most of my linux experience (an unix in general) is
 Debian-derived stuff. (And a little bit of Mandrake from way 
 back when
 it was still called Mandrake, but that's not exactly relevant
 experience anymore ;) )

I was hooked on Ubuntu myself, until they began getting all "MUST_CLONE_MACOSX", "MUST_TAKE_CONTROL_AWAY_FROM_USER" on everyone's ass (around versions 8/9, I think). Tried a lot of different distros, and eventually landed on Arch. I think it's just the right mixture of convenience and customizability.
 Although I'll probably wait until the
 rumblings I've heard about efforts to make it easier to set up 
 start
 bearing fruit - I've been pretty much scarred for life on any 
 sort of
 manual configuring of X11. ;)

I'll treat that as two separate points :)

(1) Setup Arch from install medium to first login: That is unpleasant work, sadly. There once was something called AIF (Arch Installation Framework), an ncurses-graphical installer; it was good, but old and iirc barely maintained. Eventually the devs apparently decided to drop it and only ship a couple of scripts that were easier to maintain, and as far as I know they have made no public plans to do more than provide these scripts. Point being, don't expect this part to get easier any time soon, it probably won't, so I'd suggest not tying trying Arch out to that problem. On the other hand, the Arch wiki (wiki.archlinux.org) has an excellent Beginner's Guide, and said scripts are fairly easy to use and remember, so after the second time you can usually do an Arch installation faster than the auto-installer of other distros (only possible because the Arch base system is so very small, of course).

(2) X11 setup: Why would you want to configure X11 manually? "sudo pacman -S xorg-server xorg-xinit xf86-input-evdev xorg-video-(ati/intel/nouveau)", then install your desktop environment, e.g. "sudo pacman -S enlightenment17", copy the skeleton xinitrc file ("cp /etc/skel/.xinitrc ~/") and change the exec line to your desktop environment, e.g. "exec enlightenment_start". Done. Now "startx" will give you your fully functional desktop environment, no need for any xorg.confs; X11 configures itself automatically. Usually the only reason for an xorg.conf is when using the proprietary nvidia/ati drivers, but the Arch wiki has lengthy (well-written) articles regarding those.
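The X11 steps above can be sketched as a shell session (the video driver and desktop environment are placeholders for whatever your hardware and taste call for; enlightenment17 is just the example from the post):

```shell
# Install the X server, xinit, and input/video drivers
# (pick the xf86-video-* package matching your GPU: ati, intel, or nouveau).
sudo pacman -S xorg-server xorg-xinit xf86-input-evdev xf86-video-intel

# Install a desktop environment, e.g. Enlightenment.
sudo pacman -S enlightenment17

# Copy the skeleton xinitrc, then set its exec line to the DE's launcher.
cp /etc/skel/.xinitrc ~/
echo 'exec enlightenment_start' >> ~/.xinitrc

# Start X; no xorg.conf needed, the server autoconfigures itself.
startx
```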
 In any case though, there still remains the problem that 
 OS-level
 package managers are more or less OS-specific. Something like 
 0install
 sounds great, although I admit that I've been aware of it for 
 years
 and still have yet to actually try it.

I'm not familiar with 0install myself and the truth is I probably never will look at it - unless it can integrate with pacman, that is - I've simply grown too dependent on the convenience of pacman to try anything else :) Anyway, I didn't want to pour more oil on the fire of the OS-specific-language-independent vs. language-specific-OS-independent package manager debate (because frankly, I can't contribute much in that area; all I want is a package manager that simply works, be it OS or language specific, I really don't care as long as it just gets the job done right - one of the reasons I'm happy with pacman, btw.), I just wanted to point out that not all OS package managers are evil. Sorry for dragging you slightly off-topic for so long^^
Feb 19 2013
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 20 Feb 2013 03:30:15 +0100
"Moritz Maxeiner" <moritz ucworks.org> wrote:

 On Tuesday, 19 February 2013 at 00:53:07 UTC, Nick Sabalausky 
 wrote:
 Admittedly, most of my linux experience (an unix in general) is
 Debian-derived stuff. (And a little bit of Mandrake from way 
 back when
 it was still called Mandrake, but that's not exactly relevant
 experience anymore ;) )

I was hooked on Ubuntu myself, until they began getting all "MUST_CLONE_MACOSX", "MUST_TAKE_CONTROL_AWAY_FROM_USER" on everyone's ass (around the versions 8/9, I think).

Yea, same thing here. And I found the help from their ticket support people to be...irrational. Incidentally, the "MUST_CLONE_MACOSX", "MUST_TAKE_CONTROL_AWAY_FROM_USER" just happen to also be the exact same reasons I'm fed up with all forms of Windows post-XP. I'll never understand why so many people have been so obsessed with cloning an OS that's never even managed to reach double-digit market share. It's like trying to clone the Ford Edsel: Why? Even if some people like it, they'll just use the real thing anyway. With Linux, when I outgrew Ubuntu I went upstream to Debian. Seemed the most sensible choice given their close relationship and my Ubuntu familiarity. I've had my eye on Mint, but, I dunno, it seems a little too "downstream". And like I said, I'm starting to keep an eye on Arch now too.
 
 I'll treat that as two seperate points :)
 (1) Setup Arch from install medium to first login:[...]
 
 (2) X11 setup: Why would you want to configure X11 manually? 
 "sudo pacman -S xorg-server xorg-xinit xf86-input-evdev 
 xorg-video-(ati/intel/nouveau)", then install your desktop 
 environment, e.g. "sudo pacman -S enlightenment17", copy the 
 skeleton xinitrc file "cp /etc/skel/.xinitrc ~/" and change the 
 exec line to your desktop environment, e.g. "exec 
 enlightenment_start". Done. Now "startx" will give you your fully 
 functional desktop environment, no need for any xorg.confs, X11 
 configures itself automatically. Usually the only reason for an 
 xorg.conf is when using the proprietary nvidia/ati drivers, but 
 the Arch wiki has lenghtly (well-written) articles regarding 
 those.

Ahh, thanks for all the info :) As for the X11 stuff, that's still more manual than I'd like when it comes to X11. (Like I said, I've had *BIG* problems dealing directly with X11 in the past.) But I may give it a try. I'm sure it's improved since the nightmares I had with it back around 2001/2002, but I still worry *how* much improved... Heck, I've even had X11 problems as recently as Ubuntu 10.
 
 I'm not familiar with 0install myself and the truth is I probably 
 never will look at it - unless it can integrate with pacman, that 
 is - I've simply grown to dependent on the convenience of pacman 
 to try anything else :)
 Anyway, I didn't want to put more oil in the fire of the 
 OS-specific-language-independent-package-manager vs. 
 language-specific-OS-independent-package manager debate (because 
 frankly, I can't contribute much in that area, all I want is a 
 package manager that simply works, be it OS or language specific, 
 I really don't care as long as it just gets the job done right - 
 one of the reasons I'm happy with pacman btw.), I just wanted to 
 point out that not all OS-package-managers are evil. Sorry for 
 dragging you slightly off-topic for so long^^

No prob :) But I don't think OS-package-managers are evil (like I've said, I like "apt-get install" *when it works*). It's just that I think it's patently absurd when people claim that OS-package-managers are the *only* good way to go and that there's no good legitimate purpose for language-based OS-independent stuff. As long as they're OS-dependent there will always be legitimate reasons for alternatives.
Feb 19 2013
"deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 20 February 2013 at 03:52:12 UTC, Nick Sabalausky 
wrote:
 No prob :) But I don't think OS-package-managers are evil (like 
 I've
 said, I like "apt-get install" *when it works*). It's just that 
 I
 think it's patently absurd when people claim that 
 OS-package-managers
 are the *only* good way to go and that there's no good 
 legitimate
 purpose for language-based OS-independent stuff. As long as 
 they're
 OS-dependent there will always be legitimate reasons for
 alternatives.

Aptitude isn't really OS dependent. Debian alone provides Linux and BSD flavors, and it is used in many other distros. The same goes for many other package managers. The multiplication of package managers has always seemed to me like a huge waste of resources.
Feb 19 2013
"Moritz Maxeiner" <moritz ucworks.org> writes:
On Wednesday, 20 February 2013 at 03:52:12 UTC, Nick Sabalausky 
wrote:
 Incidentally, the "MUST_CLONE_MACOSX",
 "MUST_TAKE_CONTROL_AWAY_FROM_USER" just happen to also be the 
 exact same
 reasons I'm fed up with all forms of Windows post-XP. I'll never
 understand why so many people have been so obsessed with 
 cloning an OS
 that's never even managed to reach double-digit market share. 
 It's like
 trying to clone the Ford Edsel: Why? Even if some people like 
 it,
 they'll just use the real thing anyway.

Since we're getting further OT I'll just mark this [OT] With MS I see it as a marketing attempt to keep as many users with Windows as possible, because Apple had been getting many users with their "we're different" approach. Combine that with the fact that the normal PC/laptop market has been in slow decline ever since the rise of the tablet hype (and there doesn't seem to be an end in sight for that), and it seems to me that a lot of "common" people these days use their computers for two things: Youtube and Facebook (and derivatives thereof), maaaybe news sites as well. And since Apple were the ones who successfully pushed for feasible commercial tablets (not the first, but the ones who started the hype), their OS more or less became "the design to be, or to be close to, in mobile computing", hence everyone with a lot of money invested in OS design tries to copy them. At least that is how I see the developments of recent years^^ [/OT]
 With Linux, when I outgrew Ubuntu I went upstream to Debian. 
 Seemed the
 most sensible choice given their close relationship and my 
 Ubuntu
 familiarity. I've had my eye on Mint, but, I dunno, it seems a 
 little
 too "downstream". And like I said, I'm starting to keep an eye 
 on Arch
 now too.

Another potential Archlinux user GET *evil laugh*.
 Ahh, thanks for all the info :)

 As for the X11 stuff, that's still more manual than I'd like 
 when it
 comes to X11. (Like I said, I've had *BIG* problems dealing 
 directly
 with X11 in the past.) But I may give it a try. I'm sure it's 
 improved
 since the nightmares I had with it back around 2001/2002, but I
 still worry *how* much improved... Heck, I've even had X11 
 problems as
 recently as Ubuntu 10.

Ah, okay, that's strange but I can understand that. The only problem I ever had with X was that I had to add an InputClass to the evdev file because evdev otherwise kept refusing to enable USB mice.
 No prob :) But I don't think OS-package-managers are evil (like 
 I've
 said, I like "apt-get install" *when it works*). It's just that 
 I
 think it's patently absurd when people claim that 
 OS-package-managers
 are the *only* good way to go and that there's no good 
 legitimate
 purpose for language-based OS-independent stuff. As long as 
 they're
 OS-dependent there will always be legitimate reasons for
 alternatives.

Ah, your previous posts sounded a bit like that, but I just read too much into them, then, I guess. I just hope either one of dub or orbit gets successfully adopted as the standard D package manager, or that they're going to be compatible with each other in some way. I'd hate to see something (even remotely) similar to the initial phobos/tango breakup happening again (I was quite surprised that the language as a whole was able to survive that).
Feb 20 2013
"Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 19 February 2013 at 09:55:30 UTC, Nick Sabalausky 
wrote:
 So do you.

 There, that was constructive ;)

Well, at least I have tried both approaches, both as a user and as a maintainer. I really can't understand how you can state that OS package managers do not work if you have not even tried packaging.
 Don't twist my words around. I never said anything about not 
 learning
 the OS package manager.

 The issue is, if I'm going to do the same thing on multiple 
 systems,
 there's no reason it can't be doable the same way, and there's 
 no
 benefit to having it be completely different.

Why is it an issue? Obsession with doing things the "same way" is as harmful, in my opinion, as obsession with being cross-platform. You always want to take care of OS specifics, so why hide them anyway? There is no benefit in using the same command everywhere.
 So yea, I could install DMD, for example, a totally different 
 way on
 different systems, but why should I when I can just do "dvm 
 install
 xxxxx" on *all* the systems?

Because you can then be somewhat certain that dependencies are taken care of right, that file locations do not interfere with your filesystem layout, that no garbage will be left upon uninstall, etc. Because it is a waste of resources to implement a new mature package manager for each new language when one already exists for the target platform.
 And to top it off, imagine trying to do that as part of a bigger
 script.

Build scripts that install stuff instead of you are evil. This bigger script should provide a dependency list that you or your package manager can take care of. Irrational coupling of functionality is evil, too.
 Why do you prefer making extra work for yourself? Some 
 puritanical
 ideal of "If I'm on x OS I *have* to use the stuff that only
 works there"? And don't tell me it's because you don't want to 
 have
 to learn a few extra trivial commands, because you're doing 
 *plenty* of
 complaining here about how completely ridiculous you think it 
 is to
 avoid learning a few more easy commands. (Nevermind that you're 
 also the
 only one who's actually objected to having to learn commands, 
 in the
 same post nonetheless.)

You rarely learn something just because you can (unless you have a lot of spare time). There should be some benefit, some reason. Especially when this new stuff does something that is already perfectly done by existing and known stuff. Especially when this new stuff attempts to hide from you something that you really need to take care of.
 If you're just going to resort to obvious hyperbole, there's no 
 point
 in dealing with you.

It is an analogy, not hyperbole, and I am quite serious about it. Personal insults do not help.
Feb 20 2013
Sönke Ludwig <sludwig outerproduct.org> writes:
Since the discussion about yes or no regarding OS specific package
managers still goes on, IMO there is one argument that is far more
important than all technical or aesthetic aspects.

A language specific, but cross-platform, package manager makes
publishing and using published libraries a lot simpler for /developers/.
And since D wants to grow, it's extremely important to provide new
developers with the most comfortable and efficient development
experience, so that they also stay and get productive after the first look.

I think that package managers in Ruby, Python, JavaScript/node.js were
crucial in their growth. Without them, they probably wouldn't have that
rich ecosystem of libraries and tools that is available today and is one
of the key reasons why so many people choose those languages.

Implementing an export function to turn a D package into a variety of
platform specific package formats is a possible option that could close
the gap and make installing applications also comfortable for the end user.
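To illustrate how little a developer has to publish, the announcement's own minimal package description is just a JSON file in the project root; a sketch (names and version are the placeholders from the announcement, not a real package):

```json
{
    "name": "my-library",
    "dependencies": { "mysql-native": ">=0.0.7" }
}
```

An export step as described above would start from exactly this metadata and emit, say, a .deb or PKGBUILD, so end users never need to touch the language-level tool.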
Feb 20 2013
"John Colvin" <john.loughran.colvin gmail.com> writes:
On Wednesday, 20 February 2013 at 11:12:30 UTC, Sönke Ludwig 
wrote:
 Since the discussion about yes or no regarding OS specific 
 package
 managers still goes on, IMO there is one argument that is far 
 more
 important than all technical or aesthetic aspects.

 A language specific, but cross-platform, package manager makes
 publishing and using published libraries a lot simpler for 
 /developers/.
 And since D wants to grow, it's extremely important to provide 
 new
 developers with the most comfortable and efficient development
 experience, so that they also stay and get productive after the 
 first looks.

 I think that package managers in Ruby, Python, 
 JavaScript/node.js were
 crucial in their growth. Without them, they probably wouldn't 
 have that
 rich ecosystem of libraries and tools that is available today 
 and is one
 of the key reasons why so many people choose those languages.

 Implementing an export function to turn a D package into a 
 variety of
 platform specific package formats is a possible option that 
 could close
 the gap and make installing applications also comfortable for 
 the end user.

I agree. In the end, you need developers before you can have end-users! Also, developers often want to micro-manage the experience the end-user gets, including installers etc... Look at Python. Python has good package management, but it only gets used by developers. No end-user reaches for pip/easy_install to get the dependencies for Blender and no-one will; it all gets taken care of by OS-level package managers or is bundled with the installer. The end user of a piece of software should never have to know what language it is written in or have to get involved in that language's own ecosystem. End-users need language-agnostic and OS-specific; developers often benefit most from the opposite.
Feb 20 2013
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 20 Feb 2013 12:47:28 +0100
FG <home fgda.pl> wrote:

 On 2013-02-20 11:32, Moritz Maxeiner wrote:
 As for the X11 stuff, that's still more manual than I'd like when
 it comes to X11. (Like I said, I've had *BIG* problems dealing
 directly with X11 in the past.) But I may give it a try. I'm sure
 it's improved since the nightmares I had with it back around
 2001/2002, but I still worry *how* much improved... Heck, I've
 even had X11 problems as recently as Ubuntu 10.

Ah, okay, that's strange but I can understand that. The only problems I ever had with X was that I had to add an InputClass to the evdev file because evdev otherwise kept refusing to enable USB mice(s).

Diving deeper into the OT... Not strange at all. I had similar experiences around 2001 when I bought a new imitation-of-ATI GFX card -- first there were no drivers for it and, when they finally showed up (proprietary and others), after weeks of configuring Xorg I still couldn't make 3D acceleration work and ended up without it for the next few years. And Xorg isn't the only problem. Even today I can't fire up the newest Ubuntu install CDs without the screen going blank. That's how bad things are with X and even the framebuffer console. So I am not surprised hearing about problems in this domain.

Back with ~2001 Mandrake (and also the RedHat from the same era), I would have a fresh OS install, everything would work fine at first, but then after a few days X11 would inexplicably just...not start. At all, not even manually. And for no apparent reason - I hadn't touched or messed with anything even related. The only thing I was able to figure out that actually worked, even with plenty of Googling, was yet another fresh reinstall. And then a few days later it would just happen again, totally out of the blue. Between that and various other Linux issues at the time (for example, nothing comparable to today's apt-get existed), I ended up giving up on Linux entirely for the next ~7 years. Most things are a lot better now, though. I was genuinely surprised/impressed at some of the improvements when I tried it again around Ubuntu ~9.
Feb 21 2013
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 20 Feb 2013 11:32:34 +0100
"Moritz Maxeiner" <moritz ucworks.org> wrote:

 On Wednesday, 20 February 2013 at 03:52:12 UTC, Nick Sabalausky 
 wrote:
 Incidentally, the "MUST_CLONE_MACOSX",
 "MUST_TAKE_CONTROL_AWAY_FROM_USER" just happen to also be the 
 exact same
 reasons I'm fed up with all forms of Windows post-XP. I'll never
 understand why so many people have been so obsessed with 
 cloning an OS
 that's never even managed to reach double-digit market share. 
 It's like
 trying to clone the Ford Edsel: Why? Even if some people like 
 it,
 they'll just use the real thing anyway.

Since we're getting further OT I'll just mark this [OT] With MS I see it as a marketing attempt to keep as many users with Windows as possible, because Apple had been getting many users with their "we're different" approach. Combine that with the fact that the normal PC/laptop market has been in slow decline ever since the rise of the tablet hype (and there doesn't seem to be an end in sight for that), and it seems to me that a lot of "common" people these days use their computers for two things: Youtube and Facebook (and derivatives thereof), maaaybe news sites as well. And since Apple were the ones who successfully pushed for feasible commercial tablets (not the first, but the ones who started the hype), their OS more or less became "the design to be, or to be close to, in mobile computing", hence everyone with a lot of money invested in OS design tries to copy them. At least that is how I see the developments of recent years^^ [/OT]

Mobile is where all the buzz is, but I'm pretty sure most computer usage is still desktop/laptop. Just because tablets haven't peaked yet doesn't mean they won't. But, of course, that doesn't mean that MS necessarily sees it that way. I don't doubt many of them see Apple's "buzz" and mistake that for overall numbers compared to desktop/laptop (an area where Apple is still doing no better than they ever have - so I don't know what brain defect made MS decide Win7 needed to be an OSX clone). It's a *very* common mistake in the computer world, confusing amount of buzz with amount of actual usage. For example, a few years ago, from the way people talked, you would have thought most internet users were on Second Life. Huge buzz, and yea, enviable raw numbers, but proportionally still *very* much a niche. Very similar thing today with the twitface sites. Most people *don't* use them, but try telling any suit that. They think "buzz == reality".
Feb 21 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 20 Feb 2013 12:53:23 +0100
"John Colvin" <john.loughran.colvin gmail.com> wrote:

 On Wednesday, 20 February 2013 at 11:12:30 UTC, Sönke Ludwig wrote:
 Since the discussion about yes or no regarding OS specific package
 managers still goes on, IMO there is one argument that is far more
 important than all technical or aesthetic aspects.

 A language specific, but cross-platform, package manager makes
 publishing and using published libraries a lot simpler for /developers/.
 And since D wants to grow, it's extremely important to provide new
 developers with the most comfortable and efficient development
 experience, so that they also stay and get productive after the
 first looks.

 I think that package managers in Ruby, Python, JavaScript/node.js were
 crucial in their growth. Without them, they probably wouldn't have that
 rich ecosystem of libraries and tools that is available today and is one
 of the key reasons why so many people choose those languages.

 Implementing an export function to turn a D package into a variety of
 platform specific package formats is a possible option that could close
 the gap and make installing applications also comfortable for the end user.


+1
 I agree. In the end, you need developers before you can have
 end-users!

Developers, developers, developers! (and giant arm-pit stains...)
 Also, developers often want to micro-manage the experience the
 end-user gets,

I really hate that. That's exactly the reason we have so much crapware (Windows is absolutely flooded with it) that completely disregards any and all system settings and conventions that it has enough resources to badly reinvent.
 Look at python. Python has good package management, but it only
 gets used by developers. No end-user reaches for pip/easy_install
 to get the dependencies for Blender and no-one will; it all gets
 taken care of by OS-level package managers or is bundled with the
 installer. The end user of a piece of software should never have
 to know what language it is written in or have to get involved
 in that language's own ecosystem.

In my admittedly limited experience, pip was almost completely broken. It would install a few things ok, but for the majority of libs it would just crap out with a Traceback *during installation*. (I had said before it was gem, what I'd meant was pip.)
Feb 21 2013
prev sibling next sibling parent "Moritz Maxeiner" <moritz ucworks.org> writes:
On Thursday, 21 February 2013 at 11:09:47 UTC, Nick Sabalausky
wrote:
 Mobile is where all the buzz is, but I'm pretty sure most 
 computer
 usage is still desktop/laptop.

[OT] I agree with you there, but in desktop/laptop MS simply doesn't have to compete at present. Their sales there are currently guaranteed by the fact that virtually all assembled products (that aren't Macs) come with Windows preinstalled - with you having to pay for it, whether you want to or not. There are alternatives, of course (like building your desktop from parts), but most people don't use them. Anyway, we'll see how it turns out soon enough; away with the speculations and back to D^^ [/OT]
Feb 21 2013
prev sibling next sibling parent "Graham Fawcett" <fawcett uwindsor.ca> writes:
On Sunday, 17 February 2013 at 07:23:22 UTC, Sönke Ludwig wrote:
 You might want to list all the dependencies needed for dub or
 distribute them in a zip.
 

They are in the .zip now and I listed the dependencies on the download page. Sorry, the distribution stuff is still very much ad-hoc ATM. I'll make some installers once the build process is automated.
 Ah, I didn't spot the download link
 http://registry.vibed.org/download
 
 I guess this could be made more visible by adding a link to the
 download page from the github repository, and maybe putting the
 { * Using DUB * Download * Publishing packages * Helping development }
 section at the top instead of the bottom.

There is now a link on the github page, plus a note for non-Windows systems that libevent/libssl are needed. I also added a short sentence on how to build by hand. The dependencies will also likely change to just libcurl at some point, with a makefile or something to make bootstrapping as simple as possible.

Personally, I think that libcurl-only dependency is an important goal. Dub's third-party dependencies are far too "modern." For example, I have an older Ubuntu instance I use for testing (10.10), where libevent 2.x simply isn't available (can't run your binary, and can't compile your source). For Vibe, these may be acceptable requirements, but not for a general packaging tool. I would hope that a future version of Dub wouldn't have any dependencies on Vibe, either. That's an odd bootstrapping arrangement. Best, Graham
Feb 21 2013
prev sibling next sibling parent "Moritz Maxeiner" <moritz ucworks.org> writes:
On Friday, 22 February 2013 at 11:01:12 UTC, Sönke Ludwig wrote:
 Thanks! I've listed it on the github page:
 https://github.com/rejectedsoftware/dub#arch-linux

 BTW, the build process has been simplified now - dependencies 
 are just
 DMD+libcurl and building works using "./build.sh" instead of 
 using "vibe build".

Thanks for the news, I've updated both packages with the correct dependencies and bumped the release to 0.9.7. Btw., is there some way on GitHub to be notified (only) about new tags of a project (so I can update the AUR release package asap)?
Feb 22 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 17 February 2013 at 17:10:47 UTC, Russel Winder wrote:
 On Sun, 2013-02-17 at 16:08 +0100, Jacob Carlborg wrote:
 […]
 There are no package managers out of the box for Mac OS X or 
 Windows.

The MacPorts, Fink, and Brew folks almost certainly dispute the first of those claims. ;-)

None are out of the box, but they are still very useful!
Feb 22 2013
prev sibling next sibling parent "Graham Fawcett" <fawcett uwindsor.ca> writes:
On Friday, 22 February 2013 at 09:40:29 UTC, Sönke Ludwig wrote:
 Am 22.02.2013 07:56, schrieb Sönke Ludwig:
 I would hope that a future version of Dub wouldn't have any 
 dependencies
 on Vibe, either. That's an odd bootstrapping arrangement.


Done now on master.

Woah! That was fast. I look forward to trying this out! Regards, Graham
Feb 22 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 16 February 2013 at 17:10:33 UTC, Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also 
 announce
 the package manager that we have been working out based on the 
 simple
 VPM system that has always been in vibe.d. I don't really like 
 stepping
 into competition with Jacob here (*), but the approach is 
 different
 enough that I think it should be put on the table.

 Some may already have noticed it as it's mentioned already on 
 the vibe.d
 website and is currently hosted on the same domain as the old 
 VPM registry:

 http://registry.vibed.org/


 DUB has two important development goals:

  - Simplicity:

    Making a DUB package, as well as using one as a dependency 
 should be
    as simple as possible to facilitate broad usage, also and 
 especially
    among language newcomers. Procedural build scripts often 
 scare away
    people, although their added complexity doesn't matter for 
 bigger
    projects. I think they should be left as an option rather 
 than the
    default.

    Turning a library/application into a DUB package can be as 
 simple as
    adding a package.json file with the following content 
 (mysql-native
    is automatically made available during the build in this 
 example):

    {
         "name": "my-library",
         "dependencies": {"mysql-native": ">=0.0.7"}
    }

    If the project is hosted on GitHub, it can be directly 
 registered on
    the registry site and is then available for anyone to use as 
 a
    dependency. Alternatively, it is also possible to use a local
    directory as the source for a particular package (e.g. for 
 closed
    source projects or when working on both the main project and 
 the
    dependency at the same time).

  - Full IDE support:

    Rather than focusing on performing the build by itself or 
 tying a
    package to a particular build tool, DUB translates a general
 build recipe to any supported project format (it can also 
 build
    by itself). Right now VisualD and MonoD are supported as 
 targets and
    rdmd is used for simple command line builds. Especially the 
 IDE
    support is really important to not simply lock out people 
 who prefer
    them.


 Apart from that we have tried to be as flexible as possible 
 regarding
 the way people can organize their projects (although by default 
 it
 assumes source code to be in "source/" and string imports in 
 "views/",
 if those folders exist).

 There are still a number of missing features, but apart from 
 those it is
 fully usable and tested on Windows, Linux, and Mac OS.


 GitHub repository:
 https://github.com/rejectedsoftware/dub
 https://github.com/rejectedsoftware/dub-registry

 Preliminary package format documentation:
 http://registry.vibed.org/package-format


 (*) Originally I looked into using Orbit as the package manager 
 for
 vibe.d packages, but it just wasn't far enough at the time and 
 had some
 traits that I wasn't fully comfortable with.

So I'm sorry if that appears completely stupid, but . . . DUB sounds kind of like dumb. Orbit, on the other hand, sounds very nice, especially since libraries are satellites of Mars, so it makes sense to see other libs as artificial satellites :D That is a very poor criterion and has nothing to do with the actual capabilities of either of them. BTW, it'd be great if we could avoid some kind of phobos/tango split on that subject and settle on one package manager.
Feb 22 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, February 22, 2013 19:29:02 deadalnix wrote:
 So I'm sorry if that appears completely stupid, but . . .
 
 DUB sounds kind of like dumb. As Orbit sounds very nice,
 especially since libraries are satellites of mars, so it make
 sense to see other libs as artificial satellites :D
 
 That is very poor, and have nothing to do with the actual
 capabilities of each of them.

Really? I see no problem with either, and if anything, I like dub better because it's shorter (it also starts with d). I don't feel particularly strongly either way though. - Jonathan M Davis
Feb 22 2013
prev sibling next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Friday, 22 February 2013 at 20:33:15 UTC, Jonathan M Davis 
wrote:
 On Friday, February 22, 2013 19:29:02 deadalnix wrote:
 So I'm sorry if that appears completely stupid, but . . .
 
 DUB sounds kind of like dumb. As Orbit sounds very nice,
 especially since libraries are satellites of mars, so it make
 sense to see other libs as artificial satellites :D
 
 That is very poor, and have nothing to do with the actual
 capabilities of each of them.

Really? I see no problem with either, and if anything, I like dub better because it's shorter (it also starts with d). I don't feel particularly strongly either way though. - Jonathan M Davis

I sense deadalnix attempted yet again to be sarcastic/humorous, and failed at it. Again. :)
Feb 23 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Sat, 23 Feb 2013 11:20:50 +0000
Russel Winder <russel winder.org.uk> wrote:

 On Sat, 2013-02-23 at 10:20 +0100, SomeDude wrote:
 […]
 Well, in the Java world, there is ant. It does the trick, but
 it's quite ugly.

Anyone in the Java world still using Ant is just so last decade ;-)

Anyone still using Java is just so last decade ;)
Feb 23 2013
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, February 16, 2013 18:10:21 Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also announce
 the package manager that we have been working out based on the simple
 VPM system that has always been in vibe.d. I don't really like stepping
 into competition with Jacob here (*), but the approach is different
 enough that I think it should be put on the table.

I just started messing around with dub on one of my projects, and at first glance, I like what I'm seeing. Hopefully, it'll be a good replacement for the ad-hoc build setups that I typically use. However, while the package format documentation seems to be fairly complete, the usage documentation is still sorely lacking: http://registry.vibed.org/usage

As it stands, I don't even have a clue what the various directories that get generated are for, let alone something like how the docs target is supposed to work (I just get errors from it about one of the files it generates not being executable).

There should also probably be clear examples of how to set up an application vs a library. It seems to want to set up an application by default, and I assume that you make it a library by mucking with dflags in the configuration file, but how that jives with having an executable with -unittest, I don't know.

And if dub is supposed to work with build scripts or other build tools as some of your posts here imply, then that definitely needs to be documented, because I don't see anything of the sort.

So, it looks like it has a good start, but without better instructions, it's going to be a bit hard to use it properly it seems.

- Jonathan M Davis
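[For concreteness, the kind of package file being discussed looks like this. The "name" and "dependencies" keys are taken from the announcement's own example; the "dflags" entry is only a guess at what the library tweak Jonathan mentions might look like (-lib is a DMD flag), not confirmed dub syntax:]

    {
        "name": "my-library",
        "dependencies": {"mysql-native": ">=0.0.7"},
        "dflags": ["-lib"]
    }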
Mar 02 2013
parent reply =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 02.03.2013 09:19, schrieb Jonathan M Davis:
 On Saturday, February 16, 2013 18:10:21 Sönke Ludwig wrote:
 With the recent talk about Orbit, I thought it is time to also announce
 the package manager that we have been working out based on the simple
 VPM system that has always been in vibe.d. I don't really like stepping
 into competition with Jacob here (*), but the approach is different
 enough that I think it should be put on the table.

I just started messing around with dub on one of my projects, and at first glance, I like what I'm seeing. Hopefully, it'll be a good replacement for the ad-hoc build setups that I typically use. However, while the package format documentation seems to be fairly complete, the usage documentation is still sorely lacking: http://registry.vibed.org/usage

Agreed, there also needs to be a brief introduction on how dub accomplishes the usual tasks.
 
 As it stands, I don't even have a clue what the various directories that get 
 generated are for, let alone something like how the docs target is supposed to 
 work (I just get errors from it about one of the files it generates not being 
 executable).

The "docs" target was just a quick draft added to have a meaningful list of standard build types and hasn't really been tested. I'll fix it right away.
 
 There should also probably be clear examples of how to set up an application vs a 
 library. It seems to want to set up an application by default, and I assume 
 that you make it a library by mucking with dflags in the configuration file,
but 
 how that jives with having an executable with -unittest, I don't know.

As it stands, there are just two modes of operation:

1. Invoking dub on a project will build it as an application (any "source/app.d" file is assumed to contain the main() function).
2. Any dependent package is assumed to be a library and gets compiled in without its "source/app.d" file.

This is currently tied to the simplified workflow that I use. Although I find this to be a quite nice approach in general and it covers most uses nicely, support to specify explicit library types will be added later.
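[The selection rule described above can be sketched like this - a hypothetical illustration of one reading of the post, not dub's actual source code:]

```python
# Sketch of the two modes described above (hypothetical, not dub's code):
# the root package with a source/app.d builds as an application;
# dependencies always build as libraries, with source/app.d excluded.
def build_mode(is_root_package: bool, has_app_d: bool) -> str:
    if is_root_package and has_app_d:
        return "application"
    return "library"

print(build_mode(True, True))   # the project dub is invoked on
print(build_mode(False, True))  # a dependency: app.d gets excluded
```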
 
 And if dub is supposed to work with build scripts or other build tools as some 
 of your posts here imply, then that definitely needs to be documented, because 
 I don't see anything of the sort.

It's not yet implemented, although trivial.
 
 So, it looks like it has a good start, but without better instructions, it's 
 going to be a bit hard to use it properly it seems.
 

Right now everything has to be extended a bit to handle, in a comfortable way, the different use cases and project structures that have come up so far. I hope this will settle down in one or two weeks, and then I'll write up a proper introduction on that page and make a quick announcement.
Mar 02 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 02, 2013 09:36:37 Jonathan M Davis wrote:
 On Saturday, March 02, 2013 15:07:33 Sönke Ludwig wrote:
 The "docs" target was just a quick draft added to have meaningful list
 of standard built types and hasn't really been tested. I'll fix it right
 away.

Ah. I thought that it was for generating the documentation with ddoc.

And I actually, I would have at least half expected for you to have the fancier documentation generation (ddox?) that you've been working on integrated into dub so that it would be trivial to set up a new project that uses it. Maybe that's what you're already working on with the docs target; I don't know. But that would be a good feature IMHO. - Jonathan M Davis
Mar 02 2013
parent =?UTF-8?B?U8O2bmtlIEx1ZHdpZw==?= <sludwig outerproduct.org> writes:
Am 03.03.2013 08:55, schrieb Jonathan M Davis:
 On Saturday, March 02, 2013 09:36:37 Jonathan M Davis wrote:
 On Saturday, March 02, 2013 15:07:33 Sönke Ludwig wrote:
 The "docs" target was just a quick draft added to have meaningful list
 of standard built types and hasn't really been tested. I'll fix it right
 away.

Ah. I thought that it was for generating the documentation with ddoc.

And I actually, I would have at least half expected for you to have the fancier documentation generation (ddox?) that you've been working on integrated into dub so that it would be trivial to set up a new project that uses it. Maybe that's what you're already working on with the docs target; I don't know. But that would be a good feature IMHO. - Jonathan M Davis

It definitely is planned (I wanted to make it a different target "ddox" for now, though). I simply had no time to implement it yet, but the next version will have it.
Mar 03 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, March 02, 2013 15:07:33 Sönke Ludwig wrote:
 The "docs" target was just a quick draft added to have meaningful list
 of standard built types and hasn't really been tested. I'll fix it right
 away.

Ah. I thought that it was for generating the documentation with ddoc. - Jonathan M Davis
Mar 02 2013
prev sibling parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
Hello,

I've sent pull request with fix of configurations/buildTarget 
support.
https://github.com/rejectedsoftware/dub/pull/65

Could you review it?

Best regards,
     Vadim
Apr 17 2013