
digitalmars.D - Article discussing Go, could well be D

reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
http://www.reddit.com/r/programming/comments/hudvd/
the_go_programming_language_or_why_all_clike/

The author presents a "wish list" for his perfect systems programming 
language, and claims that Go is the only one (somewhat) fulfilling it.  
With the exception of item 7, the list could well be an advertisement for 
D.

-Lars
Jun 07 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> wrote in message 
news:isn5rr$134r$1 digitalmars.com...
 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an advertisement for
 D.
"Some of them enjoy a phase of hype, but then fade again, others stay in the spheres of irrelevancy, while yet others will be ready for public consumption any decade now." ...With "stay in the spheres of irrelevancy" linking to the D homepage. That annoys the hell out of me. D matches his wishlist better than Issue 9 does, and it's dismissed for "irrelevency"? Heck, I could actually buy that if there were any actual basis at all for saying such a thing. But what could even *possibly* be considered reasons for saying so? Is it "irrelevent" just because it doesn't have some corporation behind it? That'd be some real broken reasoning. Is it "irrelevant" because it wasn't created by some guy who did something significant 40 years ago and hasn't done a damn thing of note since? Hell, unless you're pretending it's 197x, Andre's far more noteworthy than that Issue 9 guy. It sure as hell can't be "irrelevent" for lack of use. So what else could it be besides just having his head up his ass? Not that I think his head's up there. From what I read, I'm convinced the real reason is just that he's far too much a fan of ad hominem reasoning.
Jun 08 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Wed, 2011-06-08 at 04:29 -0400, Nick Sabalausky wrote:
[ . . . ]
 I'm convinced the real reason is just that he's far too much a fan of ad
 hominem reasoning.
Assuming the author knew what ad hominem meant!

Of course hatred of Latin could be a good argument for ignoring the term. ;-)

--
Russel.

Dr Russel Winder                 t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road               m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK              w: www.russel.org.uk  skype: russel_winder
Jun 08 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Russel Winder" <russel russel.org.uk> wrote in message 
news:mailman.704.1307525385.14074.digitalmars-d puremagic.com...
On Wed, 2011-06-08 at 04:29 -0400, Nick Sabalausky wrote:
[ . . . ]
 I'm convinced the real reason is just that he's far too much a fan of ad
 hominem reasoning.
Assuming the author knew what ad hominem meant! Of course hatred of Latin could be a good argument for ignoring the term. ;-)
Heh :) I wonder what the Scientific/Latin term for "Fear of Latin" would be?
Jun 08 2011
parent Alix Pexton <alix.DOT.pexton gmail.DOT.com> writes:
On 08/06/2011 11:34, Nick Sabalausky wrote:
 "Russel Winder"<russel russel.org.uk>  wrote in message
 news:mailman.704.1307525385.14074.digitalmars-d puremagic.com...
 On Wed, 2011-06-08 at 04:29 -0400, Nick Sabalausky wrote:
 [ . . . ]
 I'm convinced the real reason is just that he's far too much a fan of ad
 hominem reasoning.
Assuming the author knew what ad hominem meant! Of course hatred of Latin could be a good argument for ignoring the term. ;-)
Heh :) I wonder what the Scientific/Latin term for "Fear of Latin" would be?
I believe most (if not all) phobias are derived from the Greek, which was, oddly enough, the lingua franca of the Roman empire. So I'd guess the answer would be something like "Latinikaphobia", but I'm drawing on Google translate more than any residual knowledge of the Classics. A...
Jun 08 2011
prev sibling parent reply Mafi <mafi example.org> writes:
On 08.06.2011 11:29, Russel Winder wrote:
 Of course hatred of Latin could be a good argument for ignoring the
 term.;-)
But we shouldn't discuss this ad nauseam :-)
Jun 08 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/8/2011 4:57 AM, Mafi wrote:
 But we shouldn't discuss this ad nauseam :-)
Time for some ad libbing!
Jun 08 2011
prev sibling next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 06/08/2011 04:29 AM, Nick Sabalausky wrote:
 From what I read, I'm convinced the real reason is just that he's far
 too much a fan of ad hominem reasoning.
Given all your Pike bashing, you shouldn't be throwing stones.
Jun 08 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jeff Nowakowski" <jeff dilacero.org> wrote in message 
news:isnqeg$2omb$1 digitalmars.com...
 On 06/08/2011 04:29 AM, Nick Sabalausky wrote:
 From what I read, I'm convinced the real reason is just that he's far
 too much a fan of ad hominem reasoning.
Given all your Pike bashing, you shouldn't be throwing stones.
It's not that I have anything against Pike or Thompson. I don't. I just think that as dumb as it is to use ad hominem reasoning in the first place, it's even dumber to invoke it in such an anachronistic way. And to see the kind of "buzz"-driven "reasoning" that I would only expect out of the fashion industry coming instead from so many *programmers*, of all people...well, to reference Family Guy, "that really grinds my gears."
Jun 08 2011
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 06/08/2011 03:55 PM, Nick Sabalausky wrote:
 It's not that I have anything against Pike or Thompson. I don't. I
 just think that as dumb as it is to use ad hominem reasoning in the
 first place, it's even dumber to invoke it in such an anachronistic
 way.
Then don't do it yourself. Your Pike bashing was uncalled for. The author didn't say anything about Pike except to mention him as one of the original developers. If he was gushing over the man or saying Go is worthwhile because of him, then you'd have a point.
Jun 08 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jeff Nowakowski" <jeff dilacero.org> wrote in message 
news:ispo9o$f4e$1 digitalmars.com...
 On 06/08/2011 03:55 PM, Nick Sabalausky wrote:
 It's not that I have anything against Pike or Thompson. I don't. I
 just think that as dumb as it is to use ad hominem reasoning in the
 first place, it's even dumber to invoke it in such an anachronistic
 way.
Then don't do it yourself. Your Pike bashing was uncalled for. The author didn't say anything about Pike except to mention him as one of the original developers. If he was gushing over the man or saying Go is worthwhile because of him, then you'd have a point.
Once again, I wasn't Pike-bashing. You're misinterpreting my words and assuming I did. Billions of people, obviously myself included, have never done *anything* of real significant note whether recently or 800 years ago. And everyone knows that. So how the heck can saying "some guy who did something significant 40 years ago and hasn't done a damn thing of note since" even *possibly* be taken as an insult? So what if he hasn't? Most people don't do a damn thing of note their entire lives. I'll probably never do a damn thing of note my entire life (even as much as I try). Who cares? It's just fact (well, aside from the definition of "noteworthy" being a bit vague).

The only thing my statement *could* rationally be taken as is that I'm just simply *not* praising him. Not praising someone is hardly the same as insulting them (unless you consider them some deity).

Again, my entire point for even bringing it up was that the association with "Google/Pike/Thompson" (plus all the "buzz" around the language, which all boils down to little more than "It's Google/Pike/Thompson!" anyway) seems to be his only real reason for giving Issue 9 a real chance and dismissing D outright. Which is, yes, anachronistic ad hominem reasoning. And considering the actual realities of both D and Issue 9, I can't think of anything else besides that "buzz/fame factor" for why he would reject D as being so much more "irrelevant" than Issue 9.
Jun 09 2011
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 06/09/2011 03:27 AM, Nick Sabalausky wrote:
 Once again, I wasn't Pike-bashing. You're misinterpreting my words
 and assuming I did. Billions of people, obviously myself included,
 have never done *anything* of real significant note whether recently
 or 800 years ago. And everyone knows that. So how the heck can saying
 "some guy who did something significant 40 years ago and hasn't done
 a damn thing of note since" even *possibly* be taken as an insult?
Maybe because he has done things of note since? Who is it for you to judge? And then comparing him against Andrei, to boot. I don't want to get into a debate on his career, or turn this into an Andre vs Pike flamefest, so I won't drag up what people have worked on. My point is you didn't have to go there. Yes, what you did was inflammatory and ad hominem.
Jun 10 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jeff Nowakowski" <jeff dilacero.org> wrote in message 
news:ist7oe$17ge$1 digitalmars.com...
 On 06/09/2011 03:27 AM, Nick Sabalausky wrote:
 Once again, I wasn't Pike-bashing. You're misinterpreting my words
 and assuming I did. Billions of people, obviously myself included,
 have never done *anything* of real significant note whether recently
 or 800 years ago. And everyone knows that. So how the heck can saying
 "some guy who did something significant 40 years ago and hasn't done
 a damn thing of note since" even *possibly* be taken as an insult?
Maybe because he has done things of note since? Who is it for you to judge? And then comparing him against Andrei, to boot. I don't want to get into a debate on his career, or turn this into an Andre vs Pike flamefest, so I won't drag up what people have worked on. My point is you didn't have to go there.
Yea, I didn't *have* to go there. I don't *have* to post anything here at all. So what? That obviously doesn't imply that I can't or shouldn't.
 Yes, what you did was inflammatory and ad hominem.
No, what I'm starting to do *now* is inflammatory. What I did before is make an observational statement which you twisted around into a condemnation. If I made a statement about someone (who apparently seems to be some demigod) and that statement didn't involve any gushing over the guy, then tough shit, them's the breaks.

As for ad hominem, you don't seem to even understand the concept. What makes something an ad hominem fallacy is assigning truth value based on *who* agrees with, disagrees with, or is otherwise associated with it. What *I* did was make a statement *about* a person. No, that is *not* an ad hominem fallacy. And no, just because it wasn't a *good* statement doesn't imply it was a *bad* statement.

And even if it *were* a bad statement, which it clearly wasn't, I don't have to be an elected or appointed judge, or God, or anything like that to be entitled to have that viewpoint and voice it.
Jun 10 2011
parent Jeff Nowakowski <jeff dilacero.org> writes:
On 06/10/2011 05:27 PM, Nick Sabalausky wrote:
 No, what I'm starting to do *now* is inflammatory. What I did before is make
 an observational statement which you twisted around into a condemnation.
Bullshit. It's not an "observational statement" to pass judgment on a man's later career. You make it sound like you're recording the astronomical positions of the stars. Factually, people have noted what both Pike and Thompson have done past the 1970s, so your "observational statement" is just an opinion, and a demeaning one.
 As for ad hominem, you don't seem to even understand the concept. What makes
 something an ad hominem fallacy is assigning truth value based on *who*
 agrees with, disagrees with, or is otherwise associated with it. What *I*
 did was make a statement *about* a person.
That statement added nothing to the argument. Instead, it subtracted from the argument by introducing a contentious point, just like the original author did with his "sphere of irrelevancy" comment. Pike and Thompson are notable figures in the history of C and Unix, and there's nothing wrong with having more interest in a systems language from them because of that. Whether their careers were notable after that is a moot and inflammatory point.
 I don't have to be an elected or appointed judge, or God, or
 anything like that to be entited to have that viewpoint and voice it.
Of course you don't. I never suggested otherwise. However, that doesn't make it beyond reproach. Anyways, this is my last post on the matter.
Jun 10 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jeff Nowakowski" <jeff dilacero.org> wrote in message 
news:ispo9o$f4e$1 digitalmars.com...
 On 06/08/2011 03:55 PM, Nick Sabalausky wrote:
 It's not that I have anything against Pike or Thompson. I don't. I
 just think that as dumb as it is to use ad hominem reasoning in the
 first place, it's even dumber to invoke it in such an anachronistic
 way.
Then don't do it yourself. Your Pike bashing was uncalled for. The author didn't say anything about Pike except to mention him as one of the original developers. If he was gushing over the man or saying Go is worthwhile because of him, then you'd have a point.
And no, the author wasn't *just* mentioning him as one of the original creators:

"I was a bit skeptical when I read about Google's new programming language. I simply ignored the news. After all, the next New Great Language is just around the corner. Some of them enjoy a phase of hype, but then fade again, others stay in the spheres of irrelevancy [Linked to the D homepage], while yet others will be ready for public consumption any decade now.

Some time later I stumbled over it again. This time I took a closer look. One thing I didn't notice at first: One of the inventors is Ken Thompson of Unix and Plan9 fame, and he was indirectly involved with C as well. Now if a new programming language is designed by someone who already served in the trenches of the great mainframe era, maybe there is something to it."

So he flat out *states* that he passed over D and gave Issue 9 a try *because* of what was done decades ago by one of the people involved.
Jun 09 2011
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 06/09/2011 03:36 AM, Nick Sabalausky wrote:
 So he flat out *states* that he passed over D and gave Issue 9 a try
 *because* of what was done decades ago by one of the people involved.
OK, I missed that, because I searched for "Pike" in the article, and he mentioned Thompson. Your post didn't mention anybody explicitly by name, except for "that Issue 9 guy". Considering that Pike has been the face of Go, it was a reasonable assumption.

You still didn't need to pass judgment on what is notable or not in their later careers. It's enough to say that dismissing D as being "irrelevant" without justification is the problem.

Also, there's nothing wrong with taking a look at a C-like language because the inventors were heavily involved with the original C and Unix environments. Much like people are encouraged to look at D because of Walter's past work with a C++ compiler and Andrei's C++ experience. As a way to pique interest, it's valid. However, that should not be a determination of a language's actual merit.
Jun 10 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jeff Nowakowski" <jeff dilacero.org> wrote in message 
news:ist8n0$1952$1 digitalmars.com...
 You still didn't need to pass judgment on what is notable or not in their 
 later careers.
Why the hell can't I? Is there some "thought police" I don't know about? *He* passed judgement on the guy's earlier career. And *you* said "Maybe because he has done things of note since?" so clearly, *you're* passing judgement on his later career.

Oh, I see, passing judgement is only ok when the verdict happens to be "thumbs up"... If you're unimpressed with something then that's "passing judgement", but if you are impressed then that's not a judgement at all. What the hell did I step into, some "New Age/Flower Child"-Bizarro-World where only "positive uplifting" ideas are valid ones? Bunch of hypocritical bullcrap.
 It's enough to say that dismissing D as being "irrelevant" without 
 justification is the problem.

 Also, there's nothing wrong with taking a look at a C-like language 
 because the inventors were heavily involved with the original C and Unix 
 environments. Much like people are encouraged to look at D because of 
 Walter's past work with a C++ compiler and Andrei's C++ experience. As a 
 way to pique interest, it's valid. However, that should not be a 
 determination of a language's actual merit.
Perhaps, but that's not the full extent of the situation here. He labeled D as "stay[ing] in the spheres of irrelevancy", and the *only* conceivable reason for him to have made such an assessment is that D lacks Go's "Google, Pike, and Thompson". I'm not allowed to be annoyed by that and voice my reasons?
Jun 10 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/10/11 4:08 PM, Nick Sabalausky wrote:
 "Jeff Nowakowski"<jeff dilacero.org>  wrote in message
 news:ist8n0$1952$1 digitalmars.com...
 You still didn't need to pass judgment on what is notable or not in their
 later careers.
Why the hell can't I? Is there some "thought police" I don't know about? *He* passed judgement on the guy's earlier career. And *you* said "Maybe because he has done things of note since?" so clearly, *you're* passing judgement on his later career. Oh, I see, passing judgement is only ok when the verdict happens to be "thumbs up"...
Fair point. It corroborates well with the advice I got from a specialist in public speaking - avoid saying "Good question" in preface to your response to a question. His argument was that that's a signal you pass judgment on the question itself (albeit positively), and others may feel uncomfortable that you'd judge their own question poorly.

Anyway, perhaps it's not worth escalating this any further. My opinion ("judgment" :o)) is that there are things one says over a beer to a friend, things that one says to a near-stranger (notorious or not) in a social setting, and things that one shares on the net. I think it's reasonable to ascribe to human nature that the three sets are different without a thought police being necessary.

Andrei
Jun 10 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/8/11 3:29 AM, Nick Sabalausky wrote:
 "Lars T. Kyllingstad"<public kyllingen.NOSPAMnet>  wrote in message
 news:isn5rr$134r$1 digitalmars.com...
 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an advertisement for
 D.
"Some of them enjoy a phase of hype, but then fade again, others stay in the spheres of irrelevancy, while yet others will be ready for public consumption any decade now." ...With "stay in the spheres of irrelevancy" linking to the D homepage. That annoys the hell out of me. D matches his wishlist better than Issue 9 does, and it's dismissed for "irrelevency"? Heck, I could actually buy that if there were any actual basis at all for saying such a thing. But what could even *possibly* be considered reasons for saying so? Is it "irrelevent" just because it doesn't have some corporation behind it? That'd be some real broken reasoning. Is it "irrelevant" because it wasn't created by some guy who did something significant 40 years ago and hasn't done a damn thing of note since? Hell, unless you're pretending it's 197x, Andre's far more noteworthy than that Issue 9 guy. It sure as hell can't be "irrelevent" for lack of use. So what else could it be besides just having his head up his ass? Not that I think his head's up there. From what I read, I'm convinced the real reason is just that he's far too much a fan of ad hominem reasoning.
You're complaining in the wrong place. What you need to do is answer on reddit. Andrei
Jun 08 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:isnvfi$h9$1 digitalmars.com...
 On 6/8/11 3:29 AM, Nick Sabalausky wrote:
 "Lars T. Kyllingstad"<public kyllingen.NOSPAMnet>  wrote in message
 news:isn5rr$134r$1 digitalmars.com...
 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an advertisement 
 for
 D.
"Some of them enjoy a phase of hype, but then fade again, others stay in the spheres of irrelevancy, while yet others will be ready for public consumption any decade now." ...With "stay in the spheres of irrelevancy" linking to the D homepage. That annoys the hell out of me. D matches his wishlist better than Issue 9 does, and it's dismissed for "irrelevency"? Heck, I could actually buy that if there were any actual basis at all for saying such a thing. But what could even *possibly* be considered reasons for saying so? Is it "irrelevent" just because it doesn't have some corporation behind it? That'd be some real broken reasoning. Is it "irrelevant" because it wasn't created by some guy who did something significant 40 years ago and hasn't done a damn thing of note since? Hell, unless you're pretending it's 197x, Andre's far more noteworthy than that Issue 9 guy. It sure as hell can't be "irrelevent" for lack of use. So what else could it be besides just having his head up his ass? Not that I think his head's up there. From what I read, I'm convinced the real reason is just that he's far too much a fan of ad hominem reasoning.
You're complaining in the wrong place. What you need to do is answer on reddit.
What, and publicly make D users look like jackasses that take things too personally? :)
Jun 08 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:isojjt$1a8c$1 digitalmars.com...
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:isnvfi$h9$1 digitalmars.com...
 On 6/8/11 3:29 AM, Nick Sabalausky wrote:
 "Lars T. Kyllingstad"<public kyllingen.NOSPAMnet>  wrote in message
 news:isn5rr$134r$1 digitalmars.com...
 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an advertisement 
 for
 D.
"Some of them enjoy a phase of hype, but then fade again, others stay in the spheres of irrelevancy, while yet others will be ready for public consumption any decade now." ...With "stay in the spheres of irrelevancy" linking to the D homepage. That annoys the hell out of me. D matches his wishlist better than Issue 9 does, and it's dismissed for "irrelevency"? Heck, I could actually buy that if there were any actual basis at all for saying such a thing. But what could even *possibly* be considered reasons for saying so? Is it "irrelevent" just because it doesn't have some corporation behind it? That'd be some real broken reasoning. Is it "irrelevant" because it wasn't created by some guy who did something significant 40 years ago and hasn't done a damn thing of note since? Hell, unless you're pretending it's 197x, Andre's far more noteworthy than that Issue 9 guy. It sure as hell can't be "irrelevent" for lack of use. So what else could it be besides just having his head up his ass? Not that I think his head's up there. From what I read, I'm convinced the real reason is just that he's far too much a fan of ad hominem reasoning.
You're complaining in the wrong place. What you need to do is answer on reddit.
What, and publicly make D users look like jackasses that take things too personally? :)
More seriously though, I've come close to commenting on reddit in the past. But then the signup form didn't work without JS, so then I turned JS on, and then it rejected mailinator, so instead of jumping through the hoop of setting up yet another throwaway address on my mail server (which I'm not actually opposed to doing), I always ended up deciding, "Meh, I don't actually care *that* much about posting my stupid little bullshit ramblings, I've got better things to do..."

But yea, maybe I will go ahead and just do it... (and then rephrase my comment about the article to be less contentious ;) )

Hell it's not like the rest of the article was all that bad (although I disagree with the "the more orthogonality, the better"). I did love this bit:

"The [Java] web services offer some really nice abstractions. Up until you look under the hood and discover a Rube Goldberg machine. Each layer builds upon last year's favourite abstraction layer."

'Course, he's only talking about Java, and only part of Java, but that would be a brilliant summary of the whole damn Web.
Jun 08 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Jun 08 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:isopmf$1lov$1 digitalmars.com...
 On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Hmm, well, I'm trying, but this time I can't even get the "register" link to work at all. I turn on JS, reload, it sits there "loading" for about three minutes (as opposed to a few seconds without JS), and after all that, clicking the "register" link still doesn't do anything at all.

Seriously, how completely incompetent do they have to be to screw up something as incredibly basic as a link? Bunch of morons over there.

Meh, maybe when someone makes a reddit alternative that actually fucking works I'll use it...
Jun 08 2011
parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 08/06/2011 22:46, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:isopmf$1lov$1 digitalmars.com...
 On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Hmm, well, I'm trying, but this time I can't even get the "register" link to work at all. I turn on JS, reload, it sits there "loading" for about three minutes (as opposed to a few seconds without JS), and after all that, clicking the "register" link still doesn't do anything at all. Seriously, how completely incompetent do they have to be to screw up something as incredibly basic as a link? Bunch of morons over there. Meh, maybe when someone makes a reddit alternative that actually fucking works I'll use it...
To be fair, they probably don't test in Firefox 2 any more, given that there have been numerous releases since and it's now unsupported.

--
Robert
http://octarineparrot.com/
Jun 08 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Robert Clipsham" <robert octarineparrot.com> wrote in message 
news:isovbb$20ea$1 digitalmars.com...
 On 08/06/2011 22:46, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:isopmf$1lov$1 digitalmars.com...
 On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Hmm, well, I'm trying, but this time I can't even get the "register" link to work at all. I turn on JS, reload, it sits there "loading" for about three minutes (as opposed to a few seconds without JS), and after all that, clicking the "register" link still doesn't do anything at all. Seriously, how completely incompetent do they have to be to screw up something as incredibly basic as a link? Bunch of morons over there. Meh, maybe when someone makes a reddit alternative that actually fucking works I'll use it...
To be fair, they probably don't test in Firefox 2 any more, given that there have been numerous releases since and it's now unsupported.
1. It's a link for fuck's sake. It doesn't take IE9/Opera11/FF4 for a trivial damn <a href="...">...</a> to work properly. The fucking things have worked *exactly the same* since Mosaic (save for the "frame" extensions that nobody should ever use anyway). There *is no* cross-browser testing needed to get it right. There's barely any *thinking* needed to get it right. I could teach my sister to get it right.

2. If Mozilla ever decides to put out a successor to FF2 that isn't shit, and actually *does* follow the "customizability" that they constantly *pretend* to have, then I'll happily upgrade. Besides, every time I come across some self-important asshole of a site that feels it's their duty to try to tell me what fucking browser I should be using, it makes me want to stick with FF2 that much more. They're so anxious to cram half-assed under-engineered so-called-"technologies" down my throat? Well then fuck them. I'd go back to FF1 if I thought doing so would give them a hard time. (At least it looks nicer out-of-the-box without having to install winestripe.)

It's not my fault everyone insists on making their software worse with each release.
Jun 08 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 09.06.2011 07:43, Nick Sabalausky wrote:
 "Robert Clipsham" <robert octarineparrot.com> wrote in message 
 news:isovbb$20ea$1 digitalmars.com...
 On 08/06/2011 22:46, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:isopmf$1lov$1 digitalmars.com...
 On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Hmm, well, I'm trying, but this time I can't even get the "register" link to work at all. I turn on JS, reload, it sits there "loading" for about three minutes (as opposed to a few seconds without JS), and after all that, clicking the "register" link still doesn't do anything at all. Seriously, how completely incompetent do they have to be to screw up something as incredibly basic as a link? Bunch of morons over there. Meh, maybe when someone makes a reddit alternative that actually fucking works I'll use it...
To be fair, they probably don't test in Firefox 2 any more, given that there have been numerous releases since and it's now unsupported.
1. It's a link for fuck's sake. It doesn't take IE9/Opera11/FF4 for a trivial damn <a href="...">...</a> to work properly. The fucking things have worked *exactly the same* since Mosaic (save for the "frame" extensions that nobody should ever use anyway). There *is no* cross-browser testing needed to get it right. There's barely any *thinking* needed to get it right. I could teach my sister to get it right. 2. If Mozilla ever decides to put out a successor to FF2 that isn't shit, and actually *does* follow the "customizability" that they constantly *pretend* to have, then I'll happily upgrade. Besides, every time I come across some self-important asshole of a site that feels it's their duty to try to tell me what fucking browser I should be using, it makes me want to stick with FF2 that much more. They're so anxious to cram half-assed under-engineered so-called-"technologies" down my throat? Well then fuck them. I'd go back to FF1 if I thought doing so would give them a hard time. (At least it looks nicer out-of-the-box without having to install winestripe.) It's not my fault everyone insists on making their software worse with each release.
I thought you were switching to Arora?

FF2 is not just outdated and incompatible with some websites, it's not maintained anymore (since 2008 I think) and most probably contains known (and actively exploited) security holes (some of them may even work with JS disabled).

Cheers,
- Daniel
Jun 08 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Daniel Gibson" <metalcaedes gmail.com> wrote in message 
news:ispnrd$1rc$10 digitalmars.com...
 On 09.06.2011 07:43, Nick Sabalausky wrote:
 "Robert Clipsham" <robert octarineparrot.com> wrote in message
 news:isovbb$20ea$1 digitalmars.com...
 On 08/06/2011 22:46, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:isopmf$1lov$1 digitalmars.com...
 On 6/8/2011 1:12 PM, Nick Sabalausky wrote:
 But yea, maybe I will go ahead and just do it...
I think it's important that you do.
Hmm, well, I'm trying, but this time I can't even get the "register" link to work at all. I turn on JS, reload, it sits there "loading" for about three minutes (as opposed to a few seconds without JS), and after all that, clicking the "register" link still doesn't do anything at all. Seriously, how completely incompetent do they have to be to screw up something as incredibly basic as a link? Bunch of morons over there. Meh, maybe when someone makes a reddit alternative that actually fucking works I'll use it...
To be fair, they probably don't test in Firefox 2 any more, given that there have been numerous releases since and it's now unsupported.
1. It's a link for fuck's sake. It doesn't take IE9/Opera11/FF4 for a trivial damn <a href="...">...</a> to work properly. The fucking things have worked *exactly the same* since Mosaic (save for the "frame" extensions that nobody should ever use anyway). There *is no* cross-browser testing needed to get it right. There's barely any *thinking* needed to get it right. I could teach my sister to get it right. 2. If Mozilla ever decides to put out a successor to FF2 that isn't shit, and actually *does* follow the "customizability" that they constantly *pretend* to have, then I'll happily upgrade. Besides, every time I come across some self-important asshole of a site that feels it's their duty to try to tell me what fucking browser I should be using, it makes me want to stick with FF2 that much more. They're so anxious to cram half-assed under-engineered so-called-"technologies" down my throat? Well then fuck them. I'd go back to FF1 if I thought doing so would give them a hard time. (At least it looks nicer out-of-the-box without having to install winestripe.) It's not my fault everyone insists on making their software worse with each release.
I thought you were switching to Arora? FF2 is not just outdated and incompatible with some websites, it's not maintained anymore (since 2008 I think) and most probably contains known (and actively exploited) security holes (some of them may even work with JS disabled).
Meh, "In the process of", really. Which is unfortunate. I often use Arora for GitHub and BitBucket (I really don't like BitBucket though, GitHub and Gitorious are much better even though I'm an Hg guy, but now I'm digressing even more...) But Arora is still really lacking in some things, for instance, *really* bad handling of SSL certs that aren't 100% perfect (such as self-signed ones, which prevents me from using it for a lot of dev work on my local machine). And even though it's WebKit, which I thought was supposed to be really fast, it's actually just about as slow to load a page as FF2. Surprisingly slow. And I've come across a number of missing settings that, well, that I really miss. It's by far the most promising-looking browser out there (Hell, it's the only one still around with a UI that doesn't completely look like ass - first it was music players and disc authoring that all turned fisher-price, and now browsers, too, especially Chrome: my god what were they thinking on that horrid mess? Damn thing manages to make WinAmp look good). But there's a lot of contributions I'd want to make to Arora before I'd want to completely switch from FF2. Of course, that's easier said than done since 1. I'm spread way too thin already, and 2. It's C++ instead of something nice and modern like D (which probably has something to do with the occasional crashes I get with Arora).
Jun 08 2011
parent reply Kagamin <spam here.lot> writes:
Nick Sabalausky Wrote:

 But Arora is still really lacking in some things, for instance, *really* bad 
 handling of SSL certs that aren't 100% perfect (such as self-signed ones, 
 which prevents me from using it for a lot of dev work on my local machine).
Maybe you should import your CA certificate as trusted?
 And even though it's WebKit, which I thought was supposed to be really fast, 
 it's actually just about as slow to load a page as FF2. Surprisingly slow.
WebKit is not meant to be fast, it's slower than gecko (at least on windows).
 It's by far the most promising-looking browser out there (Hell, it's the 
 only one still around with a UI that doesn't completely look like ass
I don't remember, did you try Orca? Or you can write skin/plugin/extension for FF4 to make your sweet interface :)
Jun 15 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Kagamin" <spam here.lot> wrote in message 
news:it9ti2$2etv$1 digitalmars.com...
 Nick Sabalausky Wrote:

 But Arora is still really lacking in some things, for instance, *really* 
 bad
 handling of SSL certs that aren't 100% perfect (such as self-signed ones,
 which prevents me from using it for a lot of dev work on my local 
 machine).
Maybe you should import your CA certificate as trusted?
Arora's SSL capabilities are extremely limited. As far as I could tell, it didn't even have a way to do that. I've even had it crash when the cert wasn't perfect.
 And even though it's WebKit, which I thought was supposed to be really 
 fast,
 it's actually just about as slow to load a page as FF2. Surprisingly 
 slow.
WebKit is not meant to be fast, it's slower than gecko (at least on windows).
Isn't WebKit what Chrome uses? I thought that was supposed to be fast.
 It's by far the most promising-looking browser out there (Hell, it's the
 only one still around with a UI that doesn't completely look like ass
I don't remember, did you try Orca?
First I've heard of it. I'll look it up.
 Or you can write skin/plugin/extension for FF4 to make your sweet 
 interface :)
Yea, but that's the problem with FF. Everything about it sucks out-of-the-box, and it gives you no way to disable most of the suck. So you *have to* cram it full of add-ons just to make the damn thing usable. (At least it actually *has* a good range of addons available...)
Jun 15 2011
next sibling parent Kagamin <spam here.lot> writes:
Nick Sabalausky Wrote:

 WebKit is not meant to be fast, it's slower than gecko (at least on 
 windows).
Isn't WebKit what Chrome uses? I thought that was supposed to be fast.
I heard they hacked in their js engine which is supposed to be fast. Ever saw googlesyndication.js?
 Or you can write skin/plugin/extension for FF4 to make your sweet 
 interface :)
Yea, but that's the problem with FF. Everything about it sucks out-of-the-box, and it gives you no way to disable most of the suck. So you *have to* cram it full of add-ons just to make the damn thing usable. (At least it actually *has* a good range of addons available...)
That's what's meant by extensibility: if the default doesn't suit your preferences, you can customize it. Opera used to follow the opposite approach: mix all the usable stuff in so that the user has no need to extend it, but now it supports FF-style extensions.
Jun 15 2011
prev sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Nick Sabalausky wrote:
 Isn't WebKit what Chrome uses? I thought that was supposed to be fast.
Hah, you fell for their advertising, you of all people! It is "fast" for javascript heavy sites. For static HTML, all engines are pretty close to each other.

Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 15 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
You don't need to provide a real e-mail address to register on reddit, afaik.
Jun 08 2011
prev sibling next sibling parent reply Brad Anderson <eco gnuk.net> writes:
On Wed, Jun 8, 2011 at 12:46 AM, Lars T. Kyllingstad
<public kyllingen.nospamnet> wrote:

 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an advertisement for
 D.

 -Lars
I found the comments on the Hacker News post <http://news.ycombinator.com/item?id=2631964> about this article more interesting.

Regards,
Brad Anderson
Jun 08 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/8/11 4:38 PM, Brad Anderson wrote:
 On Wed, Jun 8, 2011 at 12:46 AM, Lars T. Kyllingstad
 <public kyllingen.nospamnet> wrote:

     http://www.reddit.com/r/programming/comments/hudvd/
     the_go_programming_language_or_why_all_clike/

     The author presents a "wish list" for his perfect systems programming
     language, and claims that Go is the only one (somewhat) fulfilling it.
     With the exception of item 7, the list could well be an
     advertisement for
     D.

     -Lars


 I found the comments on the Hacker News post
 <http://news.ycombinator.com/item?id=2631964> about this article more
 interesting.

 Regards,
 Brad Anderson
Agreed. The top poster does repeat a point made by others: D does fail on point 7. Allow me to paste it:

=============
7. Module Library and Repository

I want all the niceties I have grown used to in scripting languages built-in or part of the standard library. A public package repository with a decent portable package manager is even better. Typical packages include internet protocols, parsing of common syntaxes, GUI, crypto, common mathematical algorithms, data processing and so on. (Example: Perl 5 CPAN)
=============

That's it. We need a package management expert on board to either revive dsss or another similar project, or define a new package manager altogether. No "yeah I have some code somewhere feel free to copy from it"; we need professional execution. Then we need to make that tool part of the standard distribution such that library discovery, installation, and management is as easy as running a command.

I'm putting this up for grabs. It's an important project of high impact. Wondering what you could do to help D? Take this to completion.

Andrei
Jun 08 2011
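To make item 7 concrete, here is a purely illustrative sketch, in D, of the kind of per-package metadata such a tool would have to track. Everything in it is invented for this example - the module, the struct, the field names, and the packages it mentions; no such tool or format ships with D, so treat it as a sketch of the idea rather than a description of an existing system.

    // Hypothetical sketch only: nothing below exists in any real D tool.
    // It just illustrates the kind of per-package record a dsss-like manager
    // would need for discovery, installation, and dependency resolution.
    module mylib.pkginfo;

    struct PackageInfo
    {
        string   name;      // name as listed in the public repository
        string   release;   // release number ("version" is a D keyword)
        string   license;   // license identifier
        string[] depends;   // required packages, with minimum releases
    }

    enum info = PackageInfo(
        "mylib",
        "0.3.1",
        "Boost",
        ["libhttp >= 0.2", "libcrypto >= 1.0"]
    );

A command-line front end would then only need to read records like this from a central index to know what to fetch, in what order, and where to put it.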
next sibling parent reply Caligo <iteronvexor gmail.com> writes:
On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we need
 professional execution. Then we need to make that tool part of the standard
 distribution such that library discovery, installation, and management is as
 easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
Jun 10 2011
next sibling parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Caligo wrote:

 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
For software libraries it is a different case imho, for the following reasons:

- for most software development needs, not enough libraries get packaged by the major distro's
- there's no way library authors are going to maintain packages of their libs for all the popular distro's with their incompatible systems
- distro maintainers often package older versions, sometimes they are years behind
- most, if not all, native package management systems deal poorly with the need for having several versions of a library available. So there is still a need for tools like virtualenv. With dsss it's also trivial to setup multiple installations to manage version requirements
- language specific package management can span across operating systems

The net result is that languages which have package managers (python, ruby, haskell, perl and now also .net) have in fact far more and up to date libraries available than any distro will ever be able to manage.
Jun 10 2011
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Caligo:

 Besides, the idea of some kind of package
 management for a programming language is one of the worst ideas ever,
 specially when it's a system programming language.
D seems acceptable as an application programming language too. And in Haskell I am appreciating Cabal with Hackage: http://www.haskell.org/cabal/ Bye, bearophile
Jun 10 2011
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/10/11 11:29 AM, Caligo wrote:
 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org>  wrote:
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we need
 professional execution. Then we need to make that tool part of the standard
 distribution such that library discovery, installation, and management is as
 easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
I don't find this counterargument very strong but am attracted to it because it entails no work on my part :o). FWIW other language distributions that position themselves as system languages do embed package management. I personally don't think the two notions exclude one another (without being an expert).

At least, a variety of non-standard libraries should avail themselves of a simple "just works" package and versioning amenity.

Andrei
Jun 10 2011
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On 2011-06-10 09:29, Caligo wrote:
 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.
 
 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
 
 
 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
Personally, I don't care much. However, there _are_ other *nix languages which do this sort of thing - e.g. Perl has CPAN. And even if the distro doesn't use it, it's likely to make the job of getting their packages set up properly easier. And if having a packaging system for D libraries helps boost the language, then I have no problem with it.

But ultimately, how much value it adds will depend on what it does and how it works, and we won't know that until it's been designed and implemented, which obviously hasn't been done yet.

- Jonathan M Davis
Jun 10 2011
prev sibling next sibling parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Fri, Jun 10, 2011 at 9:29 AM, Caligo <iteronvexor gmail.com> wrote:

 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we need
 professional execution. Then we need to make that tool part of the
standard
 distribution such that library discovery, installation, and management is
as
 easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
It doesn't have to be hard if you build the package manager in such a way that it can be integrated into the OS package manager, whether that means letting the OS package manager modify the language package manager's database or just adding a switch that turns your package manager into a dumb build tool so dependency checks can be left to the OS package manager. That's my theory, anyway.
Jun 10 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
I think we should go for immutable packages. It makes the
package manager infinitely simpler: if the file is there, use it.
If not, download it, then use it. Since it's immutable, you can
always use your file.

How do you push updates then? Easy - change the name. Put the version
number in the module name.
Jun 10 2011
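A minimal sketch of what the version-in-the-module-name idea could look like in practice. All names here are made up (foolib_1_2 and app exist nowhere); it only illustrates the mechanism, not any existing convention.

    // --- file foolib_1_2.d (hypothetical) ---
    // The 1.2 release is published once under a versioned module name and
    // never modified afterwards, so it can be cached and trusted forever.
    module foolib_1_2;

    string greet() { return "hello from foolib 1.2"; }

    // --- file app.d (hypothetical) ---
    // An application pins a release simply by what it imports; a later
    // foolib_1_3 could be installed alongside without touching this file.
    module app;

    import std.stdio;
    import foolib_1_2;

    void main()
    {
        writeln(greet());
    }

Upgrading then becomes an explicit, local edit (change the import), and the files a manager has already downloaded never need to change, which is what keeps the manager simple.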
next sibling parent Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Fri, Jun 10, 2011 at 11:10 AM, Adam D. Ruppe
<destructionator gmail.com>wrote:

 I think we should go for immutable packages. It makes the
 package managed infinitely simpler: if the file is there, use it.
 If not, download it, then use it. Since it's immutable, you can
 always use your file.

 How do you push updates then? Easy - change the name. Put the version
 number in the module name.
I agree, and this is the approach of Java tools like Maven and Ant/Ivy. Package a release and label it with the version, then never ever modify it. This makes it hard to work with the "trunk" of a project (though an exceptional case could be made if there's enough justification), but with enough developmental releases, it seems to work pretty well.
Jun 10 2011
prev sibling parent Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-06-10 at 11:29 -0700, Andrew Wiley wrote:
[ . . . ]
 I agree, and this is the approach of Java tools like Maven and
 Ant/Ivy. Package a release and label it with the version, then never
 ever modify it. This makes it hard to work with the "trunk" of a
 project (though an exceptional case could be made if there's enough
 justification), but with enough developmental releases, it seems to
 work pretty well.
Working with trunk is easy, there is the snapshots repository (at least at Codehaus). Artefacts in this repository are NOT immutable and build frameworks have a responsibility to check and act accordingly. Gradle does, Maven sometimes does, Ant/Ivy tends to get it wrong without extra help.

--
Russel.

Dr Russel Winder                 t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road               m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK              w: www.russel.org.uk  skype: russel_winder
Jun 10 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrew Wiley" <wiley.andrew.j gmail.com> wrote in message 
news:mailman.776.1307728872.14074.digitalmars-d puremagic.com...
 On Fri, Jun 10, 2011 at 9:29 AM, Caligo <iteronvexor gmail.com> wrote:

 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either 
 revive
 dsss or another similar project, or define a new package manager
altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we need
 professional execution. Then we need to make that tool part of the
standard
 distribution such that library discovery, installation, and management 
 is
as
 easy as running a command.

 I'm putting this up for grabs. It's an important project of high 
 impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
It doesn't have to be hard if you build the package manager in such a way that it can be integrated into the OS package manager, whether that means letting the OS package manager modify the language package manager's database or just adding a switch that turns your package manager into a dumb build tool so dependency checks can be left to the OS package manager. That's my theory, anyway.
I'd say one critical requirement for a package manager is that it be based around the idea of supporting multiple versions of the same lib at the same time. If you're just going to re-invent your own little DLL hell, you'd almost be better off just going with the OS package manager.
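To be concrete about what "multiple versions at the same time" could mean in practice, here's a rough D sketch (the store layout and the function are made up for illustration): every installed version gets its own directory, and the tool just hands the compiler per-project import paths.

    import std.path : buildPath, expandTilde;

    // Each installed version lives in its own directory; a project pins the
    // versions it wants and the tool turns that pin list into -I flags for
    // dmd. Two projects on the same machine can then build against different
    // versions of the same library without conflict.
    string[] importFlags(string[string] pinnedVersions)   // name -> version
    {
        string[] flags;
        foreach (name, ver; pinnedVersions)
            flags ~= "-I" ~ buildPath(expandTilde("~/.d-packages"),
                                      name ~ "-" ~ ver, "import");
        return flags;
    }

    // Project A might pin ["dwt": "1.0"] while project B pins ["dwt": "2.0"];
    // both stay installed side by side and neither build breaks the other.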
Jun 10 2011
next sibling parent Caligo <iteronvexor gmail.com> writes:
On Fri, Jun 10, 2011 at 5:48 PM, Nick Sabalausky <a a.a> wrote:
 "Andrew Wiley" <wiley.andrew.j gmail.com> wrote in message
 news:mailman.776.1307728872.14074.digitalmars-d puremagic.com...
 On Fri, Jun 10, 2011 at 9:29 AM, Caligo <iteronvexor gmail.com> wrote:

 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either
 revive
 dsss or another similar project, or define a new package manager
altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we ne=
ed
 professional execution. Then we need to make that tool part of the
standard
 distribution such that library discovery, installation, and management is
 as easy as running a command.

 I'm putting this up for grabs. It's an important project of high
 impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, especially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
It doesn't have to be hard if you build the package manager in such a way
 that it can be integrated into the OS package manager, whether that means
 letting the OS package manager modify the language package manager's
 database or just adding a switch that turns your package manager into a
 dumb
 build tool so dependency checks can be left to the OS package manager.
 That's my theory, anyway.
I'd say one critical requirement for a package manager is that it be based
 around the idea of supporting multiple versions of the same lib at the same
 time. If you're just going to re-invent your own little DLL hell you'd
 almost be better off just going with the OS package manager.
I think what you are describing is called atomic builds, and I think any package manager should have it. Nix is the only one I know of that supports such a feature: http://nixos.org/nix/
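The core trick is easy to sketch in D: stage the files somewhere temporary, then move them into the versioned store location in one final step, so a package is either completely installed or not there at all (paths and names are made up here, and Nix itself of course does considerably more):

    import std.file : exists, mkdirRecurse, rename;
    import std.path : buildPath;

    // Stage the package in a temporary directory, then move it into its final
    // versioned location with a single rename, so readers never see a
    // half-installed package (works as long as both paths are on the same
    // filesystem).
    void installAtomically(string staging, string store, string name, string ver)
    {
        auto dest = buildPath(store, name ~ "-" ~ ver);
        if (exists(dest))
            return;                 // already installed, nothing to do
        mkdirRecurse(store);
        rename(staging, dest);      // the "atomic" step
    }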
Jun 10 2011
prev sibling parent Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Fri, Jun 10, 2011 at 3:48 PM, Nick Sabalausky <a a.a> wrote:

 "Andrew Wiley" <wiley.andrew.j gmail.com> wrote in message
 news:mailman.776.1307728872.14074.digitalmars-d puremagic.com...
 On Fri, Jun 10, 2011 at 9:29 AM, Caligo <iteronvexor gmail.com> wrote:

 On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 That's it. We need a package management expert on board to either
 revive
 dsss or another similar project, or define a new package manager
altogether.
 No "yeah I have some code somewhere feel free to copy from it"; we
need
 professional execution. Then we need to make that tool part of the
standard
 distribution such that library discovery, installation, and management
 is
as
 easy as running a command.

 I'm putting this up for grabs. It's an important project of high
 impact.
 Wondering what you could do to help D? Take this to completion.


 Andrei
Andrei, I have to respectfully disagree with you on that, sorry. D is supposed to be a system programming language, not some scripting language like Ruby. Besides, the idea of some kind of package management for a programming language is one of the worst ideas ever, specially when it's a system programming language. You have no idea how much pain and suffering it's going to cause the OS developers and package maintainers. I can see how the idea might be attractive to non-*nix users, but most other *nix OSs have some kind of package management system and searching for, installing, and managing software is as easy as running a command.
It doesn't have to be hard if you build the package manager in such a way that it can be integrated into the OS package manager, whether that means letting the OS package manager modify the language package manager's database or just adding a switch that turns your package manager into a dumb build tool so dependency checks can be left to the OS package manager. That's my theory, anyway.
I'd say one critical requirement for a package manager is that it be based around the idea of supporting multiple versins of the same lib at the same time. If you're just going to re-invent your own little DLL hell you'd almost be better off just going with the OS package manager.
Well, yes, but if the OS package manager can't handle multiple versions of the same lib (and so far, I haven't seen one that can), making that work isn't a necessary part of OS package manager integration. I agree that the language package manager should be able to manage multiple versions in whatever local stores it maintains. The trick is that if I install a package through the OS package manager, there needs to be a way for the language package manager to know what was installed and use that if possible. And when an application is released, it needs to be possible to build it as an OS package depending entirely on other OS packages instead of the language package manager's local stores; if the language package manager is designed with that in mind, it should become much more useful. This is all just hand-waving at this point, but if a sane method can be devised to make this sort of thing happen, the end result will be much better.
Jun 11 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-10 20:00, Andrew Wiley wrote:
 On Fri, Jun 10, 2011 at 9:29 AM, Caligo <iteronvexor gmail.com
 <mailto:iteronvexor gmail.com>> wrote:

     On Wed, Jun 8, 2011 at 6:06 PM, Andrei Alexandrescu
     <SeeWebsiteForEmail erdani.org
     <mailto:SeeWebsiteForEmail erdani.org>> wrote:
      > That's it. We need a package management expert on board to either
     revive
      > dsss or another similar project, or define a new package manager
     altogether.
      > No "yeah I have some code somewhere feel free to copy from it";
     we need
      > professional execution. Then we need to make that tool part of
     the standard
      > distribution such that library discovery, installation, and
     management is as
      > easy as running a command.
      >
      > I'm putting this up for grabs. It's an important project of high
     impact.
      > Wondering what you could do to help D? Take this to completion.
      >
      >
      > Andrei
      >

     Andrei, I have to respectfully disagree with you on that, sorry.

     D is supposed to be a system programming language, not some scripting
     language like Ruby.  Besides, the idea of some kind of package
     management for a programming language is one of the worst ideas ever,
     specially when it's a system programming language.  You have no idea
     how much pain and suffering it's going to cause the OS developers and
     package maintainers.  I can see how the idea might be attractive to
     non-*nix users, but most other *nix OSs have some kind of package
     management system and searching for, installing, and managing software
     is as easy as running a command.

 It doesn't have to be hard if you build the package manager in such a
 way that it can be integrated into the OS package manager, whether that
 means letting the OS package manager modify the language package
 manager's database or just adding a switch that turns your package
 manager into a dumb build tool so dependency checks can be left to the
 OS package manager. That's my theory, anyway.
Windows doesn't have an OS package manager and Mac OS X doesn't have one out of the box. Only on Linux are there several package managers to integrate with, and that seems like a lot of work. I think it's easier to build a custom one, specific to D. -- /Jacob Carlborg
Jun 16 2011
prev sibling next sibling parent Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Wed, Jun 8, 2011 at 4:06 PM, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 On 6/8/11 4:38 PM, Brad Anderson wrote:

 On Wed, Jun 8, 2011 at 12:46 AM, Lars T. Kyllingstad
 <public kyllingen.nospamnet> wrote:

    http://www.reddit.com/r/programming/comments/hudvd/
    the_go_programming_language_or_why_all_clike/

    The author presents a "wish list" for his perfect systems programming
    language, and claims that Go is the only one (somewhat) fulfilling it.
    With the exception of item 7, the list could well be an
    advertisement for
    D.

    -Lars


 I found the comments on the Hacker News post
 <http://news.ycombinator.com/item?id=2631964> about this article more
 interesting.

 Regards,
 Brad Anderson
Agreed. The top poster does repeat a point made by others: D does fail on point 7. Allow me to paste it: ============= 7. Module Library and Repository I want all the niceties I have grown used to in scripting languages built-in or part of the standard library. A public package repository with a decent portable package manager is even better. Typical packages include internet protocols, parsing of common syntaxes, GUI, crypto, common mathematical algorithms, data processing and so on. (Example: Perl 5 CPAN) ============= That's it. We need a package management expert on board to either revive dsss or another similar project, or define a new package manager altogether. No "yeah I have some code somewhere feel free to copy from it"; we need professional execution. Then we need to make that tool part of the standard distribution such that library discovery, installation, and management is as easy as running a command. I'm putting this up for grabs. It's an important project of high impact. Wondering what you could do to help D? Take this to completion.
I'm not an expert, but I've been quietly working on a build tool that I'm hoping to make into a drop-in replacement for dsss with the incremental build advantages of xfbuild. I'll toss it on github when it can parse a dsss config and build from that. Right now, it's basically a very simple xfbuild. As for the packaging aspect of dsss, I'll have to take a closer look at how it was originally implemented.
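For anyone curious, the core of the incremental part is tiny; the bookkeeping around it is what takes the time. A minimal sketch in D (the function is made up, and a real tool like xfbuild also has to track each module's imports and rebuild dependents, which is the hard part):

    import std.file : exists, timeLastModified;

    // Recompile a module only if its object file is missing or older than
    // the source file.
    bool needsRebuild(string sourceFile, string objectFile)
    {
        if (!exists(objectFile))
            return true;
        return timeLastModified(sourceFile) > timeLastModified(objectFile);
    }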
Jun 10 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive 
 dsss or another similar project, or define a new package manager 
 altogether. No "yeah I have some code somewhere feel free to copy from 
 it"; we need professional execution. Then we need to make that tool part 
 of the standard distribution such that library discovery, installation, 
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact. 
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
Jun 10 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/10/11 6:14 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
I think that's an excellent idea. Jacob, would you be interested in working on that? Andrei
Jun 10 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:isu8vs$d2f$1 digitalmars.com...
 On 6/10/11 6:14 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
I think that's an excellent idea. Jacob, would you be interested in working on that?
If he isn't interested and no one else wants to jump in, then I'd be willing to volunteer for it. I'm already familiar with DVM's internals from doing the Windows port. Or maybe Andrew Wiley would want to be involved. He did say that he hoped to turn his xfbuild-like tool into a DSSS replacement. Might be good not to duplicate efforts.
Jun 11 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-11 09:28, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:isu8vs$d2f$1 digitalmars.com...
 On 6/10/11 6:14 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>   wrote in message
 news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
I think that's an excellent idea. Jacob, would you be interested in working on that?
If he isn't interested and no one else wants to jump in, then I'd be willing to volunteer for it. I'm already familiar with DVM's internals from doing the Windows port. Or maybe Andrew Wiley would want to be involved. He did say that he hoped to turn his xfbuild-like tool into a DSSS replacement. Might be good not to duplicate efforts.
See my reply to Andrei. -- /Jacob Carlborg
Jun 16 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-11 01:18, Andrei Alexandrescu wrote:
 On 6/10/11 6:14 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message
 news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
I think that's an excellent idea. Jacob, would you be interested in working on that? Andrei
I'm already working on another tool for handling D packages; DVM will never be able to do that. To elaborate, I'm building this tool (and later rebuilding DVM) as a library with a thin wrapper providing the command line interface. So they could be built into one single tool (from the user's point of view) if people want that. -- /Jacob Carlborg
Jun 16 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-11 01:14, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:isovj2$2133$1 digitalmars.com...
 That's it. We need a package management expert on board to either revive
 dsss or another similar project, or define a new package manager
 altogether. No "yeah I have some code somewhere feel free to copy from
 it"; we need professional execution. Then we need to make that tool part
 of the standard distribution such that library discovery, installation,
 and management is as easy as running a command.

 I'm putting this up for grabs. It's an important project of high impact.
 Wondering what you could do to help D? Take this to completion.
Just a thought: DVM is already set up to handle managing multiple versions of DMD. And I don't think it's a stretch to figure that support for GDC and LDC would be natural extensions at some point, plus some dmd.conf/sc.ini management functionality (which would be needed for installing D libraries). So maybe DVM could be expanded to handle arbitrary D packages as well?
DVM will never handle arbitrary D packages. I'm working on another tool for that though: https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D I'm working on both of the tools mentioned in the above link. -- /Jacob Carlborg
Jun 17 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-09 01:06, Andrei Alexandrescu wrote:
 On 6/8/11 4:38 PM, Brad Anderson wrote:
 On Wed, Jun 8, 2011 at 12:46 AM, Lars T. Kyllingstad
 <public kyllingen.nospamnet> wrote:

 http://www.reddit.com/r/programming/comments/hudvd/
 the_go_programming_language_or_why_all_clike/

 The author presents a "wish list" for his perfect systems programming
 language, and claims that Go is the only one (somewhat) fulfilling it.
 With the exception of item 7, the list could well be an
 advertisement for
 D.

 -Lars


 I found the comments on the Hacker News post
 <http://news.ycombinator.com/item?id=2631964> about this article more
 interesting.

 Regards,
 Brad Anderson
Agreed. The top poster does repeat a point made by others: D does fail on point 7. Allow me to paste it: ============= 7. Module Library and Repository I want all the niceties I have grown used to in scripting languages built-in or part of the standard library. A public package repository with a decent portable package manager is even better. Typical packages include internet protocols, parsing of common syntaxes, GUI, crypto, common mathematical algorithms, data processing and so on. (Example: Perl 5 CPAN) ============= That's it. We need a package management expert on board to either revive dsss or another similar project, or define a new package manager altogether. No "yeah I have some code somewhere feel free to copy from it"; we need professional execution. Then we need to make that tool part of the standard distribution such that library discovery, installation, and management is as easy as running a command. I'm putting this up for grabs. It's an important project of high impact. Wondering what you could do to help D? Take this to completion. Andrei
I'm already working on a package management tool for D. -- /Jacob Carlborg
Jun 16 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Jun 16 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
I usually post late, when I actually have something useful to show. As you said in another thread, perhaps the log thread, we should stop the discussion and just implement something. -- /Jacob Carlborg
Jun 17 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D -- /Jacob Carlborg
Jun 17 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-06-17 at 21:29 +0200, Jacob Carlborg wrote:
 On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Just to chip in that when Groovy added the Grapes subsystem so that the Grab annotation could be used to specify dependencies, the usability of Groovy for writing scripts shot up markedly. The default resolver is the Maven repository, but other resolvers can be added using the GrabResolver annotation. For Dake/Orb it might be wise to allow for alternate repositories as well as the central one. Lessons from the Debian/Ubuntu/PPA systems can be picked up here as well. The central repository is great for authorized and accepted packages (by whatever authority authorizes) but having PPAs gives Ubuntu an edge over Debian in the flexibility and ability to run specialist configurations. -- Russel.
Jun 18 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-18 11:17, Russel Winder wrote:
 On Fri, 2011-06-17 at 21:29 +0200, Jacob Carlborg wrote:
 On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Just to chip in that when Groovy added the Grapes subsystem so that the Grab annotation could be used to specify dependencies, the usability of Groovy for writing scripts shot up markedly. The default resolver is the Maven repository, but other resolvers can be added using the GrabResolver annotation. For Dake/Orb it might be wise to allow for alternate repositories as well as the central one.
Yes, of course. I'll provide a "source" function that sets the repository. I didn't go into every detail on the wiki, especially not which functions are available in the config/spec files.
 Lessons from the Debian/Ubuntu/PPA systems can be picked up here as
 well.  The central repository is great for authorized and accepted
 packages (by whatever authority authorizes) but having PPAs gives Ubuntu
 an edge over Debian in the flexibility and ability to run specialist
 configurations.
-- /Jacob Carlborg
Jun 18 2011
prev sibling parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Some comments to start the suggestion flood ;-)

It seems like building orb packages would only work with one specific build system, Dake. I understand that we need a standard way to build packages to allow automated package builds, but I think it should also work with other build systems (waf, make, autotools...). The solution most Linux package managers use is to let the 'source package' provide 'build' and 'package' methods. Those methods are then required to store the files in a specific temporary folder, and all files from that folder form the package. Having the config files in Ruby seems like a perfect fit for this approach.

Also: why does an orbspec have to specify its imports? I think it should rather specify the packages it uses. With some more work it could even be possible to let dmd find all needed imports and guess the needed packages from those imports.

Another detail: I wouldn't use .orb for the package extension. We might want to change the compression type later (tar+lzma for example), so .orb.zip would be better. Then we could just use .orb.tar.xz with the new compression. (This is also how Archlinux works, for example .pkg.tar.xz.)

It seems like C libraries would also be packaged with orb (the sqlite example). This might be needed, but it will be a major pain for Linux packagers, as it'll likely cause conflicts. I think it should be possible for those Linux packages to hook into orb. Orb should recognize something like 'orb --external libsqlite:library --version 3.7.0' and then just assume that sqlite is installed (but it should not assume that sqlite's dependencies are installed - those would have to be registered with --external again). This approach should work well for D packages (so a D package is in Orb first, but some distribution decides to package it; in that case they can add the orb hooks to their packages). It's unlikely that a distribution will change all C packages though. Probably at some point orb should interact with 'pkg-config' to look for already installed C packages; I'm not sure what's a good solution for this problem.

'type :library' in the orbspec suggests that there'll be different package types. I think this is a good idea so we don't have to use package name hacks like 'libsqlite' / 'libsqlite-dev' (debian). Package types which make sense:
:doc  --> documentation. Later possibly in a specific format?
:lib  --> shared libraries (.so/.dll) when available
:slib --> static libs (.a/.lib)
:dev  --> header files (.di)
:src  --> source package used to build other packages

We should also think about how the versioning scheme would interact with git/hg/svn snapshots and alpha/beta/rc releases. The Debian package system doesn't have explicit support for this, which leads to strange version numbers. Archlinux even uses different packages for git versions (libsqlite-git), which also isn't a good solution.

Also, here's a list of variables a source package can set in Archlinux: https://wiki.archlinux.org/index.php/PKGBUILD It might be a good idea to have a look at this and take some inspiration from it. Most of these variables are also useful to Orbit. Here's an example Archlinux PKGBUILD: http://pastebin.com/MeXiLDV9 The Archlinux package system has the easiest source package syntax I know. -- Johannes Pfau
Jun 18 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Sat, 2011-06-18 at 21:00 +0200, Johannes Pfau wrote:
[ . . . ]
 It seems like building orb packages would only work with one specific
 build system, Dake. I understand that we need a standard way to build
 packages to allow automated package builds, but I think it should also
 work with other build systems (waf, make, autotools...).
Or SCons.
 The solution most linux package managers use is to let the 'source
 package' provide 'build' and 'package' methods. Those methods are then
 required to store the files in a specific temporary folder and all files
 from that folder will form the package. Having the config files in Ruby
 seems like a perfect fit for this approach.
 Also: why does a orbspec have to specify its imports? I think it
 should rather specify the packages it uses. With some more work it
 could be possible to even let dmd find all needed imports and guess the
 needed packages from these imports.
If the source code contains imports then surely Dake can deduce this fact rather than it having to be specified a second time in the Dakefile -- replication of data is a bad thing. [ . . . ] -- Russel.
Jun 18 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-18 21:25, Russel Winder wrote:
 On Sat, 2011-06-18 at 21:00 +0200, Johannes Pfau wrote:
 [ . . . ]
 It seems like building orb packages would only work with one specific
 build system, Dake. I understand that we need a standard way to build
 packages to allow automated package builds, but I think it should also
 work with other build systems (waf, make, autotools...).
Or SCons.
 The solution most linux package managers use is to let the 'source
 package' provide 'build' and 'package' methods. Those methods are then
 required to store the files in a specific temporary folder and all files
 from that folder will form the package. Having the config files in Ruby
 seems like a perfect fit for this approach.

 Also: why does a orbspec have to specify its imports? I think it
 should rather specify the packages it uses. With some more work it
 could be possible to even let dmd find all needed imports and guess the
 needed packages from these imports.
If the source code contains imports then surely Dake can deduce this fact rather than it having to be specified a second time in the Dakefile -- replication of data is a bad thing. [ . . . ]
Read my reply to Johannes. -- /Jacob Carlborg
Jun 19 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-18 21:00, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Some comments to start the suggestion flood ;-) It seems like building orb packages would only work with one specific build system, Dake. I understand that we need a standard way to build packages to allow automated package builds, but I think it should also work with other build systems (waf, make, autotools...). The solution most linux package managers use is to let the 'source package' provide 'build' and 'package' methods. Those methods are then required to store the files in a specific temporary folder and all files from that folder will form the package. Having the config files in Ruby seems like a perfect fit for this approach.
The default build system is Dake. Other build systems are supported as well, like make, dsss, cmake and others. I've been thinking about either having methods for different events, like before/after building/installing, or having a "shell" build method available. For example, if autotools is not an available build method you could still run it that way; the tool would basically just execute a string in the shell. You can also read at the top: "These tools can work individually on their own but work much better together." Also note that this is kind of an overview of my ideas. I can go into much more detail if necessary, but I thought it was too much detail (for now) to write down basically the complete orbspec specification.
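To make the "shell" build method concrete, the tool side could be as small as this rough D sketch (the function name and error handling are made up, and it assumes a std.process that can capture output and exit status):

    import std.process : executeShell;
    import std.stdio : writeln;

    // The orbspec supplies a command string; the tool runs it and fails the
    // install if the command fails.
    void runShellBuild(string command)
    {
        auto result = executeShell(command);
        writeln(result.output);
        if (result.status != 0)
            throw new Exception("Build command failed: " ~ command);
    }

    // e.g. runShellBuild("./configure && make");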
 Also: why does a orbspec have to specify its imports? I think it
 should rather specify the packages it uses. With some more work it
 could be possible to even let dmd find all needed imports and guess the
 needed packages from these imports.
I didn't think straight when I wrote that at first. I was thinking about having pre-built libraries in the package, with a list of the import files needed by the library. But I have already changed that to just "files", meaning all files necessary to build the package.
 Another detail: I wouldn't use .orb for package extensions. We might
 want to change the compression type later (tar+lzma for example),
 so .orb.zip would be better. Then we could just use .orb.tar.xz with
 the new compression. (This is also how archlinux works, for
 example .pkg.tar.xz)
Ok, I can do that. I chose zip because that is available in Tango and Phobos.
 It seems like C libraries would also be packaged with orb (the sqlite
 example). This might be needed, but it will be a major pita for linux
 packagers, as it'll likely cause conflicts. I think it should be
 possible for those linux packages to hook into orb. Orb should
 recognize something like 'orb --external libsqlite:library
 --version 3.7.0' and then just assume that sqlite is installed (but it
 should not assume that sqlites dependencies are installed - those would
 have to be registered with --external again). This approach should work
 well for D packages (so a D package is in Orb first, but some
 distribution decides to package it. In this case they can add the orb
 hooks to their packages). It's unlikely that a distribution will change
 all C packages though. Probably at some time orb should interact with
 'pkg-config' to look for already installed C packages, I'm not sure
 what's a good solution for this problem.
With the sqlite example, I was actually thinking about bindings. But as you say, it would be good to be able to specify external dependencies, like C libraries.
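For the pkg-config idea mentioned above, the check itself would be cheap; a rough D sketch of what "treat it as external" could look like (the function and surrounding bookkeeping are invented, and it assumes a std.process that can capture output and that pkg-config is on the PATH):

    import std.process : execute;
    import std.string : strip;

    // Ask pkg-config whether a C library is already installed system-wide,
    // and which version, so the package manager can record it as an external
    // dependency instead of shipping it.
    bool externalCLibrary(string pkg, out string ver)
    {
        auto result = execute(["pkg-config", "--modversion", pkg]);
        if (result.status != 0)
            return false;          // not installed, or pkg-config missing
        ver = result.output.strip();
        return true;
    }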
 'type :library' in the orbspec suggests that there'll be different
 package types. I think this is a good idea so we don't have to use
 package name hacks like 'libsqlite' 'libsqlite-dev' (debian)
 Package types which make sense:
 :doc -->  documentation. Later possibly in a specific format?
 :lib -->  shared libraries (.so/.dll) when available
 :slib -->  static lib (.a/.lib)
 :dev -->  header files (.di)
 :src -->  source package used to build other packages
Yes, exactly.
 We should also think about how the versioning scheme would interact with
 git/hg/svn whatever snapshots and alpha/beta/rc releases. The debian
 package system doesn't have explicit support for this which leads to
 strange version numbers. Archlinux even uses different packages for git
 versions (libsqlite-git) which also isn't a good solution.
If you have any ideas I'm listening.
 Also here's a list of variables a source package can set in Archlinux:
 https://wiki.archlinux.org/index.php/PKGBUILD
 It might be a good idea to have a look at this and take some
 inspiration from it. Most of these variables are also useful to orbit.

 Here's an example Archlinux PKGBUILD:
 http://pastebin.com/MeXiLDV9
 The archlinux package system has the easiest source package syntax I
 know.
I'll have a look. -- /Jacob Carlborg
Jun 19 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-06-18 21:00, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-06-16 23:27, Andrei Alexandrescu wrote:
 On 6/16/11 4:19 PM, Jacob Carlborg wrote:
 I'm already working on a package management tool for D.
Excellent. Suggestion: at the risk of getting flooded with suggestions, post your design early and often. Andrei
Posting my ideas here as well: https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D
Some comments to start the suggestion flood ;-) It seems like building orb packages would only work with one specific build system, Dake. I understand that we need a standard way to build packages to allow automated package builds, but I think it should also work with other build systems (waf, make, autotools...). The solution most linux package managers use is to let the 'source package' provide 'build' and 'package' methods. Those methods are then required to store the files in a specific temporary folder and all files from that folder will form the package. Having the config files in Ruby seems like a perfect fit for this approach.
The default build system is Dake. Other build systems are supported as well, like make, dsss, cmake and others. I been thinking about either having methods for different events, like before/after building/installing. Or having a "shell" build method available. For example, if autotools is not an available build method you could run that any way. The tool would basically just execute a string in the shell. You can also read at the top: "These tools can work individually on their own but work much better together.".
Sorry, I must have missed that sentence.
Also note that this is kind of like an overview of my ideas. I can go 
into much more detail if necessary. But I though it was too much
detail (for now) if I wrote down basically the complete orbspec
specification.
I understand that. I just wanted to post my thoughts about some things I consider important to package management.
 Also: why does a orbspec have to specify its imports? I think it
 should rather specify the packages it uses. With some more work it
 could be possible to even let dmd find all needed imports and guess
 the needed packages from these imports.
I didn't think straight when I wrote that at first. I was thinking about having pre-built libraries in the package and that listed the import files needed by the library. But I have already changed that to just "files", meaning all files necessary to build the package.
I still don't understand that completely. So does it list the files which will be contained in the package later, or file dependencies contained in other packages? (I'm asking because I'm not familiar with file-dependencies in package management systems. Most package management systems make a package depend on other packages, but not on the files in the packages)
 Another detail: I wouldn't use .orb for package extensions. We might
 want to change the compression type later (tar+lzma for example),
 so .orb.zip would be better. Then we could just use .orb.tar.xz with
 the new compression. (This is also how archlinux works, for
 example .pkg.tar.xz)
Ok, I can do that. I chose zip because that is available in Tango and Phobos.
Yes, right now zip seems to be the best choice, but at some point the small size difference between zip and lzma could matter.
 It seems like C libraries would also be packaged with orb (the sqlite
 example). This might be needed, but it will be a major pita for linux
 packagers, as it'll likely cause conflicts. I think it should be
 possible for those linux packages to hook into orb. Orb should
 recognize something like 'orb --external libsqlite:library
 --version 3.7.0' and then just assume that sqlite is installed (but
 it should not assume that sqlites dependencies are installed - those
 would have to be registered with --external again). This approach
 should work well for D packages (so a D package is in Orb first, but
 some distribution decides to package it. In this case they can add
 the orb hooks to their packages). It's unlikely that a distribution
 will change all C packages though. Probably at some time orb should
 interact with 'pkg-config' to look for already installed C packages,
 I'm not sure what's a good solution for this problem.
With the sqlite example, I was actually thinking about bindings. But as you say, it would be good to be able to specify external dependencies, like C libraries.
I totally forgot about bindings! I'm quite uncertain about packaging C libs though: if we don't package C libs, Windows users will have to acquire all C libs manually. But if we do provide C libraries, we have to decide whether we also ship the C headers, etc., and it will be some more maintenance work.
 'type :library' in the orbspec suggests that there'll be different
 package types. I think this is a good idea so we don't have to use
 package name hacks like 'libsqlite' 'libsqlite-dev' (debian)
 Package types which make sense:
 :doc -->  documentation. Later possibly in a specific format?
 :lib -->  shared libraries (.so/.dll) when available
 :slib -->  static lib (.a/.lib)
 :dev -->  header files (.di)
 :src -->  source package used to build other packages
Yes, exactly.
Sounds great. Orb could be the first package management system to get that right. One more question is where .h headers and .di files would go for C packages: both in :dev, or .h headers into an additional :cdev package, or something like that.
 We should also think about how the versioning scheme would interact
 with git/hg/svn whatever snapshots and alpha/beta/rc releases. The
 debian package system doesn't have explicit support for this which
 leads to strange version numbers. Archlinux even uses different
 packages for git versions (libsqlite-git) which also isn't a good
 solution.
If you have any ideas I'm listening.
OK, my proposal follows, but be warned, it's a little longer than I first thought :-)

Regarding alpha/beta/rc's, a simple scheme could help: all those releases are pre-releases. Consider a fictional libjson as an example:

libjson 0.0.1 is released (a final release)
libjson 0.0.2 alpha1 is released --> prerelease 1
libjson 0.0.2 alpha2 is released --> prerelease 2
libjson 0.0.2 beta1 is released --> prerelease 3
libjson 0.0.2 beta2 is released --> prerelease 4
libjson 0.0.2 rc1 is released --> prerelease 5
libjson 0.0.2 is released (final release)

So in this case 0.0.1 < 0.0.2 pre1 < 0.0.2 pre2 < 0.0.2 pre3 < 0.0.2 pre4 < 0.0.2 pre5 < 0.0.2 pre[X] < 0.0.2

The end user should specify for which packages he'd like to use prereleases. A standard upgrade for libjson would look like this: 0.0.1 --> 0.0.2, so no prereleases are installed by default. If prereleases were enabled for that package, prereleases should be upgraded automatically: 0.0.1 --> 0.0.2 pre1 --> 0.0.2 pre2 ... --> 0.0.2

It's also possible that a version 0.0.1.1 is released somewhere in between:

libjson 0.0.1 is released (a final release)
libjson 0.0.2 alpha1 is released --> prerelease 1
libjson 0.0.1.1 rc1 is released --> prerelease 1
libjson 0.0.1.1 is released (a final release)
libjson 0.0.2 alpha2 is released --> prerelease 2
...

In this case: 0.0.1 < 0.0.1.1 pre1 < 0.0.1.1 < 0.0.2 pre1 < 0.0.2 pre2 < ... < 0.0.2

It should also be possible to skip 0.0.2 (people actually do such things ;-)):

libjson 0.0.1 is released (a final release)
libjson 0.0.2 alpha1 is released --> prerelease 1
libjson 0.0.2 alpha2 is released --> prerelease 2
libjson 0.0.2 beta1 is released --> prerelease 3
libjson 0.0.2 beta2 is released --> prerelease 4
libjson 0.0.2 rc1 is released --> prerelease 5
libjson 0.0.3 rc1 is released --> prerelease 1
libjson 0.0.3 is released --> final release

So: 0.0.1 < 0.0.2 pre1 < 0.0.2 pre2 < ... < 0.0.3 pre1 < 0.0.3

A user not wanting to use prereleases would just skip all preX in the above examples.

Now regarding snapshot versions. First we have to simplify the problem: I think we should only support a linear system, so we assume there's only one master repository and one branch that packages are created from. We still have the problem that git/hg etc. revision numbers cannot easily be compared (is 5363aed42ff7f2edd796 more recent than 882cc02a58797a313a62?).

So I propose the following: a git/hg/... snapshot always has a 'base' release. This is the release the snapshot is based on. A snapshot is always more recent than its base release:

release1 0.0.1
snapshot1 5363aed42ff7f2edd796 base:0.0.1

so release1 < snapshot1. Snapshots can be based on pre-releases:

release1 0.0.1
pre-release1 0.0.2-pre1
snapshot1 5363aed42ff7f2edd796 base:0.0.2-pre1
pre-release2 0.0.2-pre2

so release1 < pre-release1 < snapshot1 < pre-release2.

A git snapshot always only replaces its base release! If there's a newer base release, the git snapshot is considered to be 'old'. Now how do we sort multiple snapshots based on the same base release? I think a date-based approach makes sense (with second or minute resolution?).

So we now have this final (and complex) example:

0.0.2-pre2 < 0.0.2-pre3 < 0.0.2-pre4 < 0.0.2-pre5 < 0.0.3-pre1 < 0.0.3

Someone with snapshots enabled will get updates like presented in the above chain, with one exception: "0.0.2-pre1" is released and "0.0.2-pre1" will be skipped!

With snapshots disabled the update path looks like this: 0.0.1 < 0.0.1.1 < 0.0.3
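To show that the pre-release part of this is mechanical, here is a rough D sketch of the comparison (deliberately naive parsing, no snapshot or date-based tie-break handling, and the struct name is made up):

    import std.algorithm : map, min;
    import std.array : array, split;
    import std.conv : to;

    // Numeric parts are compared first; for equal numeric parts a final
    // release beats any of its pre-releases, and pre-releases are ordered
    // by their number.  "0.0.2-pre1" -> nums [0,0,2], pre = 1;
    // "0.0.2" -> pre = int.max (final release).
    struct Version
    {
        int[] nums;
        int pre = int.max;             // int.max means "final release"

        this(string s)
        {
            auto parts = s.split("-");
            nums = parts[0].split(".").map!(to!int).array;
            if (parts.length > 1)
                pre = parts[1][3 .. $].to!int;   // strip the "pre" prefix
        }

        int opCmp(const Version rhs) const
        {
            foreach (i; 0 .. min(nums.length, rhs.nums.length))
                if (nums[i] != rhs.nums[i])
                    return nums[i] - rhs.nums[i];
            if (nums.length != rhs.nums.length)
                return cast(int) nums.length - cast(int) rhs.nums.length;
            return pre - rhs.pre;
        }
    }

    unittest
    {
        assert(Version("0.0.2-pre1") < Version("0.0.2-pre2"));
        assert(Version("0.0.2-pre5") < Version("0.0.2"));
        assert(Version("0.0.1.1")    < Version("0.0.2-pre1"));
        assert(Version("0.0.2")      < Version("0.0.3-pre1"));
    }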
 Also here's a list of variables a source package can set in
 Archlinux: https://wiki.archlinux.org/index.php/PKGBUILD
 It might be a good idea to have a look at this and take some
 inspiration from it. Most of these variables are also useful to
 orbit.

 Here's an example Archlinux PKGBUILD:
 http://pastebin.com/MeXiLDV9
 The archlinux package system has the easiest source package syntax I
 know.
I'll have a look.
-- Johannes Pfau
Jun 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the files
 which will be contained in the package later, or file dependencies
 contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most package
 management systems make a package depend on other packages, but not on
 the files in the packages)
Ok, let me explain. When developing a package management system, the first thing one has to decide is whether a package should contain pre-built binaries/libraries - we can call these binary packages - or the files necessary to build the package when installing - we can call these source packages (not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages; maybe even mixed packages could be possible.

When I first started thinking about Orbit I decided on source packages. The reason for this is that the developer only has to create one package and doesn't have to build the app/lib for all supported platforms when releasing a new version of the package (although it would be good to know that it works on all supported platforms).

When I first wrote down my ideas about Orbit on the wiki I was, incorrectly, thinking it should have binary packages, hence the confusion (even I'm confused). This was when the orbspec examples contained the "imports" method. I then corrected this error and changed "imports" to "files" to reflect that the packages are source packages.

Now, "files" contains all the necessary files to build the package when installing it. These files must somehow be referenced in the orbspec because the tool needs to know what files to put in the package. This is for a package with no dependencies on other packages. Note that usually you don't need to explicitly specify all the necessary files, thanks to Ruby's fleshed-out standard library. For example, you could do it like this:

files Dir["**/*.d"]

which will, recursively, include all *.d files in the current directory. If you use Orbit together with Dake you don't have to specify any files at all, since Dake will know what files to include (this is noted in the wiki). If the package depends on other packages, these need to be listed in the orbspec as well. This is not shown in any of the examples on the wiki.

I hope this explains most of it, and I'm sorry for any confusion I may have caused.
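On the tool side, expanding such a pattern is just as short in D; a small sketch (the function name is made up):

    import std.algorithm : map;
    import std.array : array;
    import std.file : SpanMode, dirEntries;

    // Rough D equivalent of the Ruby Dir["**/*.d"] above: collect every *.d
    // file under the given directory, recursively, when assembling a package.
    string[] sourceFiles(string root = ".")
    {
        return dirEntries(root, "*.d", SpanMode.depth)
               .map!(entry => entry.name)
               .array;
    }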
 Another detail: I wouldn't use .orb for package extensions. We might
 want to change the compression type later (tar+lzma for example),
 so .orb.zip would be better. Then we could just use .orb.tar.xz with
 the new compression. (This is also how archlinux works, for
 example .pkg.tar.xz)
Ok, I can do that. I chose zip because that is available in Tango and Phobos.
Yes, right now, zip is seems to be the best choice, but at some point the small size difference between zip and lzma could matter.
 It seems like C libraries would also be packaged with orb (the sqlite
 example). This might be needed, but it will be a major pita for linux
 packagers, as it'll likely cause conflicts. I think it should be
 possible for those linux packages to hook into orb. Orb should
 recognize something like 'orb --external libsqlite:library
 --version 3.7.0' and then just assume that sqlite is installed (but
 it should not assume that sqlites dependencies are installed - those
 would have to be registered with --external again). This approach
 should work well for D packages (so a D package is in Orb first, but
 some distribution decides to package it. In this case they can add
 the orb hooks to their packages). It's unlikely that a distribution
 will change all C packages though. Probably at some time orb should
 interact with 'pkg-config' to look for already installed C packages,
 I'm not sure what's a good solution for this problem.
With the sqlite example, I was actually thinking about bindings. But as you say, it would be good to be able to specify external dependencies, like C libraries.
I totally forgot about bindings! I'm quite uncertain about packaging C libs though: If we don't package C libs windows users will have to acquire all C libs manually. But if we do provide C libraries we have to decide if we also ship the C headers, etc. and it will be some more maintenance work.
Yeah, I don't know exactly what to do with external dependencies. The easiest would be to have it just as information for the user of the package.
 'type :library' in the orbspec suggests that there'll be different
 package types. I think this is a good idea so we don't have to use
 package name hacks like 'libsqlite' 'libsqlite-dev' (debian)
 Package types which make sense:
 :doc -->   documentation. Later possibly in a specific format?
 :lib -->   shared libraries (.so/.dll) when available
 :slib -->   static lib (.a/.lib)
 :dev -->   header files (.di)
 :src -->   source package used to build other packages
Yes, exactly.
Sound great. Orb could be the first package management system to get that right. One more question is where .h headers and .di files would go for C packages. Both in :dev or .h headers into additional :cdev packages, or something like that.
I haven't thought that far. The first step has to be to get regular, D-only packages to work.
 We should also think about how the versioning scheme would interact
 with git/hg/svn whatever snapshots and alpha/beta/rc releases. The
 debian package system doesn't have explicit support for this which
 leads to strange version numbers. Archlinux even uses different
 packages for git versions (libsqlite-git) which also isn't a good
 solution.
If you have any ideas I'm listening.
OK, my proposal follows, but be warned, it's a little longer than I first thought :-) Regarding alpha/beta/rc's a simple scheme could help: all those releases are pre-releases, so consider a fictional libjson as an example. libjson 0.0.1 is released (a final release) libjson 0.0.2 alpha1 is released --> prerelease 1 libjson 0.0.2 alpha2 is released --> prerelease 2 libjson 0.0.2 beta1 is released --> prerelease 3 libjson 0.0.2 beta2 is released --> prerelease 4 libjson 0.0.2 rc1 is released --> prerelease 5 libjson 0.0.2 is releases (final release) So in this case 0.0.1< 0.0.2 pre1< 0.0.2 pre2< 0.0.2 pre3< 0.0.2 pre4< 0.0.2 pre5< 0.0.2 pre[X]< 0.0.2 The end user should specify for which packages he'd like to use prereleases. A standard upgrade for libjson would look like this: 0.0.1 --> 0.0.2 so no prereleases should be installed be default. If prereleases were enabled for that package, prereleases should be upgraded automatically: 0.0.1 --> 0.0.2 pre1 --> 0.0.2 pre2 ... --> 0.0.2 It's also possible that a version 0.0.1.1 is released somewhere in between: libjson 0.0.1 is released (a final release) libjson 0.0.2 alpha1 is released --> prerelease 1 libjson 0.0.1.1 rc1 is released --> prerelease 1 libjson 0.0.1.1 is released (a final release) libjson 0.0.2 alpha2 is released --> prerelease 2 ... in this case: 0.0.1< 0.0.1.1 pre 1< 0.0.1.1< 0.0.2 pre1< 0.0.2 pre2 < ...< 0.0.2 It should also be possible to skip 0.0.2 (people actually do such things ;-)) libjson 0.0.1 is released (a final release) libjson 0.0.2 alpha1 is released --> prerelease 1 libjson 0.0.2 alpha2 is released --> prerelease 2 libjson 0.0.2 beta1 is released --> prerelease 3 libjson 0.0.2 beta2 is released --> prerelease 4 libjson 0.0.2 rc1 is released --> prerelease 5 libjson 0.0.3 rc1 is released -->prerelease 1 libjson 0.0.3 is released --> final release so: 0.0.1< 0.0.2 pre1< 0.0.2 pre2< ...< 0.0.3 pre1< 0.0.3 a user not wanting to use prereleases would just skip all preX in the above examples.
Ok, I think I understand so far. I was thinking of something similar. But is a four-digit version really necessary?
 Now regarding snapshot versions: First we have to simplify the problem:
 I think we should only support a linear system, so we assume there's
 only one master repository and one branch where packages are created
 from.
 Now we still have the problem that git/hg etc revision numbers cannot
 be easily compared (is 5363aed42ff7f2edd796 more recent than
 882cc02a58797a313a62 ?).

 So I suppose the following: A git/hg/... snapshot always has a 'base'
 release. This is the release the snapshot is based on. A snapshot is
 always more recent than it's base release:

 release1 0.0.1
 snapshot1 5363aed42ff7f2edd796 base:0.0.1

 so release 1<  snapshot1
 snapshots can be based on pre-relases

 release1 0.0.1
 pre-release1 0.0.2-pre1
 snapshot1 5363aed42ff7f2edd796 base:0.0.2-pre1
 pre-release2 0.0.2-pre2

 so release1<  pre-release1<  snapshot1<  pre-release2

 A git snapshot always only replaces its base release! If there's a
 newer base release, the git snapshot is considered to be 'old'.

 Now how do we sort multiple snapshots based on the same base release?
 I think a date-based approach makes sense (with second or minute
 resolution?).

 So we now have this final (and complex) example:




















 0.0.2-pre2<  0.0.2-pre3<  0.0.2-pre4<  0.0.2-pre5<  0.0.3-pre1<  0.0.3

 Someone with snapshots enabled will get updates like presented in the
 above chain, with one exception: "0.0.2-pre1" is released

 and "0.0.2-pre1" will be skipped!

 With snapshots disabled the update path looks like this:
 0.0.1<  0.0.1.1<  0.0.3
This got quite complex. When I was thinking about SCM integration, my idea was that you only specify the address of the repository, which means the latest commit on the main branch. Then you could also specify tags, branches and perhaps specific commits. But you could never specify, for example, a release (or commit) newer than another commit. This wouldn't work:

orb "dwt", "~> 0.3.4", :git => "git://github.com/jacob-carlborg/libjson.git"

I see now that I've specified a version in the git example on the wiki. This was a mistake; I've removed the version now.

-- /Jacob Carlborg
Jun 19 2011
next sibling parent reply Jose Armando Garcia <jsancio gmail.com> writes:
On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg <doob me.com> wrote:
 On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the files
 which will be contained in the package later, or file dependencies
 contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most package
 management systems make a package depend on other packages, but not on
 the files in the packages)
Ok, let me explain. When developing a package management system, the first thing one has to decide is whether a package should contain pre-built binaries/libraries (we can call these binary packages) or the files necessary to build the package when installing (we can call these source packages, not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages. Maybe even mixed packages could be possible.
Why decide on "file" package? This only works with packages that can be compiled. Think non-D source code packages and close source packages. Even one of the most commonly known "file" package manager (Gentoo's portage) allows for binary packages. Another example is caching. Many software development organization keep internal library/program repository that have been clear by the organization for many reasons (e.g. licensing, security, support, etc). Our packaging solution should work such an environment. -Jose
Jun 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 21:59, Jose Armando Garcia wrote:
 On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg<doob me.com>  wrote:
 On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the files
 which will be contained in the package later, or file dependencies
 contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most package
 management systems make a package depend on other packages, but not on
 the files in the packages)
Ok, let me explain. When developing a package management system the first thing one has do decide is if the package should contain pre-built binaries/libraries, we can call these binary packages, or the necessary files to build the package when installing, we can call these source package (not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages. Maybe even mixed packages could be possible.
Why decide on "file" package? This only works with packages that can be compiled. Think non-D source code packages and close source packages. Even one of the most commonly known "file" package manager (Gentoo's portage) allows for binary packages.
I guess we could have a mixed system, with both source and binary packages.
 Another example is caching. Many software development organization
 keep internal library/program repository that have been clear by the
 organization for many reasons (e.g. licensing, security, support,
 etc). Our packaging solution should work such an environment.

 -Jose
-- /Jacob Carlborg
Jun 20 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-06-19 21:59, Jose Armando Garcia wrote:
 On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg<doob me.com>  wrote:
 On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the files
 which will be contained in the package later, or file dependencies
 contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most package
 management systems make a package depend on other packages, but
 not on the files in the packages)
Ok, let me explain. When developing a package management system the first thing one has do decide is if the package should contain pre-built binaries/libraries, we can call these binary packages, or the necessary files to build the package when installing, we can call these source package (not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages. Maybe even mixed packages could be possible.
Why decide on "file" package? This only works with packages that can be compiled. Think non-D source code packages and close source packages. Even one of the most commonly known "file" package manager (Gentoo's portage) allows for binary packages.
I guess we could have a mixed system, with both source and binary packages.
Definitely. Standardised source packages allow automated binary package building, even for different architectures. Users should also be able to make small changes to source packages and create their own binary packages easily.

Source-only packages wouldn't work either; think of users on embedded systems. Compiling everything on a machine with 16 MB RAM and 200 MHz isn't fun. Also, binary packages are quite convenient.

-- Johannes Pfau
Jun 20 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 20.06.2011 10:52, schrieb Johannes Pfau:
 Jacob Carlborg wrote:
 On 2011-06-19 21:59, Jose Armando Garcia wrote:
 On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg<doob me.com>  wrote:
 On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the files
 which will be contained in the package later, or file dependencies
 contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most package
 management systems make a package depend on other packages, but
 not on the files in the packages)
Ok, let me explain. When developing a package management system the first thing one has do decide is if the package should contain pre-built binaries/libraries, we can call these binary packages, or the necessary files to build the package when installing, we can call these source package (not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages. Maybe even mixed packages could be possible.
Why decide on "file" package? This only works with packages that can be compiled. Think non-D source code packages and close source packages. Even one of the most commonly known "file" package manager (Gentoo's portage) allows for binary packages.
I guess we could have a mixed system, with both source and binary packages.
Definitely. Standardised source packages allow automated binary package building, even for different architectures. Users should also be able to make small changes to source packages and create their own binary packages easily. Source packages only wouldn't work either, think of users on embedded systems. Compiling everything on a machine with 16MB ram and 200mhz isn't fun. Also binary packages are quite convenient.
1. Will you develop or compile your own software (that uses software from the package manager) on the embedded system? I guess it's more common to develop the software on a PC or whatever and upload it to the embedded system.

2. Will an embedded system with such restricted resources have an x86 arch, or will it more likely be ARM or even something completely different? Should there be binaries available for any architecture (that's hard, because most developers probably only have x86/amd64)? If not, you'd have to compile yourself anyway. (And of course we need a working compiler for that architecture first.)

Cheers,
- Daniel
Jun 20 2011
parent Johannes Pfau <spam example.com> writes:
Daniel Gibson wrote:
Am 20.06.2011 10:52, schrieb Johannes Pfau:
 Jacob Carlborg wrote:
 On 2011-06-19 21:59, Jose Armando Garcia wrote:
 On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg<doob me.com>
 wrote:
 On 2011-06-19 19:02, Johannes Pfau wrote:
 I still don't understand that completely. So does it list the
 files which will be contained in the package later, or file
 dependencies contained in other packages?
 (I'm asking because I'm not familiar
 with file-dependencies in package management systems. Most
 package management systems make a package depend on other
 packages, but not on the files in the packages)
Ok, let me explain. When developing a package management system the first thing one has do decide is if the package should contain pre-built binaries/libraries, we can call these binary packages, or the necessary files to build the package when installing, we can call these source package (not to be confused with the source type you've mentioned below). As a third option, one could have a mixed package system containing both binary and source packages. Maybe even mixed packages could be possible.
Why decide on "file" package? This only works with packages that can be compiled. Think non-D source code packages and close source packages. Even one of the most commonly known "file" package manager (Gentoo's portage) allows for binary packages.
I guess we could have a mixed system, with both source and binary packages.
Definitely. Standardised source packages allow automated binary package building, even for different architectures. Users should also be able to make small changes to source packages and create their own binary packages easily. Source packages only wouldn't work either, think of users on embedded systems. Compiling everything on a machine with 16MB ram and 200mhz isn't fun. Also binary packages are quite convenient.
1. Will you develop or compile your own software (that uses software from the package manager) on the embedded system? I guess it's more common to develop the software on a PC or whatever and upload it to the embedded system.
Maybe I misunderstood something, but I thought Orbit would also manage shared libraries once they're supported by the D compilers. Even on resource-limited embedded systems it's likely that a library is needed by more than one program, so it can't really be shipped with the program. Static libraries, documentation and D headers are not needed on these platforms. Of course package managers for embedded systems (something like OpenEmbedded) could be used, but then all libraries would have to be packaged again into a different package format.
2. Will an embedded system with such restricted resources have a x86
arch - or will it more likely be ARM or even something completely
different? Should there be binaries available for any architecture
(that's hard, because most developers probably only have x86/amd64)?
If not, you'd have to compile yourself anyway.
(And of course we need a working compiler for that architecture first)
ARM, MIPS (popular in internet routers), SH4 (set-top boxes), PPC.

Ideally we'd have a package build system like Launchpad: the developer (or packager) creates a source package and uploads it to the build service, the build service transfers the source package to buildbot machines, and those build binary packages for different architectures. The binary packages are then added to a repository. We won't have something like that from the beginning, but in a few years such a build service might be useful.
Cheers,
- Daniel
-- Johannes Pfau
Jun 20 2011
prev sibling next sibling parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
[...]
I hope this explains most of the things and I'm sorry for any
confusion I may have caused.
Thanks for that detailed explanation; I think I understand. This system also seems more flexible than the traditional 'this directory will be the root of the package, copy all files to be packaged into this directory' approach.
Ok, I think I understand so far. I was thinking something similar. But 
is a four digit version really necessary?
I thought of variable-length version numbers; this is what most package management systems use. What's wrong with variable-length versions? Look at 'compareBaseVer' in the source linked later for an example of how to compare such versions.
This got quite complex. When I was thinking about SCM integration I
was thinking about you only specify the address to the repository,
which will mean the latest commit on the main branch. Then you could
also specify tags, branches and perhaps specific commits. But you
could never specify, for example, a release (or commit) newer then
another commit. This wouldn't work:

orb "dwt", "~> 0.3.4", :git =>
"git://github.com/jacob-carlborg/libjson.git"

I see now that I've specified a version in the git example on the
wiki. This was a mistake, I removed the version now.
I think we are looking at two different approaches here: If I understood correctly, you want to allow the _user_ to grab the latest git version. Whenever he wants to update, he has to do that manually. He also always downloads the source code and compiles it on his machine (no binary git packages).

My approach lets the _packager_ create git packages. From these source packages, binary packages can be built and distributed to end users like any other package (release, prerelease). Snapshots are 'first class packages', which means everything that works with releases and other packages will also work with snapshots.

The downside of this approach is that it complicates things a lot. It needs a versioning scheme capable of sorting snapshots, releases and prereleases reliably.

Here's some proof of concept code: https://gist.github.com/1035294
200 LOC for a versioning scheme seems to be a lot, though.

-- Johannes Pfau
Jun 20 2011
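For illustration, the snapshot ordering described above can be layered on top of any ordinary version comparison by keying a snapshot on (base release, commit time). A small D sketch with made-up names (this is not the code from the linked gist):

struct Snapshot(V)
{
    V base;      // the (pre-)release this snapshot is based on
    long when;   // commit time in Unix seconds; 0 means "this *is* the base release"

    int opCmp(const Snapshot o) const
    {
        if (base < o.base) return -1;     // an older base release sorts first
        if (o.base < base) return 1;
        if (when == o.when) return 0;
        if (when == 0) return -1;         // the plain release < any snapshot based on it
        if (o.when == 0) return 1;
        return when < o.when ? -1 : 1;    // snapshots on the same base: newer commit wins
    }
}

unittest
{
    // Plain ints stand in for real release versions, just for the test.
    alias Snapshot!int S;
    assert(S(1, 0) < S(1, 1308480000));   // release < snapshot based on it
    assert(S(1, 1308480000) < S(2, 0));   // that snapshot < the next release
}

This gives exactly the chain described above: base release < snapshot of that base < next release, with multiple snapshots on the same base ordered by date.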
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-20 10:46, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 [...]
 I hope this explains most of the things and I'm sorry for any
 confusion I may have caused.
Thanks for that detailed explanation, I think I understand. This system also seems more flexible than the traditional 'this directory will be the root of the package, copy all files to be packaged into this directory'
 Ok, I think I understand so far. I was thinking something similar. But
 is a four digit version really necessary?
I thought of variable length version numbers, this is what most package management systems use. Whats wrong with variable length versions? Look at 'compareBaseVer' in the source linked later for an example of how to compare such versions.
Currently I have the three-part-version as default and then a custom version (which basically can contain anything). The reason for the three-part-version scheme is explained in the wiki.
 This got quite complex. When I was thinking about SCM integration I
 was thinking about you only specify the address to the repository,
 which will mean the latest commit on the main branch. Then you could
 also specify tags, branches and perhaps specific commits. But you
 could never specify, for example, a release (or commit) newer then
 another commit. This wouldn't work:

 orb "dwt", "~>  0.3.4", :git =>
 "git://github.com/jacob-carlborg/libjson.git"

 I see now that I've specified a version in the git example on the
 wiki. This was a mistake, I removed the version now.
I think we look at 2 different approaches here: If I understood correctly you want to allow the _user_ to grab the lastest git version. Whenever he wants to update, he has to do that manually. He also always downloads the source code and compiles it on his machine (no binary git packages).
Yes.
 My approach let's the _packager_ create git packages. From these source
 packages binary packages can be build and distributed to end users like
 any other package (release, prerelease). Snapshots are 'first
 class packages', which means everything working with releases and other
 packages will also work with snapshots.
 The downside of this approach is that it complicates things a lot. It
 needs a versioning scheme capable of sorting snapshots, releases and
 prereleases reliably.

 Here's some proof of concept code:
 https://gist.github.com/1035294
 200 LOC for a versioning scheme seems to be alot though.
I don't think I understand your approach. -- /Jacob Carlborg
Jun 20 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-06-20 10:46, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 [...]
 I hope this explains most of the things and I'm sorry for any
 confusion I may have caused.
Thanks for that detailed explanation, I think I understand. This system also seems more flexible than the traditional 'this directory will be the root of the package, copy all files to be packaged into this directory'
 Ok, I think I understand so far. I was thinking something similar.
 But is a four digit version really necessary?
I thought of variable length version numbers, this is what most package management systems use. Whats wrong with variable length versions? Look at 'compareBaseVer' in the source linked later for an example of how to compare such versions.
Currently I have the three-part-version as default and then a custom version (which basically can contain anything). The reason for the three-part-version scheme is explained in the wiki.
So it's to have defined semantics for version changes, to standardize things like API breakage. I think this makes sense; although it forces a special versioning scheme on users, it might be worth it.
 This got quite complex. When I was thinking about SCM integration I
 was thinking about you only specify the address to the repository,
 which will mean the latest commit on the main branch. Then you could
 also specify tags, branches and perhaps specific commits. But you
 could never specify, for example, a release (or commit) newer then
 another commit. This wouldn't work:

 orb "dwt", "~>  0.3.4", :git =>
 "git://github.com/jacob-carlborg/libjson.git"

 I see now that I've specified a version in the git example on the
 wiki. This was a mistake, I removed the version now.
I think we look at 2 different approaches here: If I understood correctly you want to allow the _user_ to grab the lastest git version. Whenever he wants to update, he has to do that manually. He also always downloads the source code and compiles it on his machine (no binary git packages).
Yes.
It probably comes down to the question of whether binary 'git' packages are worth the effort. I only know Linux distribution package management systems, where it's common to package snapshots. But it might be overkill for a package system dealing mostly with libraries.
 My approach let's the _packager_ create git packages. From these
 source packages binary packages can be build and distributed to end
 users like any other package (release, prerelease). Snapshots are
 'first class packages', which means everything working with releases
 and other packages will also work with snapshots.
 The downside of this approach is that it complicates things a lot. It
 needs a versioning scheme capable of sorting snapshots, releases and
 prereleases reliably.

 Here's some proof of concept code:
 https://gist.github.com/1035294
 200 LOC for a versioning scheme seems to be alot though.
I don't think I understand your approach.
It might really be overkill. But consider this example:

package FOO requires libjson >= 0.0.1 as a dynamic library.
package BAR requires the latest libjson from git as a dynamic library.

Now FOO could use libjson-git, but how does the package manager know that? It cannot know whether the git version is more recent than 0.0.1. It's also not possible to install both libraries at a time, as both are dynamic libraries with the same name. We now have a conflict where you can only install FOO or BAR, but not both.

-- Johannes Pfau
Jun 20 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-20 14:02, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 Currently I have the three-part-version as default and then a custom
 version (which basically can contain anything). The reason for the
 three-part-version scheme is explained in the wiki.
So it's to have defined semantics for version changes, to standardize thing like api breakage. I think this makes sense, although it forces a special versioning scheme on users it might be worth it.
It doesn't force a version scheme on you: you can always use a custom version, but then you won't be able to use the "~>" operator, which is the whole reason for using this version scheme.
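(For readers who haven't seen it: "~>" is presumably the RubyGems-style "pessimistic" constraint, where "~> 0.3.4" means ">= 0.3.4 and < 0.4.0". A rough D sketch of such a check, assuming the three-part scheme discussed above and using a made-up helper name, not Orbit's actual implementation:)

import std.array : split;
import std.conv : to;

// "~> 0.3.4": at least 0.3.4, but only the last part is allowed to grow.
bool matchesPessimistic(string constraint, string candidate)
{
    auto want = constraint.split(".").to!(int[]);   // "0.3.4" -> [0, 3, 4]
    auto have = candidate.split(".").to!(int[]);

    if (have < want) return false;                  // lexicographic array compare: ">= 0.3.4"
    return have[0 .. $ - 1] == want[0 .. $ - 1];    // same major.minor: "< 0.4.0"
}

unittest
{
    assert( matchesPessimistic("0.3.4", "0.3.4"));
    assert( matchesPessimistic("0.3.4", "0.3.9"));
    assert(!matchesPessimistic("0.3.4", "0.3.3"));
    assert(!matchesPessimistic("0.3.4", "0.4.0"));
}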
 It might really be overkill. But consider this example:
 package FOO requires libjson>= 0.0.1 as a dynamic library.
 package BAR requires latest libjson from git as a dynamic library.

 now FOO could use libjson-git, but how does the package manager know
 that? It cannot know whether the git version is more recent than 0.0.1.
 It's also not possible to install both libraries at a time, as both are
 dynamic libraries with the same name.
 We now have a conflict where you can only install FOO or BAR, but not
 both.
Ok, I think I understand now. Thanks for the explanation. -- /Jacob Carlborg
Jun 20 2011
prev sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-06-19 at 21:19 +0200, Jacob Carlborg wrote:
[ . . . ]
 When I first started thinking about Orbit I decided for source packages.
 The reason for this is that the developer only have to create one
 package or doesn't have to build the app/lib for all supported platforms
 when releasing a new version of the package (although it would be good
 to know that it works on all supported platforms).
[ . . . ]

OS-level package managers have this issue. Ports went for source and compiling as needed on the grounds that this is most flexible; Debian, Fedora, etc. went for binary on the grounds that it is far, far easier for the users.

I find that most of the time MacPorts is fine as long as you only own one computer, but for things like Boost, MacQt, etc. my machines take hours and hours to upgrade, which really, really pisses me off. I find Debian packages far more straightforward, and furthermore binary packages can be cached locally so I only have to download once for all 4 machines I have. With source download I end up compiling twice, once for each Mac OS X machine. So overall source packages suck -- even though they are reputedly safer against security attacks.

Ubuntu has introduced the idea of personal build farms, aka PPAs, which work very well. This handles creating packages for all the versions of Ubuntu still in support. Something like Buildbot, although supposedly a CI system, can easily be "subverted" into being a package creation farm.

I guess the question is really: should the package manager be easy for developers or easy for users? If there are no packages because it is too hard for developers to package, then there are no users either. If developers can do things easily, but it is hard for users, then there are no users, so no point in creating packages.

It's worth noting that there is a massive move in the Java arena to issue binary, source and documentation artefacts -- where originally only binary artefacts were released. This is for supporting IDEs. Clearly source-only packaging gets round this somewhat, but this means compilation on the user's machine during install, and that leads to suckiness -- see above for mild rant.

--
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jun 20 2011
next sibling parent Johannes Pfau <spam example.com> writes:
Russel Winder wrote:
On Sun, 2011-06-19 at 21:19 +0200, Jacob Carlborg wrote:
[ . . . ]
 When I first started thinking about Orbit I decided for source
 packages. The reason for this is that the developer only have to
 create one package or doesn't have to build the app/lib for all
 supported platforms when releasing a new version of the package
 (although it would be good to know that it works on all supported
 platforms).
[ . . . ] OS-level package manages have this issue, Ports went for source and compiling as needed on the grounds that this is most flexible, Debian, Fedora, etc. went for binary on the grounds it is far, far easier for the users. I find that most of the time MacPorts is fine as long as you only own one computer, but for things like Boost, MacQt, etc. my machines takes hours and hours to upgrade which really, really pisses me off. I find Debian package far more straightforward and furthermore binary packages can be cached locally so I only have to download once for all 4 machines I have. With source download I end up compiling twice one for each Mac OS X machine. So overall source packages suck -- even though they are reputedly safer against security attacks. Ubuntu has introduced the idea of personal build farms, aka PPAs, which work very well. This handles creating packages for all the version of Ubuntu still in support. Using something like Buildbot, which although supposedly a CI system can easily be "subverted" into being a package creation farm. I guess the question is really should the package manager be easy for developers or easy for users? If there are no packages because it is too hard for developers to package then no users either. If developers can do things easily, but it is hard for users, then no users so no point in creating packages. It's worth noting that there is massive move in the Java arena to issue binary, source and documentation artefacts -- where originally only binary artefacts were released. This is for supporting IDEs. Clearly source only packaging gets round this somewhat, but this means compilation on the user's machine during install, and that leads to suckiness -- see above for mild rant.
It's possible to combine binary and source packages. Archlinux does that: by default you install prebuilt binary packages, but you can specify that you want to build certain packages yourself. Archlinux also has a huge repository of source-only packages which always need to be built by the end user. AFAIK this system works quite well.

-- Johannes Pfau
Jun 20 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-06-20 13:07, Russel Winder wrote:
 On Sun, 2011-06-19 at 21:19 +0200, Jacob Carlborg wrote:
 [ . . . ]
 When I first started thinking about Orbit I decided for source packages.
 The reason for this is that the developer only have to create one
 package or doesn't have to build the app/lib for all supported platforms
 when releasing a new version of the package (although it would be good
 to know that it works on all supported platforms).
[ . . . ] OS-level package manages have this issue, Ports went for source and compiling as needed on the grounds that this is most flexible, Debian, Fedora, etc. went for binary on the grounds it is far, far easier for the users. I find that most of the time MacPorts is fine as long as you only own one computer, but for things like Boost, MacQt, etc. my machines takes hours and hours to upgrade which really, really pisses me off. I find Debian package far more straightforward and furthermore binary packages can be cached locally so I only have to download once for all 4 machines I have. With source download I end up compiling twice one for each Mac OS X machine. So overall source packages suck -- even though they are reputedly safer against security attacks. Ubuntu has introduced the idea of personal build farms, aka PPAs, which work very well. This handles creating packages for all the version of Ubuntu still in support. Using something like Buildbot, which although supposedly a CI system can easily be "subverted" into being a package creation farm. I guess the question is really should the package manager be easy for developers or easy for users? If there are no packages because it is too hard for developers to package then no users either. If developers can do things easily, but it is hard for users, then no users so no point in creating packages. It's worth noting that there is massive move in the Java arena to issue binary, source and documentation artefacts -- where originally only binary artefacts were released. This is for supporting IDEs. Clearly source only packaging gets round this somewhat, but this means compilation on the user's machine during install, and that leads to suckiness -- see above for mild rant.
Both source and binary packages have their weaknesses and advantages. If you have a package only available on one platform, then binary packages would probably be best. Maybe it's best to support both binary and source packages.

You mention that Java packages are getting distributed with the sources as well to support IDEs. For D, compared with Java, you need to at least distribute import (*.di) files to be able to use libraries.

-- /Jacob Carlborg
Jun 20 2011
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
I find it interesting that so many people complain about the lack
of libraries.


software library: a way to waste a programmer's time

reinventing wheels: a lie some programmers, who are paid by the
hour, perpetuated so they can justify the "use" of software libraries

The project could have been done in one day if he just sat down and
got to work. Instead, he made up some bullshit about how reinventing
wheels is bad.

Thus, he now spends 3 days searching for a library. Another 5
days trying to make it work. Another 10 days reading the
godawful documentation. A day bitching about the suckiness on the
internet.

Then, finally, two days to integrate the library into his project.
For bonus points, force the end users to install it too, because
the more time wasted, the better.
Jun 08 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.06.2011 01:17, schrieb Adam D. Ruppe:
 I find it interesting that so many people complain about the lack
 of libraries.
 
 
 software library: a way to waste a programmer's time
 
 reinventing wheels: a lie some programmers, who are paid by the
 hour, perpetuated so they can justify the "use" of software libraries
 
 The project could have been done in one day if he just sat down and
 got to work. Instead, he made up some bullshit about how reinventing
 wheels is bad.
 
 Thus, he now spends 3 days searching for a library. Another 5
 days trying to make it work. Another 10 days reading the
 godawful documentation. A day bitching about the suckiness on the
 internet.
 
 Then, finally, two days to integrate the library into his project.
 For bonus points, force the end users to install it too, because
 the more time wasted, the better.
*g* This really depends on what you want the library to do. If you can implement it yourself in a day: great. Especially using fat libraries for trivial features (=> you just use a very small part of it) is stupid.

OTOH libraries are (hopefully) tested and stable - even if you can hack together similar functionality in a day or two, you may still have bugs that could be avoided by using a library. And if what you want is non-trivial it gets much worse... you'd maybe need weeks to implement it and much, much longer until it's really stable.

I guess few people really want to reimplement OpenSSL's functionality (even though the library is a PITA to use as far as I know) - it'd take a lot of effort and you'd rather want to use a well-tested library for critical security stuff.

Another example is GUI libraries - sure, maybe you can write one yourself, but you usually want one that integrates with the desktop environment you're using, so you probably end up either using the C bindings of an existing GUI library or a D wrapper thereof.

So I can understand that people want better library support for D - especially for non-trivial stuff. Is there any cross-platform GUI lib that is really ready to use yet? (OK, maybe GtkD, but many people dislike Gtk, especially when they're using Windows, OSX or KDE. QtD and DWT aren't that stable yet AFAIK.) What about crypto stuff? And there certainly are other examples of libraries providing features you usually don't want to implement yourself.

Cheers,
- Daniel
Jun 08 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 OTOH libraries are (hopefully) tested and stable
That's what they *want* you to think! :-P

Of course, I'm exaggerating a little, but I stand by it in many cases: yeah, there's some hard stuff like crypto and GUI, but most stuff isn't that bad.

For that hard stuff though, there's always C libraries. The popular C libs are generally fairly stable and not hard to use in D, license permitting.

(Sometimes I think people forget that D has a *bigger* library ecosystem than C, since every C library is also usable from D! And thanks to D features, like scope guards and array ops, they tend to be pretty easy to use straight up too.)
 Another example is GUI libraries
Aye, GUI is the biggest example of hard stuff to implement well that's also hard to use from C. Crypto isn't bad since C libraries implement it with a pretty easy interface; it's generally just a handful of functions in my experience. (Contrast that to GUIs, where it's often hundreds of classes, each with dozens of methods and callbacks... just writing out their prototypes can take a while!)
Jun 08 2011
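To illustrate the scope-guard point with a concrete example: the extern(C) prototypes below belong to an entirely made-up C library, not a real binding.

import std.exception : enforce;
import std.string : toStringz;

// Prototypes copied from an imaginary C header; this is all a "binding" needs.
extern (C)
{
    struct foo_ctx;                                              // opaque C type
    foo_ctx* foo_open(const(char)* path);
    int      foo_process(foo_ctx* ctx, const(void)* data, size_t len);
    void     foo_close(foo_ctx* ctx);
}

void useFoo(string path, const(ubyte)[] data)
{
    auto ctx = foo_open(path.toStringz());
    enforce(ctx !is null, "foo_open failed");
    scope (exit) foo_close(ctx);    // cleanup runs on normal return *and* when an exception is thrown

    enforce(foo_process(ctx, data.ptr, data.length) == 0, "foo_process failed");
}

No wrapper classes needed; the scope guard alone already makes a plain C API feel reasonably safe to use from D.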
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
C libraries are great. They're flat and have little overhead. You can
build your own OOP/whatever-oriented interfaces around the library,
referencing just the functions that you need. Simple stuff compared to
some monsters like QtD.
Jun 08 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.06.2011 03:31, schrieb Andrej Mitrovic:
 C libraries are great. They're flat and have little overhead.You can
 build your own OOP/whatever-oriented interfaces around the library,
 referencing just the functions that you need. Simple stuff compared to
 some monsters like QtD.
You still have to generate/write the D bindings (and generating only works on Windows) before you can use them. This may not be too hard in most cases, but it certainly discourages newbies who just want ready-to-use libs.

And in some cases it can be hard, like when non-trivial macros are used (like for some Unix socket stuff), or when there are a lot of custom types (and you can't just generate the bindings but have to write them yourself). In these cases it may sometimes be easier to write the code in C, exposing an even simpler interface suitable for your needs, and call your own C functions from D (that's what I've done for the aforementioned Unix socket stuff: passing socket/file descriptors to another process needs those cmsg macros and structs). But stuff like this probably discourages newbies even more, especially when they're not coming from C/C++ but from Java or Python or something.

Cheers,
- Daniel
Jun 08 2011
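To make the macro point concrete: a C function-like macro gives a binding generator nothing to translate, so it has to be re-expressed by hand. A tiny made-up example:

/* C header:
 *     #define FOO_MAKE_FLAGS(ver, opt)   (((ver) << 16) | (opt))
 *
 * There is no D declaration to generate for this, so the macro is rewritten
 * as an ordinary D function (usable at compile time too, thanks to CTFE):
 */
uint FOO_MAKE_FLAGS(uint ver, uint opt)
{
    return (ver << 16) | opt;
}

Trivial here, but the cmsg-style macros mentioned above expand to pointer arithmetic over struct layouts, which is exactly where hand-writing them (or adding a small C shim) becomes the easier route.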
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-06-08 18:50, Daniel Gibson wrote:
 Am 09.06.2011 03:31, schrieb Andrej Mitrovic:
 C libraries are great. They're flat and have little overhead.You can
 build your own OOP/whatever-oriented interfaces around the library,
 referencing just the functions that you need. Simple stuff compared to
 some monsters like QtD.
You still have to generate/write the D bindings (and generating only works on windows) before you can use them. This may not be too hard in most cases, but certainly discourages newbies who just want to have ready to use libs. And in some cases it can be hard, like when non-trivial macros are used (like for some unix socket stuff), or when there are a lot of custom types (and you can't just generate the bindings but have to write them yourself). In these cases it may sometimes be easier to write the code in C, exposing an even simpler interface suitable for your needs and call your own C functions from D (that's what I've done for aforementioned unix socket stuff: passing socket/file descriptors to another process needs those cmsg macros and structs). But stuff like this probably discourages newbies even more, especially when they're not coming from C/C++ but Java or Python or something.
Yeah. It's fantastic that D can call C code, and it opens up a lot of libraries to us. But for a lot of programmers (particularly non-C/C++ programmers), that's a complete non-starter. They expect the libraries in D, not C, and requiring them to deal with C bindings is more than they're willing to put up with - particularly if they don't understand how easy it is to call C code, since you can call C code from many other languages, but there it's generally much harder.

And many programmers from languages such as Python and Java are used to having a _ton_ of libraries which do a ton of stuff for them, where it's much harder to find C libraries which do it, or it's much harder to do it with the C libraries than it is in those languages.

So, being able to call C code is fantastic and buys us a lot, but for a lot of programmers, that just doesn't cut it. They want the libraries to be in D.

- Jonathan M Davis
Jun 08 2011
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
 They want the libraries to be in D.
I don't mind writing the prototypes myself, but I've thought in the past that simply putting some common C libs' bindings in etc.c.* - not the actual implementation or binary, just the header, to minimize copyright impact - might help this. While a lot of people still seem to fear the C interface, it might help lower the bar for both direct use and third-party wrappers.
Jun 08 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/8/2011 7:18 PM, Jonathan M Davis wrote:
 So, being able to call C code is fantastic and buys us a lot, but for a lot of
 programmers, that just doesn't cut it. They want the libraries to be in D.
I think we've got some good traction lately in providing interfaces to popular c libraries in etc.c. It's a great first step.
Jun 11 2011
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
It's a systems programming language, no? ^_^

 Daniel: I completely forgot that htod is only a Windows goodie. But
can't it be run via Wine?
Jun 08 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.06.2011 04:24, schrieb Andrej Mitrovic:
 It's a systems programming language, no? ^_^
 
Yeah, but it's also general purpose and supposed to be as easily usable
  Daniel: I completely forgot that htod is only a windows goodie. But
 can't it be ran via wine?
I don't know. I guess it won't work too well if the headers you're trying to convert include system headers with system-specific defines etc. (these defines AFAIK end up in really system-specific gcc headers that gcc chooses depending on your architecture and depending on whether you use -m32 etc.). Maybe it works if you do the preprocessing with gcc -E ("Stop after the preprocessing stage") and feed the result to htod.

Cheers,
- Daniel
Jun 08 2011
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
On Jun 8, 2011, at 7:18 PM, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 So, being able to call C code is fantastic and buys us a lot, but for a lot of
 programmers, that just doesn't cut it. They want the libraries to be in D.
 - Jonathan M Davis
Close, but I think the real want / need is for the library to be easy to use. Language is a distant second.
Jun 08 2011
prev sibling next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.06.2011 03:24, schrieb Adam D. Ruppe:
 OTOH libraries are (hopefully) tested and stable
That's what they *want* you to think! :-P Of course, I'm exaggerating a little, but I stand by it in many cases: yeah, there's some hard stuff like crypto and gui, but most stuff isn't that bad.
I agree. Something else that comes to mind is database bindings - native D bindings that allow you to use D types and integrate well with ranges etc. would certainly be preferable to using raw C bindings.
 For that hard stuff though, there's always C libraries. The
 popular C libs are generally fairly stable and not hard to use
 in D, license permitting.
 
As long as not too many macros or custom types are involved and getting the D bindings isn't too hard
 (Sometimes I think people forget that D has a *bigger* library
 ecosystem than C, since every C library is also usable from D!
 And thanks to D features, like scope guards and array ops, they
 tend to be pretty easy to use straight up too.)
 
 
 Another example is GUI libraries
Aye, GUI is the biggest example of hard stuff to implement well that's also hard to use from C. Crypto isn't bad since C libraries implement them with a pretty easy interface; it's generally just a handful of functions in my experience. (Contrast to guis where it's often hundreds of classes each with dozens of methods and callbacks... just writing out their prototypes can take a while!)
Ok
Jun 08 2011
parent Adam D. Ruppe <destructionator gmail.com> writes:
 Something else that comes to mind are database bindings - native D
 bindings that allow you to use D types and integrate well with
 ranges etc would certainly be preferable to using raw C bindings.
Yes, indeed. I wrapped the C database functions for my own use to get that kind of stuff. But that's not hard to do.
 As long as not too many macros or custom types are involved and
 getting the D bindings isn't too hard
Meh, it's like ten minutes of cut+paste, less if you give up type safety and use "in void*" everywhere or only use a fraction of the lib.
Jun 08 2011
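For a sense of how little that cut-and-paste amounts to, this is roughly what a minimal hand-written binding for a couple of SQLite calls looks like -- an untested sketch covering only the handful of functions actually used (you still need to link against the C library, e.g. with -L-lsqlite3):

import std.conv : to;
import std.exception : enforce;
import std.string : toStringz;

extern (C)
{
    struct sqlite3;   // opaque handle, as in the C header

    int sqlite3_open(const(char)* filename, sqlite3** db);
    int sqlite3_close(sqlite3* db);
    const(char)* sqlite3_errmsg(sqlite3* db);
    int sqlite3_exec(sqlite3* db, const(char)* sql,
                     int function(void*, int, char**, char**) callback,
                     void* arg, char** errmsg);
}

void run(string dbPath, string sql)
{
    sqlite3* db;
    enforce(sqlite3_open(dbPath.toStringz(), &db) == 0, "cannot open " ~ dbPath);
    scope (exit) sqlite3_close(db);   // always release the handle

    if (sqlite3_exec(db, sql.toStringz(), null, null, null) != 0)
        throw new Exception(to!string(sqlite3_errmsg(db)));
}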
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:isp7bj$2fmh$1 digitalmars.com...
 (Sometimes I think people forget that D has a *bigger* library
 ecosystem than C, since every C library is also usable from D!
 And thanks to D features, like scope guards and array ops, they
 tend to be pretty easy to use straight up too.)
Yea. Just recently I was porting DVM to Windows, which required accessing the registry. Tango didn't seem to have any real registry functions, just some Win32 bindings, and it didn't take long at all before I had both a heavily D-ified low-level wrapper (complete with a fairly robust conversion system for "Windows error code" -> D exception) and a nice, easy high-level RAII struct interface to go on top of it.

And most of the work came from figuring out all the details of the Win32 registry API, which I had never even dealt with before. I ended up feeling that doing the same thing in C/C++ would have been very painful indeed, and it's a C-native API!
Jun 08 2011
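A rough sketch of that pattern (not DVM's actual code; the two advapi32 prototypes are redeclared inline and the constants hard-coded here, where real code would pull them from the Windows API bindings):

version (Windows)
{
    import std.conv : to;
    import std.utf : toUTF16z;

    alias void* HKEY;
    enum uint KEY_READ = 0x20019;
    enum int  ERROR_SUCCESS = 0;

    extern (Windows)
    {
        int RegOpenKeyExW(HKEY hKey, const(wchar)* subKey, uint options,
                          uint samDesired, HKEY* result);
        int RegCloseKey(HKEY hKey);
    }

    /// The "Windows error code" -> D exception conversion.
    void enforceWin32(int code, string what)
    {
        if (code != ERROR_SUCCESS)
            throw new Exception(what ~ " failed with error " ~ to!string(code));
    }

    /// The high-level RAII side: the key is closed when this goes out of scope.
    struct RegKey
    {
        HKEY handle;

        this(HKEY root, string subKey)
        {
            enforceWin32(RegOpenKeyExW(root, toUTF16z(subKey), 0, KEY_READ, &handle),
                         "RegOpenKeyEx(" ~ subKey ~ ")");
        }

        ~this()
        {
            if (handle !is null)
                RegCloseKey(handle);
        }
    }
}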
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:isovu6$21k6$1 digitalmars.com...
I find it interesting that so many people complain about the lack
 of libraries.


 software library: a way to waste a programmer's time

 reinventing wheels: a lie some programmers, who are paid by the
 hour, perpetuated so they can justify the "use" of software libraries

 The project could have been done in one day if he just sat down and
 got to work. Instead, he made up some bullshit about how reinventing
 wheels is bad.

 Thus, he now spends 3 days searching for a library. Another 5
 days trying to make it work. Another 10 days reading the
 godawful documentation. A day bitching about the suckiness on the
 internet.

 Then, finally, two days to integrate the library into his project.
 For bonus points, force the end users to install it too, because
 the more time wasted, the better.
Yup. There's definitely a lot of cases where pre-made libs are a huge help and a much better option, but I've gotten really fed up with the Anti-NIH Holy Crusaders, largely for the reasons you stated above. (Another good reason to actually embrace NIH syndrome instead of knee-jerking away from it out of principle is for mission-critical things where you can't afford the possibility of being left at the mercy of some outside group.)

What you said above is also why I *strongly* believe that good (and I mean *good*) documentation is every bit as important as actually writing/releasing a tool or library in the first place. I've seen so much "already-made" work that's rendered barely usable due to less-than-stellar documentation (or even worse: bad or non-existent documentation). What's the point of putting stuff out there if nobody knows how to use it? What's the point of using something if figuring it out and getting it to work takes about as much effort as DIY? That's why (for public projects anyway) I force myself, even if I don't want to, to put all the effort I need into documentation to make things as easy as possible. Otherwise, all that effort writing the code would likely have been for nothing anyway.
Jun 08 2011
next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 (or even worse: bad or non-existant documentation)
Actually, I think no doc is better than bad doc. At least with no docs, you're immediately informed to not waste your time trying to read it! I work with a lot of web apps. I wish they were completely undocumented; that'd be better than spending an hour reading something just to find it is completely wrong! (The worst part is we're conditioned to think other people's code is generally more correct than your code. So when a problem comes up, you blame yourself and spend hours tilting at windmills...)
Jun 09 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/8/2011 11:56 PM, Nick Sabalausky wrote:
 What you said above is also why I *strongly* believe that good (and I mean
 *good*) documentation is every bit as important as actually
 writing/releasing a tool or library in the first place. I've seen so much
 "already-made" work that's rendered barely-usable due to less-than-stellar
 documentation (or even worse: bad or non-existant documentation). What's the
 point out of putting stuff out there if nobody knows how to use it? What's
 the point of using something if figuring it out and getting it to work takes
 about as much effort as DIY? That's why (for public projects anyway) I force
 myself, even if I don't want to, to put all the effort I need to into
 documentation to make things as easy as possible. Otherwise, all that effort
 writing the code would likely have been for nothing anyway.
Sure. I struggle with writing documentation myself, and Ddoc has cut my effort involved in doing it by more than half. (I even use Ddoc to build Kindle ebooks, 4 so far!)
Jun 11 2011
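For anyone who hasn't tried it: Ddoc is just specially formatted comments that dmd turns into HTML with the -D switch. A small example:

/**
 * Returns the greatest common divisor of $(D a) and $(D b).
 *
 * Params:
 *     a = first value
 *     b = second value
 *
 * Returns: the GCD, or 0 if both arguments are 0.
 *
 * Examples:
 * ---
 * assert(gcd(12, 18) == 6);
 * ---
 */
uint gcd(uint a, uint b)
{
    while (b != 0)
    {
        immutable t = b;
        b = a % b;
        a = t;
    }
    return a;
}

Because the documentation lives right next to the code it describes, keeping it current is much less painful than maintaining separate docs.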
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-06-08 19:32, Brad Roberts wrote:
 On Jun 8, 2011, at 7:18 PM, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 So, being able to call C code is fantastic and buys us a lot, but for a
 lot of programmers, that just doesn't cut it. They want the libraries to
 be in D.
 
 - Jonathan M Davis
Close, but I think the real want / need is for the library to be easy to use. Language is a distant second.
True, but that generally means having the libraries written in D. A well-written, easy-to-use C library might be more desirable than a poorly written, hard-to-use D library, but in general, if the libraries are well written, then it's going to be easier to use pure D code than it is to use C code. And a lot of people are going to think that C code is harder to use in D than it actually is before they actually try it, so it's generally going to seem worse to D newbies than it really is. So, in general, people are going to be looking for libraries written in D rather than having to figure out how to interface with C libraries.

- Jonathan M Davis
Jun 08 2011