
digitalmars.D.announce - DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
reddit: 
http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/

facebook: https://www.facebook.com/dlang.org/posts/662488747098143

twitter: https://twitter.com/D_Programming/status/349197737805373441

hackernews: https://news.ycombinator.com/item?id=5933818

youtube: http://youtube.com/watch?v=4M-0LFBP9AU


Andrei
Jun 24 2013
next sibling parent reply David Gileadi <gileadis NSPMgmail.com> writes:
Slides seem to be missing from 
http://dconf.org/2013/talks/alexandrescu.pdf; I get a 404.
Jun 24 2013
parent David Gileadi <gileadis NSPMgmail.com> writes:
On 6/24/13 9:19 AM, David Gileadi wrote:
 Slides seem to be missing from
 http://dconf.org/2013/talks/alexandrescu.pdf; I get a 404.
I posted too soon; they're there now. Sorry for the noise.
Jun 24 2013
prev sibling next sibling parent reply "QAston" <qaston gmail.com> writes:
This may be completely ridiculous - I'm a newcomer - please 
destroy me gently.

So, the idea is to make dlang.org a foundation. Here are some 
possible benefits of doing this:
-You get more people to "own" the language and therefore 
seriously care about its future development.
--Two people are not enough.
---What if someone gets hit by a bus?
---Delegate some administrative tasks to other people, so you can 
focus on improving things.
--Programmers are not all that's needed (the ability to write an 
XML parser doesn't make you a good webdev).
---Get a real web designer involved.
---Someone to do professional PR and advertising.
---An admin to maintain all these things.
-You could start taking donations and hire some people to work on 
D.
--For example, you could pay for a professional "enterprise'y" 
web design for dlang.org.
--Companies want to donate to support the tools they're using.
--Funding for DSoC.
-You'd get more interest from companies.
--Managers run companies, not programmers, and GitHub is not a 
collaboration tool for managers.
--Increase in trust: things are formal and transparent, not done 
behind the scenes.
--They may want to put a part-time developer to work on a 
compiler, for example.
---Much easier with a formal institution, where the dev would 
actually have a say and could get things done.

There are obviously some issues, like the design-by-committee 
problem and possibly others. Still, Python, Perl, Haskell and 
others have foundations. That's probably why those are much 
better at operational professionalism.
Jun 24 2013
next sibling parent "QAston" <qaston gmail.com> writes:
On Tuesday, 25 June 2013 at 04:34:02 UTC, QAston wrote:
 This may be completely ridiculous - I'm a newcomer - please 
 destroy me gently.

 [...]
Note: I don't want to do cargo-culting here; simply registering doesn't do magic. Yet it's a valid consideration, I think, especially if it may help solve some of the problems Andrei pointed out.
Jun 25 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/24/13 9:34 PM, QAston wrote:
 This may be completely ridiculous - I'm a newcomer - please destroy me
 gently.

 So, the idea is to make dlang.org a fundation.
That would be nice, and I discussed it with Walter prior to DConf 2013. We temporarily concluded that the overheads and headaches would undo the advantages. Of course, if we had an expert in running a foundation on board, the tradeoffs would change. Andrei
Jun 25 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-06-25 06:34, QAston wrote:

 ---Get a real webdesigner involved
I would say, as long as the web site is written in ddoc, no real web designer will be interested. -- /Jacob Carlborg
Jun 25 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/25/2013 1:09 PM, Jacob Carlborg wrote:
 On 2013-06-25 06:34, QAston wrote:

 ---Get a real webdesigner involved
I would say, as long as the web site is written in ddoc, no real web designer will be interested.
The dconf.org website was done by a real web designer who was paid real money, and it's in ddoc.
Jun 25 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/25/13 1:16 PM, Walter Bright wrote:
 On 6/25/2013 1:09 PM, Jacob Carlborg wrote:
 On 2013-06-25 06:34, QAston wrote:

 ---Get a real webdesigner involved
I would say, as long as the web site is written in ddoc, no real web designer will be interested.
The dconf.org website was done by a real web designer who was paid real money, and it's in ddoc.
Truth be told the designer delivered HTML, which we converted to DDoc. Andrei
Jun 25 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/25/2013 1:19 PM, Andrei Alexandrescu wrote:
 On 6/25/13 1:16 PM, Walter Bright wrote:
 On 6/25/2013 1:09 PM, Jacob Carlborg wrote:
 On 2013-06-25 06:34, QAston wrote:

 ---Get a real webdesigner involved
I would say, as long as the web site is written in ddoc, no real web designer will be interested.
The dconf.org website was done by a real web designer who was paid real money, and it's in ddoc.
Truth be told the designer delivered HTML, which we converted to DDoc.
Yup, but as I recall we specified it with an eye towards easy conversion.
Jun 25 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-06-25 22:19, Andrei Alexandrescu wrote:

 Truth be told the designer delivered HTML, which we converted to DDoc.
OK, I see that "web designer" was probably not the correct term. "Web developer" is perhaps better: the one who builds the final format. -- /Jacob Carlborg
Jun 26 2013
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 25 June 2013 at 20:09:46 UTC, Jacob Carlborg wrote:
 I would say, as long as the web site is written in ddoc, no 
 real web designer will be interested.
For my work sites, I often don't give the designer access to the html at all. They have one of two options: make it work with pure css, or send me an image of what it is supposed to look like, and I'll take it from there. Pure css doesn't always work, so sometimes I have to add or rearrange some html stuff for them, but it usually actually *does* work, after they get over their initial "this is impossible" stage. ddoc would be even easier than this, since the main website skeleton is just a piece of html anyway (see dmd2/src/phobos/std.ddoc and the macro DDOC = ).
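As a hedged illustration of that point (this is not the actual dlang.org source; the macro body, stylesheet name, and class name below are made up), a DDoc page skeleton is just an HTML template with $(TITLE) and $(BODY) holes, so designer-delivered HTML can be pasted around them:

```
DDOC = <!DOCTYPE html>
<html>
<head>
    <title>$(TITLE)</title>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <!-- designer-supplied skeleton HTML would wrap this -->
    <div class="content">
        $(BODY)
    </div>
</body>
</html>
```

Every page compiled with a .ddoc file like this gets the same skeleton, so a redesign mostly means swapping the HTML inside the DDOC macro and the CSS, without touching the page sources.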
Jun 25 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-06-25 23:45, Adam D. Ruppe wrote:

 For my work sites, I often don't give the designer access to the html at
 all. They have one of two options: make it work with pure css, or send
 me an image of what it is supposed to look like, and I'll take it from
 there.
"web designer" was properly not the best word(s). I would say that you're talking about the graphical designer I was talking about the one implementing the design, web developer/frontend developer or what to call it. I wouldn't give the graphical designer access to the code either. It needs to be integrated with the backend code (which is Ruby or similar) anyway, to fetch the correct data and so on. -- /Jacob Carlborg
Jun 26 2013
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 26 June 2013 at 10:18:58 UTC, Jacob Carlborg wrote:
 that you're talking about the graphical designer I was talking 
 about the one implementing the design, web developer/frontend 
 developer or what to call it.
Ah yes. Still though, I don't think ddoc is that big of a deal, especially since there are a few of us here who can do the translations if needed.
 I wouldn't give the graphical designer access to the code 
 either. It needs to be integrated with the backend code (which 
 is Ruby or similar) anyway, to fetch the correct data and so on.
Right.
Jun 26 2013
prev sibling parent reply "Aleksandar Ruzicic" <aleksandar ruzicic.info> writes:
On Tuesday, 25 June 2013 at 20:09:46 UTC, Jacob Carlborg wrote:
 On 2013-06-25 06:34, QAston wrote:

 ---Get a real webdesigner involved
I would say, as long as the web site is written in ddoc, no real web designer will be interested.
There is no need for the designer to know what DDoc is. For the past few years I have worked with many designers who had only basic knowledge of HTML and even less of CSS (most of them don't know anything about JavaScript but they "know jQuery a bit"). They just give me a PSD and I do the slicing and all the coding. So if any redesign of dlang.org is going to happen, I volunteer to do all the coding, so there is no need to look for a designer who is comfortable writing DDoc.
Jun 25 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-06-26 00:55, Aleksandar Ruzicic wrote:

 There is no need for designer to know what DDOC is. For the past few
 years I have worked with many designers which had only basic knowledge
 about HTML and even less about CSS (most of them don't know anything
 about JavaScript but they "know jQuery a bit"). They just give me PSD
 and I do slicing and all coding.
Again, "web designer" was not the correct term. Something more like web developer/frontend developer, whoever writes the final format.
 So if any redesign of dlang.org is going to happen I volunteer to do all
 coding, so there is no need to look for designer which is comfortable
 writing DDOC.
Ok, good. -- /Jacob Carlborg
Jun 26 2013
prev sibling parent reply "ixid" <nuaccount gmail.com> writes:
-You could start taking donations and hire some people to work on
D. This doesn't work as it's a volunteer project. Why should someone get paid when others give their time for free? It would create conflict while being a less effective application of funds; D already gets more than one or two person-years of effort per year. A better use of the money is another D conference, which has been a huge success and has generated both ideas and much greater interest and exposure for D.
Jun 27 2013
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 6/27/13, ixid <nuaccount gmail.com> wrote:
-You could start taking donations and hire some people to work on
A better use of the money is another D conference which has been a huge success and generated both ideas and much greater interest and exposure for D.
Yes, and some better glue for the microphones. :P
Jun 27 2013
prev sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 27 June 2013 at 21:39:28 UTC, ixid wrote:
-You could start taking donations and hire some people to work 
on
D. This doesn't work as it's a volunteer project. Why should someone get paid when others give their time for free? It would create conflict while being a less effective application of funds, D already gets more than one or two people years of effort per year. A better use of the money is another D conference which has been a huge success and generated both ideas and much greater interest and exposure for D.
I don't think paying people should be out of the question, if there were money available. There are often jobs that need doing on a project the size of D that aren't fun, stimulating or interesting, but are still absolutely necessary. Trawling through the documentation for errors/omissions etc. comes to mind. These tasks could be split up into manageable chunks and offered as freelance contracts for a modest but reasonable wage, dependent on urgency, technical skill needed etc. E.g. if someone would pay me even as little as $15-20 an hour to go through Phobos enforcing the style guide everywhere or chasing up old bugs in Bugzilla, I would happily put in a few hours a week. For a student that sort of money goes a long way. (Not suggesting that paying per hour is necessarily a good idea, just illustrating the point with an example.)
Jun 27 2013
prev sibling next sibling parent reply Peter Williams <pwil3058 bigpond.net.au> writes:
On 25/06/13 02:13, Andrei Alexandrescu wrote:
 [...]
Can you think of a better name than "D Summer Of Code"? It's very northern-hemisphere-centric and makes us southerners feel like the rest of the world doesn't know there is a southern hemisphere (or, if they do, that they don't know how the seasons work) :-). Peter
Jun 24 2013
next sibling parent reply "Mike Parker" <aldacron gmail.com> writes:
On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:

 Can you think of a better name than "D Summer Of Code"?  It's 
 very northern hemisphere centric and makes us southerners feel 
 like the rest of the world doesn't know there is a southern 
 hemisphere (or if they do that they don't know the seasons 
 work) :-).

 Peter
D Season of Code! Then we don't have to restrict ourselves to one time of the year.
Jun 25 2013
parent reply "eles" <eles eles.com> writes:
On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:
 On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
 D Season of Code! Then we don't have to restrict ourselves to 
 one time of the year.
D Seasons of Code! Why restrict it to a single season? Let's code all year long! :)
Jun 26 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 26 June 2013 15:04, eles <eles eles.com> wrote:
 On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:
 On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
 D Season of Code! Then we don't have to restrict ourselves to one time of
 the year.
D Seasons of Code! Why to restrict to a single season? Let's code all the year long! :)
Programmers need to hibernate too, you know. ;) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 26 2013
prev sibling next sibling parent Leandro Lucarella <luca llucax.com.ar> writes:
Peter Williams, el 25 de June a las 15:57 me escribiste:
 On 25/06/13 02:13, Andrei Alexandrescu wrote:
[...]
Can you think of a better name than "D Summer Of Code"? It's very northern hemisphere centric and makes us southerners feel like the rest of the world doesn't know there is a southern hemisphere (or if they do that they don't know the seasons work) :-).
Or they know, but they just don't give a fuck :) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- The average person laughs 13 times a day
Jun 25 2013
prev sibling next sibling parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 06/24/2013 10:57 PM, Peter Williams wrote:

 Can you think of a better name than "D Summer Of Code"?  It's very
 northern hemisphere centric and makes us southerners feel like the rest
 of the world doesn't know there is a southern hemisphere
The only southern country is Mexico, which I am told is in the Northern hemisphere. :p Ali
Jun 25 2013
prev sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 25 Jun 2013 15:57:18 +1000
Peter Williams <pwil3058 bigpond.net.au> wrote:
 
 Can you think of a better name than "D Summer Of Code"?  It's very 
 northern hemisphere centric and makes us southerners feel like the
 rest of the world doesn't know there is a southern hemisphere (or if
 they do that they don't know the seasons work) :-).
 
I'm pretty sure the southern hemisphere has summer too...It's just a lot colder ;) Nobody called it "D Warm-Summer of Code".
Jun 25 2013
next sibling parent Peter Williams <pwil3058 bigpond.net.au> writes:
On 26/06/13 06:14, Nick Sabalausky wrote:
 On Tue, 25 Jun 2013 15:57:18 +1000
 Peter Williams <pwil3058 bigpond.net.au> wrote:
 Can you think of a better name than "D Summer Of Code"?  It's very
 northern hemisphere centric and makes us southerners feel like the
 rest of the world doesn't know there is a southern hemisphere (or if
 they do that they don't know the seasons work) :-).
I'm pretty sure the southern hemisphere has summer too...It's just a lot colder ;) Nobody called it "D Warm-Summer of Code".
Not all of it. In tropical Australia, they have two seasons - the wet season (aka the suicide season) and the dry season :-). Peter
Jun 25 2013
prev sibling parent Manu <turkeyman gmail.com> writes:
On 26 June 2013 09:59, Peter Williams <pwil3058 bigpond.net.au> wrote:

 On 26/06/13 06:14, Nick Sabalausky wrote:

 On Tue, 25 Jun 2013 15:57:18 +1000
 Peter Williams <pwil3058 bigpond.net.au> wrote:

 Can you think of a better name than "D Summer Of Code"?  It's very
 northern hemisphere centric and makes us southerners feel like the
 rest of the world doesn't know there is a southern hemisphere (or if
 they do that they don't know the seasons work) :-).
I'm pretty sure the southern hemisphere has summer too...It's just a lot colder ;) Nobody called it "D Warm-Summer of Code".
Not all of it. In tropical Australia, they have two seasons - the wet season (aka the suicide season) and the dry season :-).
I like to think of it as the soaking bloody wet season, and the slightly less wet season ;) Slightly more tolerable than Indonesia, which has only a single 'soaking wet at precisely 4pm every day, but otherwise lovely weather (if you like humidity) season'...
Jun 25 2013
prev sibling next sibling parent reply "Jonas Drewsen" <nospam4321 hotmail.com > writes:
I'm a Danish guy, so there is at least one Dane using D :)

/Jonas
Jun 25 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-06-25 11:42, Jonas Drewsen wrote:

 I'm a Danish guy so there is a at least one dane using D :)
Tomas Lindquist Olsen, creator of LDC (LLVMDC back then) is Danish, if I recall correctly. -- /Jacob Carlborg
Jun 25 2013
prev sibling next sibling parent reply "Joakim" <joakim airpost.net> writes:
Just finished watching Andrei's talk, it was up to his usual high 
standard.

I found the bits about professionalism a bit weird though: can we 
really expect that from a volunteer effort?  I'm pretty sure the 
A/V guys at the conference weren't volunteers, i.e. they were paid.

Along the line that QAston started, if you want more 
professionalism, is there any interest in producing a commercial 
D compiler?  If not, why not?  I notice that Walter sells C and 
C++ compilers and source on digitalmars.com, but strangely not D. 
There are interesting business/source models nowadays where you 
can be mostly open source and still sell a commercial product.

For example, Walter has often talked about optimizations in the 
compiler that he'd like to get to.  There could be two compilers: 
one where the source is fully publicly available, another made 
available to paying users, which has additional optimizations 
done either by Walter or others who he supervises, but the source 
for those optimizations would not be available publicly, though 
perhaps made available only to the buyers under a non-OSS 
license.  After enough time has passed for the optimization work 
to be paid for, the optimization patches would eventually be 
merged into the slower, non-paid version.  Android uses a similar 
hybrid model, which has obviously been enormously successful.

Another possibility is a bounty system, where users pledge money 
towards needed features or bug fixes.  It'd basically be a more 
distributed version of the hybrid approach I've outlined.

I wonder what the response would be to injecting some money and 
commercialism into the D ecosystem.
Jun 25 2013
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Tuesday, 25 June 2013 at 15:44:02 UTC, Joakim wrote:
 Just finished watching Andrei's talk, it was up to his usual 
 high standard.

 [...]
Given how D's whole success stems from its community, I think an "open core" model (even with time-lapse) would be disastrous. It'd be like kicking everyone in the teeth after all the work they put in.
Jun 25 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling 
wrote:
 I wonder what the response would be to injecting some money 
 and commercialism into the D ecosystem.
Given how D's whole success stems from its community, I think an "open core" model (even with time-lapse) would be disastrous. It'd be like kicking everyone in the teeth after all the work they put in.
I don't know the views of the key contributors, but I wonder if they would have such a knee-jerk reaction against any paid/closed work. The current situation would seem much more of a kick in the teeth to me: spending time trying to be "professional," as Andrei asks, and producing a viable, stable product used by a million developers, corporate users included, but never receiving any compensation for this great tool you've poured effort into, that your users are presumably often making money with. I understand that such a shift from being mostly OSS to having some closed components can be tricky, but that depends on the particular community. I don't think any OSS project has ever become popular without having some sort of commercial model attached to it. C++ would be nowhere without commercial compilers; Linux would be unheard of without IBM and Red Hat figuring out a consulting/support model around it; and Android would not have put the Linux kernel on hundreds of millions of computing devices without the hybrid model that Google employed, where they provide an open source core, paid for through increased ad revenue from Android devices, and the hardware vendors provide closed hardware drivers and UI skins on top of the OSS core. This talk prominently mentioned scaling to a million users and being professional: going commercial is the only way to get there.
Jun 25 2013
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Jun 25, 2013 at 2:37 PM, Joakim <joakim airpost.net> wrote:

 On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling wrote:

 I wonder what the response would be to injecting some money and
 commercialism into the D ecosystem.
Given how D's whole success stems from its community, I think an "open core" model (even with time-lapse) would be disastrous. It'd be like kicking everyone in the teeth after all the work they put in.
I don't know the views of the key contributors, but I wonder if they would have such a knee-jerk reaction against any paid/closed work. The current situation would seem much more of a kick in the teeth to me: spending time trying to be "professional," as Andrei asks, and producing a viable, stable product used by a million developers, corporate users included, but never receiving any compensation for this great tool you've poured effort into, that your users are presumably often making money with. I understand that such a shift from being mostly OSS to having some closed components can be tricky, but that depends on the particular community. I don't think any OSS project has ever become popular without having some sort of commercial model attached to it. C++ would be nowhere without commercial compilers; linux would be unheard of without IBM and Red Hat figuring out a consulting/support model around it; and Android would not have put the linux kernel on hundreds of millions of computing devices without the hybrid model that Google employed, where they provide an open source core, paid for through increased ad revenue from Android devices, and the hardware vendors provide closed hardware drivers and UI skins on top of the OSS core. This talk prominently mentioned scaling to a million users and being professional: going commercial is the only way to get there.
IDEs are something you can have a freemium model for. Core languages are not these days. If you have to pay to get the optimized version of the language there are just too many other places to look that don't charge. You want the best version of the language to be in everyone's hands. But there can be some tools you have to pay for. http://www.wingware.com/ is a good example of a commercial Python IDE that adds value to the community with a commercial offering. I paid for a copy back when I was doing a lot of python development. It is definitely not a business I would want to be in, though. I was surprised to see they are still alive, actually. Hard to make much money selling things to developers. --bb
Jun 25 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 01:25:42 UTC, Bill Baxter wrote:
 On Tue, Jun 25, 2013 at 2:37 PM, Joakim <joakim airpost.net> 
 wrote:
 This talk prominently mentioned scaling to a million users and 
 being
 professional: going commercial is the only way to get there.
IDEs are something you can have a freemium model for. Core languages are not these days. If you have to pay to get the optimized version of the language there are just too many other places to look that don't charge. You want the best version of the language to be in everyone's hands... Hard to make much money selling things to developers.
I agree that there is a lot of competition among programming languages. However, Visual Studio brought in $400 million in extensions alone a couple of years back: http://blogs.msdn.com/b/somasegar/archive/2011/04/12/happy-1st-birthday-visual-studio-2010.aspx Microsoft doesn't break out numbers for Visual Studio itself, but it might be a billion+ dollars a year, not to mention all the other commercial C++ compilers out there. If the aim is to displace C++ and gain a million users, it is impossible to do so without commercial implementations. All the languages you are thinking of that do not offer a single commercial implementation (remember, even Perl and Python have commercial options, e.g. ActiveState) have almost no usage compared to C++. It is true that there are large companies like Apple or Sun/Oracle that give away a lot of tooling for free, but D doesn't have such corporate backing. It is amazing how far D has gotten with no business model: money certainly isn't everything. But it is probably impossible to get to a million users or offer professionalism without commercial implementations. In any case, the fact that the D front-end is under the Artistic license, and most of the rest of the code is released under similarly liberal licensing, means that someone can do this on their own, without any other permission from the community, and I expect that if D is successful, someone will. I'm simply suggesting that the original developers jump-start that process by doing it themselves, in the hybrid form I've suggested, rather than potentially getting cut out of the decision-making process when somebody else does it.
Jun 25 2013
parent reply Leandro Lucarella <luca llucax.com.ar> writes:
Joakim, el 26 de June a las 08:33 me escribiste:
 It is amazing how far D has gotten with no business model: money
 certainly isn't everything.  But it is probably impossible to get to
 a million users or offer professionalism without commercial
 implementations.
Yeah, right, probably Python and Ruby have only 5k users... This argument is BS. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Are you such a dreamer? To put the world to rights? I'll stay home forever Where two & two always makes up five
Jun 26 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-06-26 12:16, Leandro Lucarella wrote:

 Yeah, right, probably Python and Ruby have only 5k users...
There are companies backing those languages, at least Ruby, to some extent. -- /Jacob Carlborg
Jun 26 2013
next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 26 June 2013 at 12:39:05 UTC, Jacob Carlborg wrote:
 On 2013-06-26 12:16, Leandro Lucarella wrote:

 Yeah, right, probably Python and Ruby have only 5k users...
There are companies backing those languages, at least Ruby, to some extent.
They don't own them, though -- they commit resources to them because the language's ongoing development serves their business needs.
Jun 26 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-06-26 15:18, Joseph Rushton Wakeling wrote:

 They don't own them, though -- they commit resources to them because the
 language's ongoing development serves their business needs.
Yes, exactly. -- /Jacob Carlborg
Jun 26 2013
prev sibling parent Leandro Lucarella <luca llucax.com.ar> writes:
Jacob Carlborg, on 26 June at 14:39 you wrote:
 On 2013-06-26 12:16, Leandro Lucarella wrote:
 
Yeah, right, probably Python and Ruby have only 5k users...
There are companies backing those languages, at least Ruby, to some extent.
Read my other post, I won't repeat myself :)

-- 
Leandro Lucarella (AKA luca)                    http://llucax.com.ar/
Jun 26 2013
prev sibling next sibling parent reply Leandro Lucarella <luca llucax.com.ar> writes:
Joakim, on 25 June at 23:37 you wrote:
 On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling
 wrote:
I wonder what the response would be to injecting some money and
commercialism into the D ecosystem.
Given how D's whole success stems from its community, I think an "open core" model (even with time-lapse) would be disastrous. It'd be like kicking everyone in the teeth after all the work they put in.
I don't know the views of the key contributors, but I wonder if they would have such a knee-jerk reaction against any paid/closed work.
Against being paid, no; against being closed, YES. Please don't even think about it. It was a hell of a ride making D more open; let's not step back now.

What we need is companies paying people to improve the compiler and toolchain. This is slowly starting to happen: at Sociomantic there are already 2 of us (Don and me) dedicating some time to improving D as part of our job. We need more of this, and to get it we need companies to start using D, and to get that we need professionalism (I agree 100% with Andrei on this one). It's a bootstrapping effort, and it's not that volunteers need more time to be professional; it's just that you have to want to make the jump. I think it's way better to do less stuff but with higher quality. Nobody is asking people for more time, just to change the focus a bit, at least for some time. Again, this is only bootstrapping, and bootstrapping is always hard and painful. We need to make the jump so companies feel comfortable using D; then things will start rolling by themselves.
 The current situation would seem much more of a kick in the teeth to
 me: spending time trying to be "professional," as Andrei asks, and
 producing a viable, stable product used by a million developers,
 corporate users included, but never receiving any compensation for
 this great tool you've poured effort into, that your users are
 presumably often making money with.
 
 I understand that such a shift from being mostly OSS to having some
 closed components can be tricky, but that depends on the particular
 community.  I don't think any OSS project has ever become popular
 without having some sort of commercial model attached to it.  C++
 would be nowhere without commercial compilers; linux would be
 unheard of without IBM and Red Hat figuring out a consulting/support
 model around it; and Android would not have put the linux kernel on
 hundreds of millions of computing devices without the hybrid model
 that Google employed, where they provide an open source core, paid
 for through increased ad revenue from Android devices, and the
 hardware vendors provide closed hardware drivers and UI skins on top
 of the OSS core.
First of all, your examples are completely wrong. The projects you are mentioning are 100% free, with no closed components (except for components done by third parties). Your examples just reinforce what I say above.

Linux is completely GPL, so it's not even merely open source; it's Free Software, meaning the license is more restrictive than, for example, Phobos'. That makes it harder for companies to adopt, and you can't change it in a closed way if you want to distribute a binary. Same for C++, which is not a project but a standard; yet the most successful and widespread compiler, GCC, is not only free, it's the battle horse of free software, of the GNU project, created by the most extreme free software advocate ever. Android might be the only valid case (though I'm not really familiar with the Android model), but the kernel, since it's based on Linux, has to ship with source code when released. Maybe the drivers are closed source.

You are missing more closely related projects, like Python, Haskell, Ruby, Perl, and probably 90% of the newish programming languages, which are all 100% open source. And very successful, I might say. The key is always breaking into corporate ground and making those corporations contribute.

There are valid examples of projects using hybrid models, but they are usually software-as-a-service models, not very applicable to a compiler/language: WordPress, or other web applications. Other valid examples are MySQL, and I think Qt used a hybrid model at least once. Lots of them died and were resurrected as 100% free projects, like StarOffice -> OpenOffice -> LibreOffice.

And finally, making the *optimizer* (or some optimizations) closed will hardly be a good business, given that there are 2 other backends out there that usually kick the DMD backend's ass already, so people needing more speed will probably just switch to gdc or ldc.
 This talk prominently mentioned scaling to a million users and being
 professional: going commercial is the only way to get there.
As in breaking into the commercial world? Then agreed. If you imply commercial == closing some parts of the source, then I think you are WAY OFF.

-- 
Leandro Lucarella (AKA luca)                    http://llucax.com.ar/
Jun 26 2013
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:
 Android might be the only valid case (but I'm not really 
 familiar with Android model), but the kernel, since is based on 
 Linux, has to have the source code when
 released. Maybe the drivers are closed source.
It is perfectly open: http://source.android.com/source/licenses.html ;) Drivers tend to be closed source, but drivers are not part of the Android project; they are private to vendors.
Jun 26 2013
prev sibling parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:
 Joakim, el 25 de June a las 23:37 me escribiste:
 I don't know the views of the key contributors, but I wonder 
 if they
 would have such a knee-jerk reaction against any paid/closed 
 work.
Against being paid no, against being closed YES. Please don't even think about it. It was a hell of a ride trying to make D more open to step back now.
I suggest you read my original post more carefully. I have not suggested closing up the entire D toolchain, as you seem to imply. I have suggested working on optimization patches in a closed-source manner and providing two versions of the D compiler: one that is faster, closed, and paid, with these optimization patches, and another that is slower, open, and free, without the optimization patches.

Over time, the optimization patches are merged back to the free branch, so that the funding from the closed compiler makes even the free compiler faster, but only after some delay, so that users who value performance will actually pay for the closed compiler. There can be a hard time limit, say nine months, so that you know any closed patches from nine months back will be opened and applied to the free compiler. I suspect that the money will be good enough that any bugfixes or features added by the closed developers will be added to the free compiler right away, with no delay.
 What we need is companies paying to people to improve the
 compiler and toolchain. This is slowly starting to happen, in
 Sociomantic we are already 2 people dedicating some time to 
 improve D as
 part of our job (Don and me).
Thanks for the work that you and Don have done with Sociomantic. Why do you think more companies don't do this? My point is that if there were money coming in from a paid compiler, Walter could fund even more such work.
 We need more of this, and to get this, we need companies to 
 start using
 D, and to get this, we need professionalism (I agree 100% with 
 Andrei on
 this one). Is a bootstrap effort, and is not like volunteers 
 need more
 time to be professional, is just that you have to want to make 
 the jump.
I think this ignores the decades-long history we have with open source software by now. It is not merely "wanting to make the jump," most volunteers simply do not want to do painful tasks like writing documentation or cannot put as much time into development when no money is coming in. Simply saying "We have to try harder to be professional" seems naive to me.
 I think is way better to do less stuff but with higher quality, 
 nobody
 is asking people for more time, is just changing the focus a 
 bit, at
 least for some time. Again, this is only bootstrapping, and is 
 always
 hard and painful. We need to make the jump to make companies 
 comfortable
 using D, then things will start rolling by themselves.
If I understand your story right, the volunteers need to put a lot of effort into "bootstrapping" the project to be more professional, companies will see this and jump in, then they fund development from then on out? It's possible, but is there any example you have in mind? The languages that go this completely FOSS route tend not to have as much adoption as those with closed implementations, like C++.
 First of all, your examples are completely wrong. The projects 
 you are
 mentioning are 100% free, with no closed components (except for
 components done by third-party).
You are misstating what I said: I said "commercial," not "closed," and gave different examples of commercial models. But let's look at them.
 Your examples are just reinforcing what
 I say above. Linux is completely GPL, so it's not even only 
 open source.
 Is Free Software, meaning the license if more restrictive than, 
 for
 example, phobos. This means is harder to adopt by companies and 
 you
 can't possibly change it in a closed way if you want to 
 distribute
 a binary.
And yet the linux kernel ships with many binary blobs, almost all the time. I don't know how they legally do it, considering the GPL, yet it is much more common to run a kernel with binary blobs than a purely FOSS version. The vast majority of linux installs are due to Android, and every single one has significant binary blobs and closed-source modifications to the Android source, which is allowed since most of Android is under the more liberal Apache license, with only the linux kernel under the GPL.

Again, I don't know how they get away with all the binary drivers in the kernel; perhaps that is a grey area with the GPL. For example, even the most open source Android devices, the Nexus devices sold directly by Google and running stock Android, have many binary blobs:

https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, where you can "change it in a closed way" because you are not "distributing a binary." Google takes advantage of this to run linux on a million servers powering their search engine, but does not release the proprietary patches for their linux kernel.

So if one looks at linux in any detail, hybrid models are more the norm than the exception, even with the GPL. :)
 Same for C++, which is not a project, is a standards, but the
 most successful and widespread compiler, GCC, not only is free, 
 is the
 battle horse of free software, of the GNU project and created 
 by the
 most extremist free software advocate ever.
D is not just a project but a standard also. I wouldn't say gcc is the "most successful and widespread compiler"; that's probably Microsoft's compiler, since Windows' market share is still much larger than Linux's. But yes, gcc is a very popular open-source implementation: I didn't say there wouldn't be an open-source D compiler also. But I don't think C++ would be where it is today if only open-source implementations like gcc supported it, which is the case today with D.
 Android might be the only
 valid case (but I'm not really familiar with Android model), 
 but the
 kernel, since is based on Linux, has to have the source code 
 when
 released. Maybe the drivers are closed source.
As I said earlier, most devices' drivers are almost always closed and the non-GPL parts of Android, which are the majority, are usually customized and the source is usually not released, because most of Android is Apache-licensed. I think this closed option is a key reason for the success of Android. Hell, the hardware vendors would never have adopted Android if not for this, as Google well knew.
 You are missing more closely related projects, like Python, 
 Haskel,
 Ruby, Perl, and probably 90% of the newish programming 
 languages, which
 are all 100% open source. And very successful I might say. The 
 key is
 always breaking into the corporate ground and make those 
 corporations
 contribute.
I believe all of these projects have commercial implementations, with the possible exception of Haskell. Still, all of them combined have much less market share than C++, possibly because they use the weaker consulting/support commercial model most of the time. One of the main reasons C++ is much more popular is that it has very high-performance closed implementations; do you disagree? I'm suggesting D will need something similar to get as popular.
 There are valid examples of project using hybrid models but 
 they are
 usually software as a service models, not very applicable to
 a compiler/language, like Wordpress, or other web applications. 
 Other
 valid examples are MySQL, or QT I think used an hybrid model at 
 least
 once. Lots of them died and were resurrected as 100% free 
 projects, like
 StarOffice -> OpenOffice -> LibreOffice.
There are all kinds of hybrid models out there, some would work for compilers also. I think it's instructive that you are listing some of the largest and most successful, mostly-OSS projects in this list. :)
 And finally making the *optimizer* (or some optimizations) 
 closed will
 be hardly a good business, being that there are 2 other 
 backends out
 there that usually kicks DMD backend ass already, so people 
 needing more
 speed will probably just switch to gdc or ldc.
Let me turn this argument around on you: if there is always competition from ldc and gdc, why are you so scared of another option of a slightly-closed, paid compiler? If it's not "a good business," it will fail and go away. I think it would be very successful.
 As in breaking into the commercial world? Then agreed. If you 
 imply
 commercial == closing some parts of the source, then I think 
 you are WAY
 OFF.
OK, so it looks like you are fine with commercial models that keep all the source open, but not with those that close _any_ of the source. The problem is that your favored consulting or support models are much weaker business models than a product model, which is much of the reason why Microsoft still makes almost two orders of magnitude more revenue with their software products than Red Hat makes with their consulting/support model. I am suggesting a unique hybrid product model because I think it will bring in the most money for the least discomfort. That ratio is one that D developers often talk about optimizing in technical terms, I'm suggesting the same in business terms. :)
 It is amazing how far D has gotten with no business model: 
 money
 certainly isn't everything.  But it is probably impossible to 
 get to
 a million users or offer professionalism without commercial
 implementations.
Yeah, right, probably Python and Ruby have only 5k users... This argument is BS.
First off, they both have commercial implementations. Second, they still have only a small fraction of C++'s share: part of this is probably because they don't have as many closed, performant implementations as C++ does.

I realize this is a religious issue for some people and they cannot be convinced. In a complex, emerging field like this, it is easy to claim that if OSS projects just try harder, they can succeed. But after two decades, it has never happened without stepping back and employing a hybrid model. I have examined the evidence and presented arguments for those who are willing to listen, as I'm just about pragmatically using whatever model works best. I think recent history has shown that hybrid models work very well, possibly the best. :)
Jun 26 2013
next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 26 June 2013 at 15:52:33 UTC, Joakim wrote:
 I suggest you read my original post more carefully.  I have not 
 suggested closing up the entire D toolchain, as you seem to 
 imply.  I have suggested working on optimization patches in a 
 closed-source manner and providing two versions of the D 
 compiler: one that is faster, closed, and paid, with these 
 optimization patches, another that is slower, open, and free, 
 without the optimization patches.

 Over time, the optimization patches are merged back to the free 
 branch, so that the funding from the closed compiler makes even 
 the free compiler faster, but only after some delay so that 
 users who value performance will actually pay for the closed 
 compiler.  There can be a hard time limit, say nine months, so 
 that you know any closed patches from nine months back will be 
 opened and applied to the free compiler.  I suspect that the 
 money will be good enough so that any bugfixes or features 
 added by the closed developers will be added to the free 
 compiler right away, with no delay.
Perhaps you'd like to explain to the maintainers of GDC and LDC why, after all they've done for D, you think it would be acceptable to turn to them and say: "Hey guys, we're going to make improvements and keep them from you for 9 months so we can make money" ... ? Or doesn't the cooperative relationship between the 3 main D compilers mean much to you?
 Thanks for the work that you and Don have done with 
 Sociomantic.  Why do you think more companies don't do this?  
 My point is that if there were money coming in from a paid 
 compiler, Walter could fund even more such work.
Leaving aside the moral issues, you might consider that any work paid for by revenues would be offset by a drop in voluntary contributions, including corporate contributors. And sensible companies will avoid "open core" solutions.

A few articles worth reading on these factors:

http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/
 I think this ignores the decades-long history we have with open 
 source software by now.  It is not merely "wanting to make the 
 jump," most volunteers simply do not want to do painful tasks 
 like writing documentation or cannot put as much time into 
 development when no money is coming in.  Simply saying "We have 
 to try harder to be professional" seems naive to me.
Odd that you talk about ignoring things, because the general trend we've seen in the decades-long history of free software is that the software business seems to be getting more and more open with every year. These days there's a strong expectation of free licensing.
 If I understand your story right, the volunteers need to put a 
 lot of effort into "bootstrapping" the project to be more 
 professional, companies will see this and jump in, then they 
 fund development from then on out?  It's possible, but is there 
 any example you have in mind?  The languages that go this 
 completely FOSS route tend not to have as much adoption as 
 those with closed implementations, like C++.
It's hardly fair to compare languages without also taking into account their relative age. C++ has its large market share substantially due to historical factors -- it was a major "first mover", and until the advent of D, it was arguably the _only_ language that had that combination of power/flexibility and performance.

So far as compiler implementations are concerned, I'd say that it was the fact that there were many different implementations that helped C++. On the other hand, proprietary implementations may in some ways have damaged adoption, as before standardization you'd have competing, incompatible proprietary versions which limited the portability of code.
 And yet the linux kernel ships with many binary blobs, almost 
 all the time.  I don't know how they legally do it, considering 
 the GPL, yet it is much more common to run a kernel with binary 
 blobs than a purely FOSS version.  The vast majority of linux 
 installs are due to Android and every single one has 
 significant binary blobs and closed-source modifications to the 
 Android source, which is allowed since most of Android is under 
 the more liberal Apache license, with only the linux kernel 
 under the GPL.
The binary blobs are nevertheless part of the vanilla kernel, not something "value added" that gets charged for. They're irrelevant to the development model of the kernel -- they are an irritation that's tolerated for practical reasons, rather than a design feature.
 Again, I don't know how they get away with all the binary 
 drivers in the kernel, perhaps that is a grey area with the 
 GPL.  For example, even the most open source Android devices, 
 the Nexus devices sold directly by Google and running stock 
 Android, have many binary blobs:

 https://developers.google.com/android/nexus/drivers

 Other than Android, linux is really only popular on servers, 
 where you can "change it in a closed way" because you are not 
 "distributing a binary."  Google takes advantage of this to run 
 linux on a million servers powering their search engine, but 
 does not release the proprietary patches for their linux kernel.

 So if one looks at linux in any detail, hybrid models are more 
 the norm than the exception, even with the GPL. :)
But no one is selling proprietary extensions to the kernel (not that they could anyway, but ...). They're building services that _use_ the kernel, and in building those services they sometimes write patches to serve particular needs. In a similar way, other companies are building services using D and sometimes they write replacements for existing D functionality that better serve their needs (e.g. Leandro's GC work).
 I believe all of these projects have commercial 
 implementations, with the possible exception of Haskell.  
 Still, all of them combined have much less market share than 
 C++, possibly because they use the weaker consulting/support 
 commercial model most of the time.  One of the main reasons C++ 
 is much more popular is that it has very high-performance 
 closed implementations, do you disagree?  I'm suggesting D will 
 need something similar to get as popular.
C++'s popularity is most likely largely down to historical contingency. It was a first-mover in its particular design, and use begets use. What you should rather be thinking of is: can you think of _any_ major new programming language (as in, rising to prominence in the last 10 years and enjoying significant cross-platform success) that wasn't open source? A language that Microsoft pushes is a special case, simply because if Microsoft makes a particular language a key tool for Windows development, that language will get used. But the others? The reference versions are all open.
 Let me turn this argument around on you: if there is always 
 competition from ldc and gdc, why are you so scared of another 
 option of a slightly-closed, paid compiler?  If it's not "a 
 good business," it will fail and go away.  I think it would be 
 very successful.
No one is scared of the idea of a slightly or even wholly closed, paid compiler -- anyone is free to implement one and try to sell it. People are objecting to the idea of the reference implementation of the D language being distributed in a two-tier version with advanced features only available to paying customers. You need to understand the difference between proprietary implementations of a language existing, versus the mainstream development work on the language following any kind of proprietary model.
 OK, so it looks like you are fine with commercial models that 
 keep all the source open, but not with those that close _any_ 
 of the source.
For the reference implementation of the D language? Absolutely.
 The problem is that your favored consulting or support models 
 are much weaker business models than a product model, which is 
 much of the reason why Microsoft still makes almost two orders 
 of magnitude more revenue with their software products than Red 
 Hat makes with their consulting/support model.
Microsoft is a virtual monopolist in at least two areas of commercial software -- desktop OS and office document suites. The business models that work for them are not necessarily going to bring success for other organizations. A large part of Red Hat's success comes from the fact that it offers customers a different deal from Microsoft.
 I am suggesting a unique hybrid product model because I think 
 it will bring in the most money for the least discomfort.  That 
 ratio is one that D developers often talk about optimizing in 
 technical terms, I'm suggesting the same in business terms. :)
I suggest that you have not thought through the variety of different business options available. It is a shame that your first thought for commercialization of D was "Let's close bits of it up!" instead of, "How can I make a successful commercial model of D that works _with_ its wonderful open community?"
 First off, they both have commercial implementations.  Second, 
 they still only have a small fraction of the share as C++: part 
 of this is probably because they don't have as many closed, 
 performant implementations as C++ does.
The reference implementations are free software, and if they weren't, they'd never have got any decent traction.
 I realize this is a religious issue for some people and they 
 cannot be convinced.  In a complex, emerging field like this, 
 it is easy to claim that if OSS projects just try harder, they 
 can succeed.  But after two decades, it has never happened, 
 without stepping back and employing a hybrid model.
I think your understanding of free software history is somewhat flawed, and that you conflate rather too many quite different business models under the single title of "hybrid". (To take just one of your examples, Red Hat's consulting/support model is not the same as paying for closed additions to the software. The _software_ of Red Hat Enterprise Linux is still free!) You also don't seem to readily appreciate the differences between what works for software services, versus what works for programming languages -- or the impact that free vs. non-free can have on adoption rates. (D might have gained more traction much earlier, if not for fears around the DMD backend.)
 I have examined the evidence and presented arguments for those 
 who are willing to listen, as I'm just about pragmatically 
 using whatever model works best.  I think recent history has 
 shown that hybrid models work very well, possibly the best. :)
Please name a recent successful programming language (other than those effectively imposed by diktat by big players like Microsoft or Apple) that did not build its success around a reference implementation that is free software. Then we'll talk. :-)
Jun 26 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 17:28:22 UTC, Joseph Rushton 
Wakeling wrote:
 Perhaps you'd like to explain to the maintainers of GDC and LDC 
 why, after all they've done for D, you think it would be 
 acceptable to turn to them and say: "Hey guys, we're going to 
 make improvements and keep them from you for 9 months so we can 
 make money" ... ?
Why are they guaranteed such patches? They have advantages because they use different compiler backends. If they think their backends are so great, let them implement their own optimizations and compete.
 Or doesn't the cooperative relationship between the 3 main D 
 compilers mean much to you?
As I've noted in an earlier response, LDC could also provide a closed version and license those patches.
 Leaving aside the moral issues, you might consider that any 
 work paid for by revenues would be offset by a drop in 
 voluntary contributions, including corporate contributors.  And 
 sensible companies will avoid "open core" solutions.
Or maybe the work paid for by revenues would be far more, and even more people would volunteer, once D becomes a more successful project through funding from the paid compiler. Considering how dominant "open core" and other hybrid models are these days, it is laughable to suggest that anyone is avoiding them. :)
 A few articles worth reading on these factors:
 http://webmink.com/essays/monetisation/
 http://webmink.com/essays/open-core/
 http://webmink.com/essays/donating-money/
I have corresponded with the author of that blog before. I found him to be a religious zealot who recounted the four freedoms of GNU to me like a mantra. Perhaps that's why Sun was run into the ground when they followed his ideas about open sourcing most everything. I don't look to him for worthwhile reading on these issues.
 I think this ignores the decades-long history we have with 
 open source software by now.  It is not merely "wanting to 
 make the jump," most volunteers simply do not want to do 
 painful tasks like writing documentation or cannot put as much 
 time into development when no money is coming in.  Simply 
 saying "We have to try harder to be professional" seems naive 
 to me.
Odd that you talk about ignoring things, because the general trend we've seen in the decades-long history of free software is that the software business seems to getting more and more open with every year. These days there's a strong expectation of free licensing.
Yes, it is getting "more and more open" because hybrid models are being used more. :) Pure open source software, with no binary blobs, has almost no adoption, so it isn't your preferred purist approach that is doing well. And the reasons are the ones I gave: volunteers can do a lot of things, but there are a lot of things they won't do.
 It's hardly fair to compare languages without also taking into 
 account their relative age.  C++ has its large market share 
 substantially due to historical factors -- it was a major 
 "first mover", and until the advent of D, it was arguably the 
 _only_ language that had that combination of power/flexibility 
 and performance.
Yes, C++ has been greatly helped by its age.
 So far as compiler implementations are concerned, I'd say that 
 it was the fact that there were many different implementations 
 that helped C++.  On the other hand, proprietary 
 implementations may in some ways have damaged adoption, as 
 before standardization you'd have competing, incompatible 
 proprietary versions which limited the portability of code.
But you neglect to mention that most of those "many different implementations" were closed. I agree that completely closed implementations can also cause incompatibilities, which is why I have suggested a hybrid model with limited closed-source patches.
 The binary blobs are nevertheless part of the vanilla kernel, 
 not something "value added" that gets charged for.  They're 
 irrelevant to the development model of the kernel -- they are 
 an irritation that's tolerated for practical reasons, rather 
 than a design feature.
They are not always charged for, but they put the lie to the claims that linux uses a pure open source model. Rather, it is usually a different kind of hybrid model. If it were so pure, there would be no blobs at all. The blobs are certainly not irrelevant, as linux wouldn't run on all the hardware that needs those binary blobs, if they weren't included. Not sure what to make of your non sequitur of binary blobs not being a "design feature." As for paying for blobs, I'll note that the vast majority of linux kernels installed are in Android devices, where one pays for the hardware _and_ the development effort to develop the blobs that run the hardware. So paying for the "value added" from blobs seems to be a very successful model. :)
 So if one looks at linux in any detail, hybrid models are more 
 the norm than the exception, even with the GPL. :)
But no one is selling proprietary extensions to the kernel (not that they could anyway, but ...). They're building services that _use_ the kernel, and in building those services they sometimes write patches to serve particular needs.
Nobody said they were writing "proprietary extensions to the kernel." The point is that almost all linux kernels in use are used with binary blobs or a proprietary-patched Android stack on top. Therefore, linux implementations benefit from a different kind of hybrid model, but linux certainly isn't purely open source most of the time.
 In a similar way, other companies are building services using D 
 and sometimes they write replacements for existing D 
 functionality that better serve their needs (e.g. Leandro's GC 
 work).
And I'm similarly proposing that there be a D compiler that has better, closed-source optimizations but is paid, with the optimizations guaranteed to be open sourced after some time. I'm not sure why you draw the line there.
 C++'s popularity is most likely largely down to historical 
 contingency.  It was a first-mover in its particular design, 
 and use begets use.
You keep trying to pin C++'s popularity solely on its age, and while I don't deny that its early entry helped, I don't think it is quite as determinative as you do.
 What you should rather be thinking of is: can you think of 
 _any_ major new programming language (as in, rising to 
 prominence in the last 10 years and enjoying significant 
 cross-platform success) that wasn't open source?
Java wasn't open source initially, but it was open sourced after its success. Objective-C has become very popular recently with the rise of iOS. I read that Apple open-sourced their patches to gcc, because of the GPL, but kept their Obj-C runtime libraries closed. More recently, they have switched to the BSD-licensed LLVM/clang and they may be using a hybrid model there, tough to tell. I cannot think of any other language that is in the same stratosphere as C and C++: http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html

 simply because if Microsoft makes a particular language a key 
 tool for Windows development, that language will get used.

 But the others?  The reference versions are all open.
Nobody has suggested otherwise for D. I've suggested an open reference version and a faster, paid version with some closed-source patches. This sort of mix of commercial and OSS implementations appears to be the dominant model, for a language with any adoption.
 No one is scared of the idea of a slightly or even wholly 
 closed, paid compiler -- anyone is free to implement one and 
 try to sell it.
Of course. You could even take ldc and do it, as the license allows it.
 People are objecting to the idea of the reference 
 implementation of the D language being distributed in a 
 two-tier version with advanced features only available to 
 paying customers.

 You need to understand the difference between proprietary 
 implementations of a language existing, versus the mainstream 
 development work on the language following any kind of 
 proprietary model.
Nobody is unaware of the difference; I have always said that Walter would put out an open reference version alongside a slightly-closed paid version. Why do you object to that?
 Microsoft is a virtual monopolist in at least two areas of 
 commercial software -- desktop OS and office document suites.  
 The business models that work for them are not necessarily 
 going to bring success for other organizations.  A large part 
 of Red Hat's success comes from the fact that it offers 
 customers a different deal from Microsoft.
I can list dozens of other product companies that make orders of magnitude more revenue than the consulting/support companies, OSS or not, that you prefer: Oracle, SAP, Apple, on and on, not to mention all the hybrid models I've already listed. I simply mentioned Microsoft because they sell the most compilers, but if you want to compare all such product companies, it's a rout. :)
 I suggest that you have not thought through the variety of 
 different business options available.  It is a shame that your 
 first thought for commercialization of D was "Let's close bits 
 of it up!" instead of, "How can I make a successful commercial 
 model of D that works _with_ its wonderful open community?"
On the contrary, I have examined many of these business options over the years. I _was_ answering the latter need, I just think the former is the best way to do it, as long as you open source the closed patches after a guaranteed time limit. I suggest instead that it is you who has not thought through or examined the evidence that your preferred pure-OSS models don't work as well.
 First off, they both have commercial implementations.  Second, 
 they still only have a small fraction of the share as C++: 
 part of this is probably because they don't have as many 
 closed, performant implementations as C++ does.
The reference implementations are free software, and if they weren't, they'd never have got any decent traction.
I believe C++ was originally licensed by AT&T, so it wasn't free software when it got popular. The others you may have in mind are nowhere near as popular.
 I realize this is a religious issue for some people and they 
 cannot be convinced.  In a complex, emerging field like this, 
 it is easy to claim that if OSS projects just try harder, they 
 can succeed.  But after two decades, it has never happened, 
 without stepping back and employing a hybrid model.
I think your understanding of free software history is somewhat flawed, and that you conflate rather too many quite different business models under the single title of "hybrid". (To take just one of your examples, Red Hat's consulting/support model is not the same as paying for closed additions to the software. The _software_ of Red Hat Enterprise Linux is still free!)
If my history is flawed, you could provide an example. :) I believe Red Hat uses the same binary blobs that the linux kernel provides, so it is not pure open source and can be classified as hybrid. I think they also bundle in their own closed-source software on top, though I've never paid them.
 You also don't seem to readily appreciate the differences 
 between what works for software services, versus what works for 
 programming languages -- or the impact that free vs. non-free 
 can have on adoption rates.  (D might have gained more traction 
 much earlier, if not for fears around the DMD backend.)
I suggest that this assertion is best leveled at you. :) I think D would gain much more traction now, if it were making money from a paid version.
 I have examined the evidence and presented arguments for those 
 who are willing to listen, as I'm just about pragmatically 
 using whatever model works best.  I think recent history has 
 shown that hybrid models work very well, possibly the best. :)
Please name a recent successful programming language (other than those effectively imposed by diktat by big players like Microsoft or Apple) that did not build its success around a reference implementation that is free software. Then we'll talk. :-)
I am not going to attack your strawman, as I have not suggested that there shouldn't be a reference implementation for D that is open source. I have simply suggested an additional paid version that funds development for both versions. Let me throw your request back at you, with the _appropriate_ details: please name a recent successful programming language that achieved any kind of success and _didn't_ have commercial support or implementations. Alternately, name one that came anywhere close to the massive popularity of C++ with only a consulting/support model. The evidence is not on your side. :)
Jun 26 2013
parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 26 June 2013 at 19:01:42 UTC, Joakim wrote:
 Why are they guaranteed such patches?  They have advantages 
 because they use different compiler backends.  If they think 
 their backends are so great, let them implement their own 
 optimizations and compete.
I could respond at greater length, but I think the substantial flaws of your point of view are exposed in this single paragraph. GDC and LDC aren't competitors; they are valuable collaborators.
Jun 26 2013
prev sibling parent reply Leandro Lucarella <luca llucax.com.ar> writes:
Joakim, on 26 June at 17:52, you wrote:
 On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella wrote:
Joakim, on 25 June at 23:37, you wrote:
I don't know the views of the key contributors, but I wonder if
they
would have such a knee-jerk reaction against any paid/closed
work.
Against being paid, no; against being closed, YES. Please don't even think about it. It was a hell of a ride getting D to be more open; we can't step back now.
I suggest you read my original post more carefully. I have not suggested closing up the entire D toolchain, as you seem to imply.
Well, I'm not. I'm sticking with what you said.
 I have suggested working on optimization patches in a closed-source
 manner and providing two versions of the D compiler: one that is
 faster, closed, and paid, with these optimization patches, another
 that is slower, open, and free, without the optimization patches.
I know, and that's what my e-mail was all about. I don't know why you got another impression. I even ended the e-mail saying that just offering a paid, better optimizer is a very bad business model too.
What we need is companies paying to people to improve the
compiler and toolchain. This is slowly starting to happen, in
Sociomantic we are already 2 people dedicating some time to
improve D as
part of our job (Don and me).
Thanks for the work that you and Don have done with Sociomantic. Why do you think more companies don't do this? My point is that if
Because D is a new language and isn't as polished as other programming languages. I think Sociomantic was a bit crazy to adopt it so early, really (my personal opinion). But it worked out well (we had to put in quite a lot of extra effort, but I guess the time it saves in daily usage paid for it).
 there were money coming in from a paid compiler, Walter could fund
 even more such work.
Well, I think with a paid compiler you remove one of the main reasons why early adopters can be tempted to use D: because it is free. What I'm sure of is that Sociomantic wouldn't have picked D if it had had to pay at the time, because it was a startup, and startups usually don't have much money at first.
We need more of this, and to get this, we need companies to start
using D, and to get this, we need professionalism (I agree 100% with
Andrei on this one). Is a bootstrap effort, and is not like
volunteers need more time to be professional, is just that you have
to want to make the jump.
I think this ignores the decades-long history we have with open source software by now. It is not merely "wanting to make the jump," most volunteers simply do not want to do painful tasks like writing documentation or cannot put as much time into development when no money is coming in. Simply saying "We have to try harder to be professional" seems naive to me.
Well, I guess we have very different views about the decades-long history of open source software, because I know tons of examples of applications that are free, without "commercial implementations" or "paid modules", and very few with a more commercial model. What's more, the few examples of "paid modules" I know of are quite recent, not decades old.
I think is way better to do less stuff but with higher quality,
nobody is asking people for more time, is just changing the focus
a bit, at least for some time. Again, this is only bootstrapping, and
is always hard and painful. We need to make the jump to make
companies comfortable using D, then things will start rolling by
themselves.
If I understand your story right, the volunteers need to put a lot of effort into "bootstrapping" the project to be more professional, companies will see this and jump in, then they fund development from then on out? It's possible, but is there any example you have in mind? The languages that go this completely FOSS route tend not to have as much adoption as those with closed implementations, like C++.
Are you kidding me? Python, Ruby, PHP, Perl. Do I have to say more than that? Do you really think C++ took off because there are commercial implementations? Do you think being a standardized language didn't help? Do you think the fact that there was a free implementation around that supported virtually any existing platform didn't help? Do you think the fact that it was (almost) compatible with C (which was born freeish, since back then software was freely shared between universities) didn't help?
First of all, your examples are completely wrong. The projects you
are mentioning are 100% free, with no closed components (except for
components done by third-party).
You are misstating what I said: I said "commercial," not "closed,"
You said closed. Not just in the previous e-mail; you just repeated it in this one:
 I have suggested working on optimization patches in a CLOSED-SOURCE
 manner and providing two versions of the D compiler: one that is
 faster, CLOSED, and paid, with these optimization patches, another
 that is slower, open, and free, without the optimization patches.
I think you are misstating yourself ;)
Your examples are just reinforcing what I say above. Linux is completely GPL, so it's not even merely open source. It's Free Software, meaning the license is more restrictive than, for example, Phobos's. This means it's harder for companies to adopt, and you can't possibly change it in a closed way if you want to distribute a binary.
And yet the linux kernel ships with many binary blobs, almost all the time. I don't know how they legally do it, considering the GPL, yet it is much more common to run a kernel with binary blobs than a purely FOSS version. The vast majority of linux installs are due to
That's because the binary blobs are not part of the kernel; they are firmware for certain (crappy) hardware that needs the firmware to be loaded in order to work. For convenience, the kernel distributes it so this (crappy) hardware can work out of the box. But it is produced by third parties, not kernel developers. This is a completely different example and gives NO MONEY AT ALL to linux development.
 Android and every single one has significant binary blobs and
 closed-source modifications to the Android source, which is allowed
 since most of Android is under the more liberal Apache license, with
 only the linux kernel under the GPL.
OK, so by Android you mean the entire platform, not the kernel.
 Again, I don't know how they get away with all the binary drivers in
 the kernel, perhaps that is a grey area with the GPL.  For example,
 even the most open source Android devices, the Nexus devices sold
 directly by Google and running stock Android, have many binary
 blobs:
 
 https://developers.google.com/android/nexus/drivers
Again, this is not a model adopted to make money to improve Android; these are just drivers that third parties write so Android can run on their devices. See the difference? It's not like the Android developers said: "we need money to be able to code Android more professionally, so we are going to write closed source drivers and sell them!". The companies making the closed source drivers are the hardware vendors.
 Other than Android, linux is really only popular on servers, where
 you can "change it in a closed way" because you are not
 "distributing a binary."  Google takes advantage of this to run
 linux on a million servers powering their search engine, but does
 not release the proprietary patches for their linux kernel.
They do contribute to the mainline kernel, though (not as much as they should). But then I think you are digressing a little. You were saying D should offer an improved, closed source, paid optimizer to raise money to improve the quality of the compiler. How does this Google example apply to that?
 So if one looks at linux in any detail, hybrid models are more the
 norm than the exception, even with the GPL. :)
Google taking advantage of the flaws of GPL 2.0 doesn't mean the Linux kernel follows a hybrid closed/open source model. What you are saying doesn't make any sense; you are mixing everything together. Seriously. The Linux kernel has NO hybrid model; it is 100% open source. Then there are companies using it, and when they are forced, or it is convenient for them, they contribute to Linux. And this is what I'm suggesting is best for D too. We have to either force companies to contribute or make it convenient for them to do so.
Same for C++, which is not a project, is a standards, but the most
successful and widespread compiler, GCC, not only is free, is the
battle horse of free software, of the GNU project and created by the
most extremist free software advocate ever.
D is not just a project but a standard also.
No. A standard is something that was standardized by a standards committee which, ideally, has some credentials to do so. C++ is standardized by ISO. I guess Walter and Andrei can give you more details, since I think they were both involved in the standardization of C++. D is a language specification and a reference implementation, which is not the same.
 I wouldn't say gcc is the "most successful and widespread compiler,"
 that's probably Microsoft's compiler, since Windows market share is
 still much more than linux.
Are you counting devices or just personal computing? You know all Android phones don't use Microsoft's compiler; you know all your gadgets don't use Microsoft's compiler; you know your car's software was not compiled with Microsoft's compiler, and the same goes for your TV and so much other stuff.
 But yes, gcc is a very popular open-source implementation: I didn't
 say there wouldn't be an open-source D compiler also.  But I don't
 think C++ would be where it is today if only open-source
 implementations like gcc supported it, which is the case today with D.
And how do you explain the success of languages that don't have (at least major) commercial implementations, like Perl, PHP, Python or Ruby? Or even Go? I think Go is much more suitable for companies right now than D, because in Go they put a lot of emphasis on the toolchain and on polishing the tools. You might say "Go was done by Google", and that's true. But there are NO COMMERCIAL implementations. Only free ones. And they are tempting for companies because Go development is professional (thanks to Google's support, of course). The thing is, again, we need companies' support, not commercial implementations. Well, commercial implementations are welcome, though; what we certainly don't need is the reference implementation being (partly) closed.
Android might be the only valid case (but I'm not really familiar with the Android model), but the kernel, since it is based on Linux, has to have its source code released. Maybe the drivers are closed source.
As I said earlier, most devices' drivers are almost always closed and the non-GPL parts of Android, which are the majority, are usually customized and the source is usually not released, because most of Android is Apache-licensed. I think this closed option is a key reason for the success of Android. Hell, the hardware vendors would never have adopted Android if not for this, as Google well knew.
Again, I think this is a bad example, because those closed source elements are not making money to support Android developers. It's what hardware vendors need to do to make their phones work with Android, and they don't want to release the code of their drivers.
You are missing more closely related projects, like Python, Haskell, Ruby, Perl, and probably 90% of the newish programming languages, which are all 100% open source. And very successful, I might say. The key is always breaking into the corporate ground and making those corporations contribute.
I believe all of these projects have commercial implementations, with the possible exception of Haskell.
Name them please. I would like to know them.
 Still, all of them combined have much less market share than C++
[citation needed] Not that they are extremely reliable, but sites measuring language popularity say Python+PHP+Perl+Ruby have more share than C++. TIOBE says: P+P+P+R=13.922 vs C++=8.819. The Transparent Language Popularity Index says: P+P+P+R=11.354 vs C++=7.544.

Also, C++ has a 10-year advantage, and 10 REALLY good years, when there were almost no alternatives, so I'm guessing there is a lot of legacy C++ code. It would be good to have numbers on the popularity of languages for NEW projects. The 4 most popular languages are old C/C-derived ones. Java is the only exception, but it had a massive marketing campaign among corporations, and even though it is now free, it started out closed.
 possibly because they use the weaker consulting/support commercial
 model most of the time.  One of the main reasons C++ is much more
 popular is that it has very high-performance closed implementations,
 do you disagree?
Yes. I think Java is the example you have in mind, not C++. For the reasons I explain above. I think C++ was popular because it had a high-performance open source implementation. 30 years ago.
 I'm suggesting D will need something similar to get as popular.
And I'm trying to prove you wrong: what D needs to be popular is to get professional without sacrificing openness. It's because this model is dying that one of the most popular languages switched from a closed source model to an open source one (Java).
There are valid examples of project using hybrid models but they are
usually software as a service models, not very applicable to
a compiler/language, like Wordpress, or other web applications.
Other valid examples are MySQL, or QT I think used an hybrid model at
least once. Lots of them died and were resurrected as 100% free
projects, like StarOffice -> OpenOffice -> LibreOffice.
There are all kinds of hybrid models out there, some would work for compilers also. I think it's instructive that you are listing some of the largest and most successful, mostly-OSS projects in this list. :)
As failed or counter examples :)
And finally making the *optimizer* (or some optimizations) closed
will be hardly a good business, being that there are 2 other backends
out there that usually kicks DMD backend ass already, so people
needing more speed will probably just switch to gdc or ldc.
Let me turn this argument around on you: if there is always competition from ldc and gdc, why are you so scared of another option of a slightly-closed, paid compiler?
I'm not scared. I'm against it for the reference implementation, which went through a lot of pain to become more open. DMD started as closed source. There is a big difference. I'd be more than happy to see a closed source commercial implementation that is not based on the open source code contributed by the community, which is what you actually suggested.
 If it's not "a good business," it will fail and go away.  I think it
 would be very successful.
Start your own company and make a D compiler then! I wish you very good luck! :)
As in breaking into the commercial world? Then agreed. If you imply
commercial == closing some parts of the source, then I think you are
WAY OFF.
OK, so it looks like you are fine with commercial models that keep all the source open, but not with those that close _any_ of the source.
Again, I think you are mixing a lot of stuff together. If you're speaking about some random company, I'm fine with them doing whatever they like: free, closed, open, whatever. If we are talking about the reference implementation of D, which is now mostly open source and free software, then yes, I'm against closing sources, and no, I'm not against commercial models and earning money to pay contributors.
 The problem is that your favored consulting or support models are
 much weaker business models than a product model, which is much of
 the reason why Microsoft still makes almost two orders of magnitude
 more revenue with their software products than Red Hat makes with
 their consulting/support model.
So now we want D to be an enterprise whose goal is to make as much money as possible? Yes, I favor slightly less lucrative models in exchange for having an open source reference implementation.
 I am suggesting a unique hybrid product model because I think it
 will bring in the most money for the least discomfort.  That ratio
 is one that D developers often talk about optimizing in technical
 terms, I'm suggesting the same in business terms. :)
Well, we disagree for both ethical and business reasons. Your only example of a commercial compiler making money is Microsoft. You know, we are not Microsoft; we don't have the market share Microsoft has, and we can't impose our products like they can. I think selling a slightly more optimized DMD would be a huge failure, for the reasons I already gave. I think it would be an even better model to offer a more polished toolchain instead of a slightly faster compiler; I don't think speed is an issue for anyone right now. Maybe Walter can offer a meaningful opinion about this, since he has been selling compilers for quite some time now.
It is amazing how far D has gotten with no business model: money
certainly isn't everything.  But it is probably impossible to get to
a million users or offer professionalism without commercial
implementations.
Yeah, right, probably Python and Ruby have only 5k users... This argument is BS.
First off, they both have commercial implementations. Second, they
I've been looking for commercial implementations of Ruby and Python. Hard. I only found one for Ruby, RubyMotion, which is only for iOS and seems to be unsupported by the official Ruby interpreter (or any other free implementation), so it's not really relevant in this case. I couldn't find any commercial Python implementation. If you have some, please share, but the fact that I couldn't find any suggests that if they exist, they are not very popular, and thus not really helping Python or Ruby adoption.
 still only have a small fraction of the share as C++: part of this
 is probably because they don't have as many closed, performant
 implementations as C++ does.
Again, I think your reasoning behind why C++ is more popular ("C++ has closed implementations -> C++ is more popular because it has closed implementations") is way too simplistic (and wrong). I'm also not convinced the closed implementations are more "performant" than the open ones. MS Visual C++, made by the company that makes loads of money, seems to be crap, judging from the benchmarks I found: http://www.g-truc.net/post-0372.html

ICC is supposed to be one of the fastest compilers. If you look at their own benchmarks, they say they are better than GCC (at least for some integer and floating point benchmarks), but I don't trust people doing their own benchmarks very much: http://software.intel.com/en-us/intel-composer-xe (go to the benchmark tab). A question on Stack Overflow seems to indicate that the difference isn't that clear: http://stackoverflow.com/questions/1733627/anyone-here-has-benchmarked-intel-c-compiler-and-gcc

Also, there is this presentation from MySQL comparing ICC and GCC: http://www.mysqlperformanceblog.com/files/presentations/LinuxWorld2004-Intel.pdf In this case they see an important improvement using ICC, but the GCC version used is quite old; I wonder how they compare now.

It's strange that there aren't really many benchmarks comparing compilers out there. Another indication that speed is not much of a selling point for C++ compilers. And from what I could find out, it's not clear there is a huge gain in performance when using closed source compilers. With Visual C++ it's even clearly worse, and according to you that's the most widespread C++ compiler.
 I realize this is a religious issue for some people and they cannot
 be convinced.  In a complex, emerging field like this, it is easy to
 claim that if OSS projects just try harder, they can succeed.  But
 after two decades, it has never happened, without stepping back and
 employing a hybrid model.
 
 I have examined the evidence and presented arguments for those who
 are willing to listen, as I'm just about pragmatically using
 whatever model works best.  I think recent history has shown that
 hybrid models work very well, possibly the best. :)
It's true, there are people who base their views on religious beliefs. Here I have presented you lots of proof, with numbers and examples, to justify what I say. You, on the contrary, have just made unfounded claims.

There is also our own experience. When DMD was closed source it received almost 0 contributions, of course. Each and every step taken to make the compiler more open yielded more and more contributions. If you want more proof and more numbers, you can take a look at this: http://www.llucax.com.ar/blog/blog/post/6cac01e1 This is even pre-git; with git, the number of contributions AND THE COMPILER QUALITY increased exponentially. Unfortunately I don't have any numbers on D adoption.

I also explained why I think your particular hybrid model is bad business. D's adoption and improvement advanced slowly because it WAS closed source, and it improved a lot with openness. Because of all this, I find it very hard to believe that the model you are proposing would be beneficial for D; from a reasoned standpoint there is no evidence that it could happen.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
PITUFO ENRIQUE ATEMORIZA CATAMARCA, AMPLIAREMOS -- Crónica TV
Jun 27 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
I agree with your post, I just want to make a couple of minor corrections.

On 6/27/2013 4:58 AM, Leandro Lucarella wrote:
 Do you really think C++ took off because there are commercial
 implementations?
I got into the C++ fray in the 1987-88 time frame. At the time, there was a great debate between C++ and Objective-C, and they were running neck-and-neck. I was casting about looking for a way to get a competitive edge with my C compiler, and investigated.

Objective-C was put out by Stepstone. They wanted royalties from anyone who implemented a clone, and kept a tight fist over the licensing.

C++ only existed in its AT&T cfront implementation. I wrote a letter to AT&T's lawyers, asking if I could create a C++ clone, and they phoned me up and were very nice. They said sure, and I wouldn't have to pay any license or royalties. So I went with C++. I don't really know if cfront was open source at the time or not, but I never looked at its source. I think cfront source came with a paid license for unix, but I'm not positive.

Anyhow, I wound up implementing the first native C++ compiler for the PC. Directly afterward, C++ took off like a rocket. Was it because of Zortech C++? I think there's strong evidence it was. A lot of programmers turned up their noses at the peasants programming on DOS, but that's where the action was in the 1980's, and ZTC++ had no realistic competitors.

You could also see the results in Usenet. Postings about C++ and O-C were neck-and-neck until ZTC++ came out, and then things tilted heavily in C++'s favor, and O-C disappeared into oblivion (later to be resurrected by Steve Jobs, but that's another tale).

ZTC++ was so successful that Borland and Microsoft (according to rumor) abandoned their efforts at making a proprietary OOP C, and went with C++. ZTC++ was closed source, as were Borland's Turbo C++ and Microsoft C++.
 Do you think being a standardized language didn't help?
C++ wasn't standardized until 1998, 10 years later. The 90's were pretty much the heyday of C++.
 Do you think the fact that there was a free implementation around that
 it supported virtually any existing platform didn't help? Do you think
 the fact was it was (almost) compatible with C (which was born freeish,
 since back then software was freely shared between universities) didn't
 help?
ZTC++ was cheap as dirt, and at the time people didn't mind paying for compilers. Those days are over, though. People have different expectations today.
 No. A standard is something that was standardized by a standard
 committee which, ideally, have some credits to do so. C++ is
 standardized by ISO. I guess Walter and Andrei can give you more
 details, since I think they both were involved in the standardization of
 C++.
I've attended a few ISO C++ meetings, but I never became a voting member, and have had pretty much zero influence over the direction C++ took after the 1980's.

The bottom line was the open source movement was not a very significant force in the 1980's when C++ gained traction. Open source really exploded around 2000, along with the internet. I wonder if open source perhaps needed the internet in order to be viable.
Jun 29 2013
next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
 The bottom line was the open source movement was not a very 
 significant force in the 1980's when C++ gained traction. Open 
 source really exploded around 2000, along with the internet. I 
 wonder if open source perhaps needed the internet in order to 
 be viable.
That's a very good point. It's before my time really, but if I understand the history right, the main way to get hold of copies of stuff like GCC in the early days was to pay for a set of disks with it on -- and there was no infrastructure for easily sharing changes. So neither the free-as-in-beer nor the free-as-in-freedom advantages were as readily apparent or effective as they are today.
Jun 29 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 5:08 AM, Joseph Rushton Wakeling wrote:
 On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
 The bottom line was the open source movement was not a very significant force
 in the 1980's when C++ gained traction. Open source really exploded around
 2000, along with the internet. I wonder if open source perhaps needed the
 internet in order to be viable.
That's a very good point. It's before my time really, but if I understand the history right, the main way to get hold of copies of stuff like GCC in the early days was to pay for a set of disks with it on -- and there was no infrastructure for easily sharing changes. So neither the free-as-in-beer nor the free-as-in-freedom advantages were as readily apparent or effective as they are today.
True, distribution was mainly by physical mail. There was some via BBS's and Usenet, but these were severely limited by bandwidth. I'd receive bug reports by fax, paper listings, and mailed floppies.
Jun 29 2013
parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Sunday, 30 June 2013 at 03:29:06 UTC, Walter Bright wrote:
 On 6/29/2013 5:08 AM, Joseph Rushton Wakeling wrote:
 True, distribution was mainly by physical mail. There was some 
 via BBS's and Usenet, but these were severely limited by 
 bandwidth.

 I'd receive bug reports by fax, paper listings, and mailed 
 floppies.
This was also the heyday of the BBC Micro in UK schools, and I remember well the shelves full of books of sample programs in BBC Basic. We had lots of fun typing them up, working out how they worked, and then twisting them to our more evil designs.
Jul 02 2013
prev sibling next sibling parent reply Leandro Lucarella <luca llucax.com.ar> writes:
Walter Bright, el 29 de June a las 01:37 me escribiste:
 The bottom line was the open source movement was not a very
 significant force in the 1980's when C++ gained traction. Open
 source really exploded around 2000, along with the internet. I
 wonder if open source perhaps needed the internet in order to be
 viable.
Yes, I think that's the whole point. Without the Internet, open source was extremely niche: without resources to distribute it, it was almost impossible for it to take off, and almost impossible to collaborate, which is the big difference open source has vs traditional commercial software.

Even though extremely interesting, I think the ZTC++ history from before open source existed or was really viable (the free software movement started in 1983, the FSF was founded in 1985, and the open source definition was made in 1998) is irrelevant for analyzing whether right now it would be valuable to make the reference compiler partly closed.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
EL PRIMER MONITO DEL MILENIO... -- Crónica TV
Jun 29 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 9:10 AM, Leandro Lucarella wrote:
 Even when extremely interesting, I think the ZTC++ history before open
 source existed or was really viable (the free software movement started
 in 1983, the FSF was founded in 1985 and the open source definition was
 made in 1998) is irrelevant in terms to analyze if right now it would be
 valuable to make the reference compiler partly closed.
Yes, I agree. Things are fundamentally different now.
Jun 29 2013
prev sibling next sibling parent reply "CJS" <Prometheus85 hotmail.com> writes:
On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
 I agree with your post, I just want to make a couple of minor 
 corrections.

 On 6/27/2013 4:58 AM, Leandro Lucarella wrote:
 Do you really think C++ took off because there are commercial
 implementations?
I got into the C++ fray in the 1987-88 time frame. At the time, there was a great debate between C++ and Objective-C, and they were running neck-and-neck. I was casting about looking for a way to get a competitive edge with my C compiler, and investigated.

Objective-C was put out by Stepstone. They wanted royalties from anyone who implemented a clone, and kept a tight fist over the licensing. C++ only existed in its AT&T cfront implementation. I wrote a letter to AT&T's lawyers, asking if I could create a C++ clone, and they phoned me up and were very nice. They said sure, and I wouldn't have to pay any license or royalties. So I went with C++. I don't really know if cfront was open source at the time or not, but I never looked at its source. I think cfront source came with a paid license for unix, but I'm not positive.

Anyhow, I wound up implementing the first native C++ compiler for the PC. Directly afterward, C++ took off like a rocket. Was it because of Zortech C++? I think there's strong evidence it was. A lot of programmers turned up their noses at the peasants programming on DOS, but that's where the action was in the 1980's, and ZTC++ had no realistic competitors.

You could also see the results in Usenet. Postings about C++ and O-C were neck-and-neck until ZTC++ came out, and then things tilted heavily in C++'s favor, and O-C disappeared into oblivion (later to be resurrected by Steve Jobs, but that's another tale). ZTC++ was so successful that Borland and Microsoft (according to rumor) abandoned their efforts at making a proprietary OOP C, and went with C++. ZTC++ was closed source, as were Borland's Turbo C++ and Microsoft C++.
 Do you think being a standardized language didn't help?
C++ wasn't standardized until 1998, 10 years later. The 90's were pretty much the heyday of C++.
 Do you think the fact that there was a free implementation 
 around that
 it supported virtually any existing platform didn't help? Do 
 you think
 the fact was it was (almost) compatible with C (which was born 
 freeish,
 since back then software was freely shared between 
 universities) didn't
 help?
ZTC++ was cheap as dirt, and at the time people didn't mind paying for compilers. Those days are over, though. People have different expectations today.
 No. A standard is something that was standardized by a standard
 committee which, ideally, have some credits to do so. C++ is
 standardized by ISO. I guess Walter and Andrei can give you 
 more
 details, since I think they both were involved in the 
 standardization of
 C++.
I've attended a few ISO C++ meetings, but I never became a voting member, and have had pretty much zero influence over the direction C++ took after the 1980's. The bottom line was the open source movement was not a very significant force in the 1980's when C++ gained traction. Open source really exploded around 2000, along with the internet. I wonder if open source perhaps needed the internet in order to be viable.
Wow. That's interesting reading. Thanks for the history lesson!
Jun 29 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 7:56 PM, CJS wrote:
 Wow. That's interesting reading. Thanks for the history lesson!
There are other versions of this history, none of which mention the role ZTC++ played in C++ attaining critical mass, so I like to repeat my version now and then :-)
Jun 29 2013
prev sibling parent reply "Joakim" <joakim airpost.net> writes:
I was wondering if Walter or Andrei would respond to this thread.

On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
 I agree with your post, I just want to make a couple of minor 
 corrections.
What exactly do you agree with Luca about, considering all your "minor corrections" basically demolish all his points? ;) Your C++ history was really interesting, as I first used it in '97, right when it was peaking.
 ZTC++ was cheap as dirt, and at the time people didn't mind 
 paying for compilers. Those days are over, though. People have 
 different expectations today.
There's no doubt that developers have been spoiled by all the free and shareware tools out there these days.

What do you think of my idea of segmenting the market though? Keep providing a free-as-in-beer dmd, like you are now, for the people who want it, while Remedy and others who want performance pay for a dmd that puts out more performant code, with those improvements slowly merged back into the free dmd over time.

If you are not interested in selling a paid compiler yourself, I've noted that there's nothing stopping someone else from doing this. They can take the dmd frontend under the Artistic license, compile it with the BSD-licensed llvm backend and boost-licensed druntime and phobos, and sell a paid compiler, without any permission from you or any other D contributors.

You could not do anything legally to stop this, as the permissive OSS licenses allow it. However, as one of the main authors of this code, do you have any preference for or against someone taking your code to do this?
Jun 29 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/29/2013 11:39 PM, Joakim wrote:
 What do you think of my idea of segmenting the market though? Keep providing a
 free-as-in-beer dmd, like you are now, for the people who want it, while Remedy
 and others who want performance pay for a dmd that puts out more performant
 code, with those improvements slowly merged back into the free dmd over time.
It won't work. Those days are gone.
 If you are not interested in selling a paid compiler yourself, I've noted that
 there's nothing stopping someone else from doing this.  They can take the dmd
 frontend under the Artistic license, compile it with the BSD-licensed llvm
 backend and boost-licensed druntime and phobos, and sell a paid compiler,
 without any permission from you or any other D contributors.

 You could not do anything legally to stop this, as the permissive OSS licenses
 allow it.  However, as one of the main authors of this code, do you have any
 preference for or against someone taking your code to do this?
Part of issuing it under a permissive license is I won't try to block someone from doing whatever they want to that is allowed by the license.
Jun 30 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Sunday, 30 June 2013 at 09:34:14 UTC, Walter Bright wrote:
 On 6/29/2013 11:39 PM, Joakim wrote:
 What do you think of my idea of segmenting the market though? 
 Keep providing a
 free-as-in-beer dmd, like you are now, for the people who want 
 it, while Remedy
 and others who want performance pay for a dmd that puts out 
 more performant
 code, with those improvements slowly merged back into the free 
 dmd over time.
It won't work. Those days are gone.
I disagree. We'll find out.
 If you are not interested in selling a paid compiler yourself, 
 I've noted that
 there's nothing stopping someone else from doing this.  They 
 can take the dmd
 frontend under the Artistic license, compile it with the 
 BSD-licensed llvm
 backend and boost-licensed druntime and phobos, and sell a 
 paid compiler,
 without any permission from you or any other D contributors.

 You could not do anything legally to stop this, as the 
 permissive OSS licenses
 allow it.  However, as one of the main authors of this code, 
 do you have any
 preference for or against someone taking your code to do this?
Part of issuing it under a permissive license is I won't try to block someone from doing whatever they want to that is allowed by the license.
I understand, but that wasn't exactly my question. I wondered if you have any opinion on such code reuse, if someone takes your code and closes it, even if you wouldn't try to block it because you have already released it under a permissive license. Some wouldn't try to close the source if you expressed a preference that it not be done; I have no such compunction, if the license allows closing source, but others might. I'm just wondering if you have an opinion or preference on your source being closed up.

Thanks for all the great work you have done on D and the dmd compiler. As much as I'd like to see a commercial implementation, it is amazing how much you have given away for free. :)
Jun 30 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/30/2013 2:50 AM, Joakim wrote:
 I wondered if you have any opinion on such code reuse, if someone takes your
 code and closes it, even if you wouldn't try to block it because you have
 already released it under a permissive license.
No, I don't have an opinion on it, other than that I'd rather they didn't try to create an incompatible language and still call it "D".
Jun 30 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Sunday, 30 June 2013 at 19:24:54 UTC, Walter Bright wrote:
 On 6/30/2013 2:50 AM, Joakim wrote:
 I wondered if you have any opinion on such code reuse, if 
 someone takes your
 code and closes it, even if you wouldn't try to block it 
 because you have
 already released it under a permissive license.
No, I don't have an opinion on it, other than that I'd rather they didn't try to create an incompatible language and still call it "D".
OK, glad to hear that you wouldn't be against it. You'd be surprised how many who use permissive licenses still go nuts when you propose to do exactly what the license allows, i.e. close up parts of the source. Since you have been so gracious to use such permissive licenses for almost all of D, I'm sure someone will try the closed/paid experiment someday and we'll see which of us is right. :)
Jun 30 2013
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Sunday, 30 June 2013 at 19:45:06 UTC, Joakim wrote:
 OK, glad to hear that you wouldn't be against it.  You'd be 
 surprised how many who use permissive licenses still go nuts 
 when you propose to do exactly what the license allows, ie 
 close up parts of the source.
Because people don't just care about the strict legal constraints, but also about the social compact around software.

Often people choose permissive licenses because they want to ensure other free software authors can use their software without encountering the licensing incompatibilities that can result from the various forms of copyleft. Closing up their software is rightly seen as an abuse of their goodwill.

In other cases there may be a broad community consensus that builds up around a piece of software, that this work should be shared and contributed to as a common good (e.g. X.org). Attempts to close it up violate those social norms and are rightly seen as an attack on that community and the valuable commons they have cultivated.

Community anger against legal but antisocial behaviour is hardly limited to software, and is a fairly important mechanism for ensuring that people behave well towards one another.
 Since you have been so gracious to use such permissive licenses 
 for almost all of D, I'm sure someone will try the closed/paid 
 experiment someday and see if which of us is right. :)
Good luck with that :-) By the way, you mentioned a project of your own where you employed the short-term open core model you describe. Want to tell us more about that? Regardless of differences of opinion, it's always good to hear about someone's particular experience with a project.
Jul 01 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Monday, 1 July 2013 at 10:15:34 UTC, Joseph Rushton Wakeling 
wrote:
 On Sunday, 30 June 2013 at 19:45:06 UTC, Joakim wrote:
 OK, glad to hear that you wouldn't be against it.  You'd be 
 surprised how many who use permissive licenses still go nuts 
 when you propose to do exactly what the license allows, ie 
 close up parts of the source.
Because people don't just care about the strict legal constraints, but also about the social compact around software. Often people choose permissive licenses because they want to ensure other free software authors can use their software without encountering the licensing incompatibilities that can result from the various forms of copyleft. Closing up their software is rightly seen as an abuse of their goodwill.
Then they should choose a mixed license like the Mozilla Public License or CDDL, which keeps OSS files open while allowing linking with closed source files within the same application. If they instead chose a license that allows closing all source, one can only assume they're okay with it. In any case, I couldn't care less if they're okay with it or not; I was just surprised that they chose the BSD license and then were mad when someone was thinking about closing it up.
 In other cases there may be a broad community consensus that 
 builds up around a piece of software, that this work should be 
 shared and contributed to as a common good (e.g. X.org).  
 Attempts to close it up violate those social norms and are 
 rightly seen as an attack on that community and the valuable 
 commons they have cultivated.
There's no doubt that even if they chose a permissive license like the MIT or BSD license, these communities work primarily with OSS code and tend to prefer that code be open. I can understand if they then tend to rebuff attempts to keep source from them, purely as a social phenomenon, however irrational it may be. That's why I asked Walter if he had a similar opinion, but he didn't care.

I still think it's ridiculous to put your code under an extremely permissive license and then get mad when people take you up on it, particularly since they never publicly broadcast that they want everything to be open. It is only after you talk to them that you realize that the BSD gang are often as much freetards as the GPL gang, just in their own special way. ;)
 Community anger against legal but antisocial behaviour is 
 hardly limited to software, and is a fairly important mechanism 
 for ensuring that people behave well towards one another.
I wouldn't call closing source that they legally allowed to be closed antisocial. I'd call their contradictory, angry response to what their license permits antisocial. :)
 Since you have been so gracious to use such permissive 
 licenses for almost all of D, I'm sure someone will try the 
 closed/paid experiment someday and see if which of us is 
 right. :)
Good luck with that :-) By the way, you mentioned a project of your own where you employed the short-term open core model you describe. Want to tell us more about that? Regardless of differences of opinion, it's always good to hear about someone's particular experience with a project.
I wrote up an article a couple years back talking about the new hybrid model I used; it's up on Phoronix and my project is mentioned there: http://www.phoronix.com/scan.php?page=article&item=sprewell_licensing

Note that this article was written when Android, which uses a similar hybrid model, had less than 10% of the almost billion users it has today, and I was thinking up these ideas years before, long before I'd heard of Android. My project was a small one, so it couldn't be a resounding proof of my time-limited version of the hybrid model, but it worked for its purpose and I'm fairly certain it will be the dominant model someday. :)
Jul 01 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 10:45 AM, Joakim wrote:
 Then they should choose a mixed license like the Mozilla Public License or
CDDL,
 which keeps OSS files open while allowing linking with closed source files
 within the same application.  If they instead chose a license that allows
 closing all source, one can only assume they're okay with it.  In any case, I
 could care less if they're okay with it or not, I was just surprised that they
 chose the BSD license and then were mad when someone was thinking about closing
 it up.
I should point out that the Boost license was chosen for Phobos specifically because it allowed people to copy it and use it for whatever purpose, including making closed source versions, adapting them for use with Go :-), whatever. It would be pretty silly of us to complain about that after the fact. People who don't want closed source versions should use GPL licenses.
Jul 01 2013
parent reply Brad Roberts <braddr puremagic.com> writes:
On 7/1/13 11:42 AM, Walter Bright wrote:
 On 7/1/2013 10:45 AM, Joakim wrote:
 Then they should choose a mixed license like the Mozilla Public License or
CDDL,
 which keeps OSS files open while allowing linking with closed source files
 within the same application.  If they instead chose a license that allows
 closing all source, one can only assume they're okay with it.  In any case, I
 could care less if they're okay with it or not, I was just surprised that they
 chose the BSD license and then were mad when someone was thinking about closing
 it up.
I should point out that the Boost license was chosen for Phobos specifically because it allowed people to copy it and use it for whatever purpose, including making closed source versions, adapting them for use with Go :-), whatever.
Actually, Boost was specifically chosen because it didn't require attribution when redistributing. If BSD hadn't had that clause, we probably would be using it instead.
Jul 01 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 2:04 PM, Brad Roberts wrote:
 On 7/1/13 11:42 AM, Walter Bright wrote:
 On 7/1/2013 10:45 AM, Joakim wrote:
 Then they should choose a mixed license like the Mozilla Public License or
CDDL,
 which keeps OSS files open while allowing linking with closed source files
 within the same application.  If they instead chose a license that allows
 closing all source, one can only assume they're okay with it.  In any case, I
 could care less if they're okay with it or not, I was just surprised that they
 chose the BSD license and then were mad when someone was thinking about closing
 it up.
I should point out that the Boost license was chosen for Phobos specifically because it allowed people to copy it and use it for whatever purpose, including making closed source versions, adapting them for use with Go :-), whatever.
Actually, Boost was specifically chosen because it didn't require attribution when redistributing. If BSD hadn't had that clause we probably would be using it instead.
That was indeed another important reason for it. But we were well aware of and approved of the idea that people could take it and make closed source versions.
Jul 01 2013
parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Monday, 1 July 2013 at 21:20:39 UTC, Walter Bright wrote:
 On 7/1/2013 2:04 PM, Brad Roberts wrote:
 Actually, Boost was specifically chosen because it didn't 
 require attribution
 when redistributing. If BSD hadn't had that clause we probably 
 would be using it
 instead.
That was indeed another important reason for it. But we were well aware of and approved of the idea that people could take it and make closed source versions.
It was always clear (and logical) to me why the core libraries were permissively licensed, but the no-need-to-give-attribution-for-non-source-distribution feature was a subtlety I hadn't considered before.
Jul 02 2013
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
 I wouldn't call closing source that they legally allowed to be 
 closed antisocial.  I'd call their contradictory, angry 
 response to what their license permits antisocial. :)
Just because you're doing something legal doesn't mean you're not being antisocial. It's a pretty psychopathic attitude to conflate legality and morality; it's effectively saying "I have the moral right to do whatever I can get away with".
Jul 01 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/1/2013 2:29 PM, John Colvin wrote:
 On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
 I wouldn't call closing source that they legally allowed to be closed
 antisocial.  I'd call their contradictory, angry response to what their
 license permits antisocial. :)
Just because you're doing something legal doesn't mean you're not being antisocial. It's a pretty psychopathic attitude to conflate legality and morality, it's effectively saying "I have the moral right to do whatever I can get away with"
(A nit: it is not illegal to break a contract's terms. You're quite right, though, that morality and legality are very different things, and are too often in contradiction.) Anyhow, the reason we have written contracts is because people often (and honestly) misremember or misinterpret verbally agreed upon terms. Having a written record is a lot better when the two parties find themselves in a dispute. There are many licenses to choose from, and we chose Boost knowing full well what was said (and not said) in it about closed source usage.
Jul 01 2013
prev sibling parent reply "Joakim" <joakim airpost.net> writes:
On Monday, 1 July 2013 at 21:29:21 UTC, John Colvin wrote:
 On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
 I wouldn't call closing source that they legally allowed to be 
 closed antisocial.  I'd call their contradictory, angry 
 response to what their license permits antisocial. :)
Just because you're doing something legal doesn't mean you're not being antisocial.
Read my previous post. Of course it's possible for a license to technically allow something but for the authors to disapprove of it; that doesn't make it antisocial to simply do something they disapprove of.

But, as I said earlier, the BSD crowd does not publicly broadcast that they disapprove of closing source. In fact, they will occasionally link to press releases about contributions back from corporations who closed the source. For people using the BSD license to then get mad when yet another person comes along to close source is the only "antisocial" behavior I'm seeing here.

It'd be one thing if they publicly said that while the BSD license allows closing source, they're against it. Feel free to provide such a public statement; you won't find it. It's only after you talk to them privately about closing source that you realize how many of them are against it.

As I've said repeatedly, I don't much care that their behavior is so "antisocial," :) as long as it's legal to close source. But it is pretty funny to cast that tag on somebody else, who is simply doing what their license allows and what their press releases trumpet.
 It's a pretty psychopathic attitude to conflate legality and 
 morality, it's effectively saying "I have the moral right to do 
 whatever I can get away with"
On the contrary, it's a pretty psychopathic attitude to make such claims about morality when 1. nobody was talking about morality 2. the BSD crowd doesn't publicly talk about their problems with closing source either, whether they think it's immoral or antisocial or whatever.
Jul 01 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 2 July 2013 at 05:21:35 UTC, Joakim wrote:
 On Monday, 1 July 2013 at 21:29:21 UTC, John Colvin wrote:
 On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
 I wouldn't call closing source that they legally allowed to 
 be closed antisocial.  I'd call their contradictory, angry 
 response to what their license permits antisocial. :)
Just because you're doing something legal doesn't mean you're not being antisocial.
Read my previous post. Of course it's possible for a license to technically allow something but for the authors to disapprove of it, not that its antisocial to simply do something they disapprove of. But, as I said earlier, the BSD crowd does not publicly broadcast that they disapprove of closing source. In fact, they will occasionally link to press releases about contributions back from corporations who closed the source. For people using the BSD license to then get mad when yet another person comes along to close source is the only "antisocial" behavior I'm seeing here. It'd be one thing if they publicly said that while the BSD license allows closing source, they're against it. Feel free to provide such a public statement, you won't find it. It's only after you talk to them privately about closing source that you realize how many of them are against it. As I've said repeatedly, I don't much care that their behavior is so "antisocial," :) as long as its legal to close source. But it is pretty funny to cast that tag on somebody else, who is simply doing what their license allows and what their press releases trumpet.
 It's a pretty psychopathic attitude to conflate legality and 
 morality, it's effectively saying "I have the moral right to 
 do whatever I can get away with"
On the contrary, it's a pretty psychopathic attitude to make such claims about morality when 1. nobody was talking about morality 2. the BSD crowd doesn't publicly talk about their problems with closing source either, whether they think it's immoral or antisocial or whatever.
This is all a bit moot as I was making a general point, not specifically related to BSD. However, in their case, I think it is perfectly fine that some don't like closed source personally, but as a group they decide to endorse it. A group where everyone is forced to agree on everything isn't an organisation, it's a cult.

I think what I'm really trying to say is this: a license is a description of what you will *allow*, not what you *want*. I personally like to take into account what people *want* me to do, not just what they will *allow* me to do.
Jul 02 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Tuesday, 2 July 2013 at 09:59:19 UTC, John Colvin wrote:
 This is all a bit moot as I was making a general point, not 
 specifically related to BSD. However, in their case, I think it 
 is perfectly fine that some don't like closed source 
 personally, but as a group they decide to endorse it. A group 
 where everyone is forced to agree on everything isn't an 
 organisation, it's a cult.
Of course there will be a wide variety of opinions within any community, but the point is that those who push such permissively-licensed software, yet privately dislike the closing of source and lash out at those who try to do it, are being silly.
 I think what I'm really trying to say is this:

 A license is a description of what you will *allow*, not what 
 you *want*.
 I personally like to take in to account what people *want* me 
 to do, not just what they will *allow* me to do.
You're really splitting hairs at this point. If you _allow_ almost anything, as most permissive licenses like the BSD or MIT license do, nobody is going to ask permission of the community for every possible thing they might do, to see who "wants" it, particularly since the community hasn't stated anything publicly. Since the community likely has a variety of opinions, as you yourself just admitted, such a poll of "wants" would likely be meaningless anyway.

Unless the particular community puts out a public statement of "wants" that most of them can get behind, which very few of them do, it is silly to talk about what they might "want" which isn't in the license. The license is essentially all that matters.
Jul 02 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 2 July 2013 at 14:40:42 UTC, Joakim wrote:
 You're really splitting hairs at this point.  If you _allow_ 
 almost anything, as most permissive licenses like the BSD or 
 MIT license do, nobody is going to then ask permission of the 
 community for every possible thing they might do, to see who 
 "wants" it, particularly since the community hasn't stated 
 anything publicly.  Since the community likely has a variety of 
 opinions, as you yourself just admitted, such a poll of "wants" 
 would likely be meaningless anyway.

 Unless the particular community puts out a public statement of 
 "wants" that most of them can get behind, which very few of 
 them do, it is silly to talk about what they might "want" which 
 isn't in the license.  The license is essentially all that 
 matters.
The difference between what people allow and what people want is much more significant than just "splitting hairs". I agree that there is often no coherent set of "wants" in a community, which makes it hard to consider them meaningfully. However, I do believe there's a level of common courtesy that should be honoured when using other people's work in a significant project, including at the very least making the authors aware that you will be doing so (anonymously, if secrecy is important).

I know many people will just take whatever they can get and give as little as they can, but that doesn't make it right. I suspect we will never see eye to eye on this. You are convinced that the letter of the licence is all that matters; I am not.
Jul 02 2013
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 July 2013 18:45, Joakim <joakim airpost.net> wrote:
 In other cases there may be a broad community consensus that builds up
 around a piece of software, that this work should be shared and contributed
 to as a common good (e.g. X.org).  Attempts to close it up violate those
 social norms and are rightly seen as an attack on that community and the
 valuable commons they have cultivated.
There's no doubt that even if they chose a permissive license like the MIT or BSD license, these communities work primarily with OSS code and tend to prefer that code be open. I can understand if they then tend to rebuff attempts to keep source from them, purely as a social phenomenon, however irrational it may be. That's why I asked Walter if he had a similar opinion, but he didn't care. I still think it's ridiculous to put your code under an extremely permissive license and then get mad when people take you up on it, particularly since they never publicly broadcast that they want everything to be open. It is only after you talk to them that you realize that the BSD gang are often as much freetards as the GPL gang, just in their own special way. ;)
To be 'retarded' is to be held back or hindered in the development or progress of an action or process. F/OSS comes with no such hindrance, unlike some other model that people falsely advertise as Everything Open and Free. All the Time!* -- Iain Buclaw *except whatever I am selling.
Jul 02 2013
prev sibling parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
 Then they should choose a mixed license like the Mozilla Public 
 License or CDDL, which keeps OSS files open while allowing 
 linking with closed source files within the same application.  
 If they instead chose a license that allows closing all source, 
 one can only assume they're okay with it.  In any case, I could 
 care less if they're okay with it or not, I was just surprised 
 that they chose the BSD license and then were mad when someone 
 was thinking about closing it up.
The trouble is, even very weak copyleft licenses like MPL and CDDL can result in licensing incompatibilities. Only by granting very permissive licensing terms can you guarantee that your software will be usable by the full range of free software alternatives. For what it's worth, I have also made the argument on many occasions that projects shouldn't pick permissive licenses unless they're happy to see their work turned into proprietary products. But if a developer releases software under a liberal license, saying "I'm doing this so that everyone can use it but please keep it free," I think they have a right to be pissed off when someone ignores their moral request.
 There's no doubt that even if they chose a permissive license 
 like the MIT or BSD license, these communities work primarily 
 with OSS code and tend to prefer that code be open.  I can 
 understand if they then tend to rebuff attempts to keep source 
 from them, purely as a social phenomenon, however irrational it 
 may be.  That's why I asked Walter if he had a similar opinion, 
 but he didn't care.
Yes, the conscious choice of an extremely permissive license for druntime and Phobos is a different situation. It's completely right in this case to facilitate all forms of development and re-use, under all licensing scenarios.
 I still think it's ridiculous to put your code under an 
 extremely permissive license and then get mad when people take 
 you up on it, particularly since they never publicly broadcast 
 that they want everything to be open.  It is only after you 
 talk to them that you realize that the BSD gang are often as 
 much freetards as the GPL gang, just in their own special way. 
 ;)
It's a shame that you feel the need to resort to name-calling because someone has come to a considered moral or strategic position that's different from yours. It also doesn't really help your position -- you're better off just getting on with developing software using your strategy and showing how it serves free software in the long run.
 I wouldn't call closing source that they legally allowed to be 
 closed antisocial.  I'd call their contradictory, angry 
 response to what their license permits antisocial. :)
Personally speaking, I find there are a lot of things in life which I prefer to be legally permitted, but still nevertheless consider antisocial -- and I don't think there need be a contradiction there.
 http://www.phoronix.com/scan.php?page=article&item=sprewell_licensing

 Note that this article was written when Android had less than 
 10% of the almost billion users it has today, by using a 
 similar hybrid model, and I was thinking up these ideas years 
 before, long before I'd heard of Android.

 My project was a small one, so it couldn't be a resounding 
 proof of my time-limited version of the hybrid model, but it 
 worked for its purpose and I'm fairly certain it will be the 
 dominant model someday. :)
Thanks for the interesting read. I think you have a point inasmuch as this is a model that clearly works very well from a business perspective where apps are concerned -- and if you're going to have open core, I'd rather it be one where the closed parts are guaranteed to eventually be opened up. Of course, this is not the same as moral approval :-)

What I'd say, though, is that what works for apps isn't going to be what works for languages or their core development tools. Most apps seem to be single- or small-team developments, not community projects; they target niche requirements; and ultimately they're delivered to an audience that's used to paying for software. With a language, on the other hand, your overwhelming goal is to grow the user community, and (unless you're Microsoft or Apple, who can dictate terms to software developers) the best way by far to do that is to secure the language's quality while keeping the development tools available free of charge. You'll get more mileage out of monetising other things -- e.g. bug-fixing services, support, consultancy -- than out of restricting access to the tools that enable people to use the language effectively.

You also have to consider the user perspective. If I were offered a new language whose tools were delivered on open-core terms, I would almost certainly refuse -- I'd feel unable to trust that I wouldn't at some point find all future releases locked up, leaving me with the Hobson's choice of either porting all my software to another language or coughing up the licensing fee.
Jul 02 2013
prev sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Tuesday, 25 June 2013 at 21:38:01 UTC, Joakim wrote:
 I don't know the views of the key contributors, but I wonder if 
 they would have such a knee-jerk reaction against any 
 paid/closed work.  The current situation would seem much more 
 of a kick in the teeth to me: spending time trying to be 
 "professional," as Andrei asks, and producing a viable, stable 
 product used by a million developers, corporate users included, 
 but never receiving any compensation for this great tool you've 
 poured effort into, that your users are presumably often making 
 money with.
Obviously I can't speak for the core developers, or even for the community as a group. But I can make the following observations.

D's success as a language is _entirely_ down to volunteer effort -- as Walter highlighted in his keynote. Volunteer effort is responsible for the development of the compiler frontend, the runtime, and the standard library. Volunteers have put in the hard work of porting these to other compiler backends. Volunteers have made and reviewed language improvement proposals, and have been vigilant in reporting and resolving bugs. Volunteers also contribute to vibrant discussions on these very forums, providing support and advice to those in need of help. And many of these volunteers have been doing so over the course of years.

Now, in trying to drive more funding and professional effort towards D development, do you _really_ think that the right thing to do is to turn around to all those people and say: "Hey guys, after all the work you put in to make D so great, now we're going to build on that, but you'll have to wait 6 months for the extra goodies unless you pay"?

How do you think that will affect the motivation of all those volunteers -- the code contributors, the bug reporters, the forum participants? What could you say to the maintainers of GDC or LDC, after all they've done to enable people to use the language, that could justify denying their compilers up-to-date access to the latest features? How would it affect the atmosphere of discussion about language development -- compared to the current friendly, collegial approach?

... and -- how do you think it would affect uptake, if it was announced that access to the best features would come at a price? There are orders of magnitude of difference between uptake of free and non-free services no matter what the domain, and software is one where free (as in freedom and beer) is much more strongly desired than in many other fields.
 I understand that such a shift from being mostly OSS to having 
 some closed components can be tricky, but that depends on the 
 particular community.  I don't think any OSS project has ever 
 become popular without having some sort of commercial model 
 attached to it.  C++ would be nowhere without commercial 
 compilers; linux would be unheard of without IBM and Red Hat 
 figuring out a consulting/support model around it; and Android 
 would not have put the linux kernel on hundreds of millions of 
 computing devices without the hybrid model that Google 
 employed, where they provide an open source core, paid for 
 through increased ad revenue from Android devices, and the 
 hardware vendors provide closed hardware drivers and UI skins 
 on top of the OSS core.
There's a big difference between introducing commercial models with a greater degree of paid professional work, and introducing closed components. Red Hat is a good example of that -- I can get, legally and for free, a fully functional copy of Red Hat Enterprise Linux without paying a penny. It's just missing the Red Hat name and logos and the support contract.

In another email you mentioned Microsoft's revenues from Visual Studio, but -- leaving aside for a moment all the moral and strategic concerns of closing things up -- Visual Studio enjoys that success because it's a virtually essential tool for professional development on Microsoft Windows, which still has an effective monopoly on modern desktop computing. Microsoft has the market presence to be able to dictate terms like that -- no one else does. Certainly no upcoming programming language could operate like that!
 This talk prominently mentioned scaling to a million users and 
 being professional: going commercial is the only way to get 
 there.
It's more likely that closing off parts of the offering would limit that uptake, for reasons already given. On the other hand, with more and more organizations coming to use and rely on D, there are plenty of other ways professional development could be brought in. Just to take one example: companies with a mission-critical interest in D have a corresponding interest in their developers giving time to the language itself. How many such companies do you think there need to be before D has a stable of skilled professional developers being paid explicitly to maintain and develop the language?

Your citation of the Linux kernel is relevant here. Do you think that Linux would have had all that diverse success if parts of it had been closed up and sold at a premium? D's status as a purely community-run project is an asset compared to corporate-backed languages, not a liability.
Jun 26 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
 Now, in trying to drive more funding and professional effort 
 towards D development, do you _really_ think that the right 
 thing to do is to turn around to all those people and say: "Hey 
 guys, after all the work you put in to make D so great, now 
 we're going to build on that, but you'll have to wait 6 months 
 for the extra goodies unless you pay"?
Yes, I think it is the right thing to do. I am only talking about closing off the optimization patches; all bugfixes and feature patches would likely be applied to both the free and paid compilers, certainly bugfixes. So not _all_ the "extra goodies" have to be paid for, and even the optimization patches are eventually open-sourced.
 How do you think that will affect the motivation of all those 
 volunteers -- the code contributors, the bug reporters, the 
 forum participants?  What could you say to the maintainers of 
 GDC or LDC, after all they've done to enable people to use the 
 language, that could justify denying their compilers up-to-date 
 access to the latest features?  How would it affect the 
 atmosphere of discussion about language development -- compared 
 to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they probably differ in their reasons for contributing. If D becomes much more popular because the quality of implementation goes up, and their D skills and contributions become much more prized, I suspect they will be very happy. :) If they are religious zealots about having only a single, completely open-source implementation -- damn the superior results from hybrid models -- perhaps they will be unhappy. I suspect the former far outnumber the latter, since D doesn't employ the purely-GPL approach the zealots usually insist on. We could poll them and find out.

You keep talking about closed patches as though they can only piss off the volunteers. But if I'm right and a hybrid model would lead to a lot more funding and adoption of D, their volunteer work places them in an ideal position, where their D skills and contributions are much more valued and they can then probably do paid work in D. I suspect most will end up happier.

I have not proposed denying GDC and LDC "access to the latest features," only optimization patches. LDC could do the same as dmd and provide a closed, paid version with the optimization patches, which it could license from dmd. GDC couldn't do this, of course, but that is the result of their purist GPL-only approach.

Why do you think a hybrid model would materially "affect the atmosphere of discussion about language development?" Do you believe that the people who work on hybrid projects like Android, probably the most widely-used, majority-OSS project in the world, are not able to collaborate effectively?
 ... and -- how do you think it would affect uptake, if it was 
 announced that access to the best features would come at a 
 price?
Please stop distorting my argument. There are many different types of patches added to the dmd frontend every day: bugfixes, features, optimizations, etc. I have only proposed closing the optimization patches.

However, I do think some features could also be closed this way. For example, Walter has added features like SIMD modifications only for Remedy. He could make this type of feature closed initially, available only in the paid compiler. As the feature matures and is paid for, it would eventually be merged into the free compiler. This is usually not a problem, as those who want that kind of performance usually make a lot of money off of it and are happy to pay for it: that is all I'm proposing with my optimization-patches idea as well.

As for how it would "affect uptake," I think most people know that free products are usually less capable than paid products. The people who don't need the capability use Visual Studio Express; those who do pay for the full version of Visual Studio. There's no reason D couldn't employ a similar segmented model.
  There are orders of magnitude of difference between uptake of 
 free and non-free services no matter what the domain, and 
 software is one where free (as in freedom and beer) is much 
 more strongly desired than in many other fields.
Yes, you're right, non-free services have orders of magnitude more uptake. :p I think there are advantages to both closed and open source, which is why hybrid open/closed source models are currently very popular. Open source allows more collaboration from outside, while closed source allows for _much_ more funding from paying customers. I see no reason to dogmatically insist that these source models not be mixed.
 There's a big difference between introducing commercial models 
 with a greater degree of paid professional work, and 
 introducing closed components.  Red Hat is a good example of 
 that -- I can get, legally and for free, a fully functional 
 copy of Red Hat Enterprise Linux without paying a penny.  It's 
 just missing the Red Hat name and logos and the support 
 contract.
Yes, it's a consulting or support model. These don't scale as well as a product model with closed components. If all closed components are guaranteed to be open sourced after some time limit, as I've suggested above, I see no reason for OSS proponents to protest.
 In another email you mentioned Microsoft's revenues from Visual 
 Studio but -- leaving aside for a moment all the moral and 
 strategic concerns of closing things up -- Visual Studio enjoys 
 that success because it's a virtually essential tool for 
 professional development on Microsoft Windows, which still has 
 an effective monopoly on modern desktop computing.  Microsoft 
 has the market presence to be able to dictate terms like that 
 -- no one else does.  Certainly no upcoming programming 
 language could operate like that!
Yes, Microsoft has unusual leverage. But Visual Studio's compiler is not the only paid C++ compiler on the market; hell, Walter still sells C and C++ compilers. I'm not proposing D operate just like Microsoft. I'm suggesting a subtle compromise, a mix of that familiar closed model and the open source model you prefer -- a hybrid model that you are no doubt familiar with, since you correctly pegged the licensing lingo earlier, when you mentioned "open core."

These hybrid models are immensely popular these days: the two most popular software projects of the last decade, iOS and Android, are hybrid models. Of course, that is partly because mobile is such a hot field, but the explosion of mobile software didn't mean success for the closed models of RIM, Nokia, or Microsoft. Android's hybrid model is a big reason why it succeeded. I see no reason why another "upcoming" project like D couldn't do the same. :)
 It's more likely that closing off parts of the offering would 
 limit that uptake, for reasons already given.  On the other 
 hand, with more and more organizations coming to use and rely 
 on D, there are plenty of other ways professional development 
 could be brought in.  Just to take one example: companies with 
 a mission-critical interest in D have a corresponding interest 
 in their developers giving time to the language itself.  How 
 many such companies do you think there need to be before D has 
 a stable of skilled professional developers being paid 
 explicitly to maintain and develop the language?
I disagree that having a slightly-closed paid version would limit uptake, I think it would greatly increase it, for the same reasons that closed-source Windows is still on the vast majority of PCs and the hybrid model of Android led it to dominating on mobile devices. It is possible that your alternative of corporate backers providing paid development for D will materialize, even if solely for their own benefit, though it would benefit the wider community also. I'm suggesting that the hybrid model I'm proposing will provide such funding much faster and better, for the reasons given.
 Your citation of the Linux kernel is relevant here.  Do you 
 think that Linux would have had all that diverse success if 
 parts of it had been closed up and sold at a premium?  D's 
 status as a purely community-run project is an asset compared 
 to corporate-backed languages, not a liability.
As I noted in my response to Luca, linux has always had binary blobs. Do I think it would have been as successful if more parts of it had been closed up? Well, much of it cannot be closed because of the GPL, so it is just not possible in many respects. But to take a recent example where it is possible, a recent rumor is that Sony is using FreeBSD for the Playstation 4:

http://www.vgleaks.com/some-details-about-playstation-4-os-development/

Even if it isn't used on the final PS4, FreeBSD is known to be used in commercial, closed products like Juniper routers and NetApp file servers. I don't think this has limited its uptake; if anything, it has helped, as those companies have eventually contributed back to the project.

I don't think a "purely community-run project" is a worthwhile goal, particularly if you are aiming for a million users and professionalism. I think there is always opportunity for mixing commercial implementations and community involvement, as very successful hybrid projects like Android or Chrome have shown.
Jun 26 2013
parent reply "Iain Buclaw" <ibuclaw gdcproject.org> writes:
I can't be bothered to read all the points the both of you have mentioned thus far, but I do hope to add a voice of reason to calm you down. ;)



On Wednesday, 26 June 2013 at 17:42:23 UTC, Joakim wrote:
 On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
 Wakeling wrote:
 Now, in trying to drive more funding and professional effort 
 towards D development, do you _really_ think that the right 
 thing to do is to turn around to all those people and say: 
 "Hey guys, after all the work you put in to make D so great, 
 now we're going to build on that, but you'll have to wait 6 
 months for the extra goodies unless you pay"?
Yes, I think it is the right thing to do. I am only talking about closing off the optimization patches, all bugfixes and feature patches would likely be applied to both the free and paid compilers, certainly bugfixes. So not _all_ the "extra goodies" have to be paid for, and even the optimization patches are eventually open-sourced.
From a licensing perspective, the only part of the source that can be "closed off" is the DMD backend. Any optimisation fixes in the DMD backend do not affect GDC/LDC.
 How do you think that will affect the motivation of all those 
 volunteers -- the code contributors, the bug reporters, the 
 forum participants?  What could you say to the maintainers of 
 GDC or LDC, after all they've done to enable people to use the 
 language, that could justify denying their compilers 
 up-to-date access to the latest features?  How would it affect 
 the atmosphere of discussion about language development -- 
 compared to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they probably differ in the reasons they contribute. If D becomes much more popular because the quality of implementation goes up and their D skills and contributions become much more prized, I suspect they will be very happy. :) If they are religious zealots about having only a single, completely open-source implementation- damn the superior results from hybrid models- perhaps they will be unhappy. I suspect the former far outnumber the latter, since D doesn't employ the purely-GPL approach the zealots usually insist on.
You should try reading The Cathedral and the Bazaar if you don't understand why an open approach to development has caused the D programming language to grow tenfold over the last year or so. If you still don't understand, read it again ad infinitum.
 ... and -- how do you think it would affect uptake, if it was 
 announced that access to the best features would come at a 
 price?
Please stop distorting my argument. There are many different types of patches added to the dmd frontend every day: bugfixes, features, optimizations, etc. I have only proposed closing the optimization patches. However, I do think some features can also be closed this way. For example, Walter has added features like SIMD modifications only for Remedy. He could make this type of feature closed initially, available only in the paid compiler. As the feature matures and is paid for, it would eventually be merged into the free compiler. This is usually not a problem as those who want that kind of performance usually make a lot of money off of it and are happy to pay for that performance: that is all I'm proposing with my optimization patches idea also.
Think I might just point out that GDC had SIMD support before DMD, and that Remedy used GDC to get their D development off the ground. It was features such as UDAs, along with many language bug fixes that were only available in DMD development, that caused them to switch over. In other words, they needed a faster turnaround for bugs at the time they were adopting D; the D front-end in GDC, however, stays pretty much stable on the current release.
 In another email you mentioned Microsoft's revenues from 
 Visual Studio but -- leaving aside for a moment all the moral 
 and strategic concerns of closing things up -- Visual Studio 
 enjoys that success because it's a virtually essential tool 
 for professional development on Microsoft Windows, which still 
 has an effective monopoly on modern desktop computing.  
 Microsoft has the market presence to be able to dictate terms 
 like that -- no one else does.  Certainly no upcoming 
 programming language could operate like that!
Yes, Microsoft has unusual leverage. But Visual Studio's compiler is not the only paid C++ compiler in the market, hell, Walter still sells C and C++ compilers. I'm not proposing D operate just like Microsoft. I'm suggesting a subtle compromise, a mix of that familiar closed model and the open source model you prefer, a hybrid model that you are no doubt familiar with, since you correctly pegged the licensing lingo earlier, when you mentioned "open core." These hybrid models are immensely popular these days: the two most popular software projects of the last decade, iOS and Android, are hybrid models. Of course, that is partly because mobile is such a hot field, but the explosion of mobile software didn't mean success for the closed models of RIM, Nokia, or Microsoft. Android's hybrid model is a big reason why it succeeded. I see no reason why another "upcoming" project like D couldn't do the same. :)
You seem to be confusing D with an operating system, smartphone, or other general consumer product. Having used closed source languages in the past, I strongly believe that closed languages do not stimulate growth or adoption at all. And where adoption does occur, knowledge is kept within specialised groups.
 I don't think a "purely community-run project" is a worthwhile 
 goal, particularly if you are aiming for a million users and 
 professionalism.  I think there is always opportunity for 
 mixing of commercial implementations and community involvement, 
 as very successful hybrid projects like Android or Chrome have 
 shown.
Your argument seems lost on me, as you seem to be taking a very strange angle of association with the D language and/or compiler, and you don't seem to understand how the development process of D works either. My thoughts in summary:

- The language implementation is open source. This allows anyone to take the current front-end code - or even write their own clean-room implementation from the ground up - and integrate it with their own backend X.

- The compiler itself is not associated with the development of the language, so those who own the copyright are free to do what they want with their binary releases.

- The development model of D on github has adopted a "pull, review and merge" system, where no change to the language or compiler goes in unless it passes proper code review and testing (thanks to the wonderful auto-tester). So your suggestion of an "open core" model has a slight fallacy here, in that any change to the closed-off compiler would have to go through the same process to be accepted into the open one - and it might even be rejected.

- Likewise, because of the licensing and copyright assignments in place on the D front-end implementation, any closed D compiler using it would have to make its sources of the front-end, with local modifications, available upon request. So it makes no sense whatsoever to make language features - such as SIMD - closed off.

tl;dr: DMD - as in the binary releases - can be closed / paid / whatever it likes. The D Programming Language - as in the D front-end implementation - is under a dual GPL/Artistic license and cannot be used by any closed source product without said product releasing its copy of the front-end sources also. This means that your "hybrid" proposal only works for code that is not under this license - e.g. the DMD backend - which is not what the vast majority of contributors actually submit patches for.
If you strongly believe that a programming language can't be big (as in 1M users) without being partly closed source, I suggest you do your research better.

</ End argument on feasibility of a hybrid development model >

Regards,
Iain "That GDC Developer" Bucław
Jun 26 2013
next sibling parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
 From a licensing perspective, the only part of the source that 
 can be "closed off" is the DMD backend.  Any optimisation fixes 
 in the DMD backend does not affect GDC/LDC.
This is flat wrong. I suggest you read the Artistic license; it was chosen for a reason, i.e. it allows closing of source as long as you provide the original, unmodified binaries alongside any modified binaries. I suspect optimization fixes will be in both the frontend and backend.
 You should try reading The Cathedral and the Bazaar if you 
 don't understand why an open approach to development has caused 
 the D programming language to grow by ten fold over the last 
 year or so.

 If you still don't understand, read it again ad infinitum.
Never read it but I have corresponded with the author, and I found him to be as religious about pure open source as Stallman is about the GPL. I suggest you try examining why D is still such a niche language even with "ten fold" growth. If you're not sure why, I suggest you look at the examples and reasons I've given, as to why closed source and hybrid models do much better.
 Think I might just point out that GDC had SIMD support before 
 DMD. And that Remedy used GDC to get their D development off 
 the ground.  It was features such as UDAs, along with many 
 language bug fixes that were only available in DMD development 
 that caused them to switch over.

 In other words, they needed a faster turnaround for bugs at the 
 time they were adopting D, however the D front-end in GDC stays 
 pretty much stable on the current release.
Not sure what point you are trying to make, as both gdc and dmd are open source. I'm suggesting closing such patches, for a limited time.
 I see no reason why another "upcoming" project like D couldn't 
 do the same. :)
You seem to be confusing D for an Operating System, Smartphone, or any general consumer product.
You seem to be forgetting that the dmd compiler is a piece of software just like the rest, including the many proprietary C++ compilers out there.
 Having used closed source languages in the past, I strongly 
 believe that closed languages do not stimulate growth or 
 adoption at all.  And where adoption does occur, knowledge is 
 kept within specialised groups.
Perhaps there is some truth to that. But nobody is suggesting a purely closed-source language either.
 I don't think a "purely community-run project" is a worthwhile 
 goal, particularly if you are aiming for a million users and 
 professionalism.  I think there is always opportunity for 
 mixing of commercial implementations and community 
 involvement, as very successful hybrid projects like Android 
 or Chrome have shown.
Your argument seems lost on me as you seem to be taking a very strange angle of association with the D language and/or compiler, and you don't seem to understand how the development process of D works either.
I am associating D, an open source project, with Android and Chrome, two of the most successful open source projects at the moment, which both benefit from hybrid models. I find it strange that you cannot follow. If I don't understand how the development process of D works, you could point out an example, instead of making basic mistakes in not knowing what licenses it uses and what they allow. :)
 - The language implementation is open source. This allows 
 anyone to take the current front-end code - or even write their 
 own clean-room implementation from ground-up - and integrate it 
 to their own backend X.
Sort of. The dmd frontend is open source, but the backend is not under an open source license. Someone can swap out the backend and go completely closed, for example, using ldc (ldc used to have one or two GPL files, those would obviously have to be removed).
 - The compiler itself is not associated with the development of 
 the language, so those who are owners of the copyright are free 
 to do what they want with their binary releases.

 - The development model of D on github has adopted a "pull, 
 review and merge" system, where any changes to the language or 
 compiler do not go in unless it goes through proper coding 
 review and testing (thanks to the wonderful auto-tester).  So 
 your suggestion of an "open core" model has a slight fallacy 
 here in that any changes to the closed off compiler would have 
 to go through the same process to be accepted into the open one 
 - and it might even be rejected.
I'm not sure why you think "open core" patches that are opened after a time limit would be any more likely to be rejected from that review process. The only fallacy I see here is yours.
 - Likewise, because of licensing and copyright assignments in 
 place on the D front-end implementation.  Any closed D compiler 
 using it would have to make its sources of the front-end, with 
 local modifications, available upon request.  So it makes no 
 sense whatsoever to make language features - such as SIMD - 
 closed off.
I suggest you read the Artistic license. You have no idea what you are talking about.
 DMD - as in referring to the binary releases - can be closed / 
 paid / whatever it likes.

 The D Programming Language - as in the D front-end 
 implementation - is under a dual GPL/Artistic license and 
 cannot be used by any closed source product without said 
 product releasing their copy of the front-end sources also.  
 This means that your "hybrid" proposal only works for code that 
 is not under this license - eg: the DMD backend - which is not 
 what the vast majority of contributors actually submit patches 
 for.
Wrong, you have clearly not read the Artistic license.
 If you strongly believe that a programming language can't be 
 big (as in 1M users) without being partly closed source, I 
 suggest you do your research better.
I have done my research and provided examples: you provide none. I suggest it is you who needs to research this topic. Start with reading the Artistic license. :)
 </ End argument on feasibility of a hybrid development model >
</ End my demolition of your ignorant arguments >
Jun 26 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Jun 26, 2013 9:00 PM, "Joakim" <joakim airpost.net> wrote:
 On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
 From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend. Any optimisation fixes in the DMD backend does not affect GDC/LDC.
 This is flat wrong. I suggest you read the Artistic license, it was
chosen for a reason, ie it allows closing of source as long as you provide the original, unmodified binaries with any modified binaries. I suspect optimization fixes will be in both the frontend and backend.

Code generation is in the back end, so the answer to that is simply 'no'.

 You should try reading The Cathedral and the Bazaar if you don't
understand why an open approach to development has caused the D programming language to grow by ten fold over the last year or so.
 If you still don't understand, read it again ad infinitum.
Never read it but I have corresponded with the author, and I found him to
be as religious about pure open source as Stallman is about the GPL. I suggest you try examining why D is still such a niche language even with "ten fold" growth. If you're not sure why, I suggest you look at the examples and reasons I've given, as to why closed source and hybrid models do much better.

Then you should read it, as the 'cathedral' in question was GCC - a project
started by Stallman. :)

 Think I might just point out that GDC had SIMD support before DMD. And
that Remedy used GDC to get their D development off the ground. It was features such as UDAs, along with many language bug fixes that were only available in DMD development that caused them to switch over.
 In other words, they needed a faster turnaround for bugs at the time
they were adopting D, however the D front-end in GDC stays pretty much stable on the current release.
 Not sure what point you are trying to make, as both gdc and dmd are open
source. I'm suggesting closing such patches, for a limited time.

Closing patches benefits no one.  And more to the point, you can't say that
two compilers implement the same language if both have different language
features.

 I see no reason why another "upcoming" project like D couldn't do the
same. :)
 You seem to be confusing D for an Operating System, Smartphone, or any
general consumer product.
 You seem to be confusing the dmd compiler to not be a piece of software,
just like the rest, or the many proprietary C++ compilers out there.

You seem to think when I say D I'm referring to dmd, or any other D
compiler out there.

 - The language implementation is open source. This allows anyone to take
the current front-end code - or even write their own clean-room implementation from ground-up - and integrate it to their own backend X.
 Sort of.  The dmd frontend is open source, but the backend is not under
an open source license. Someone can swap out the backend and go completely closed, for example, using ldc (ldc used to have one or two GPL files, those would obviously have to be removed).

The backend is not part of the D language implementation / specification
(for starters, it's not documented anywhere except as code).

 - The compiler itself is not associated with the development of the
language, so those who are owners of the copyright are free to do what they want with their binary releases.
 - The development model of D on github has adopted a "pull, review and
merge" system, where any changes to the language or compiler do not go in unless they go through proper code review and testing (thanks to the wonderful auto-tester). So your suggestion of an "open core" model has a slight fallacy here in that any changes to the closed off compiler would have to go through the same process to be accepted into the open one - and it might even be rejected.
 I'm not sure why you think "open core" patches that are opened after a
time limit would be any more likely to be rejected from that review process. The only fallacy I see here is yours.

Where did I say that? I only invited you to speculate on what would happen
if a 'closed patch' got rejected.  This leads back to the point that you
can't call it a compiler for the D programming language if it deviates from
the specification / implementation.

 DMD - as in referring to the binary releases - can be closed / paid /
whatever it likes.
 The D Programming Language - as in the D front-end implementation - is
under a dual GPL/Artistic license and cannot be used by any closed source product without said product releasing their copy of the front-end sources also. This means that your "hybrid" proposal only works for code that is not under this license - eg: the DMD backend - which is not what the vast majority of contributors actually submit patches for.
 Wrong, you have clearly not read the Artistic license.
I'll allow you to keep on thinking that for a while longer...
 If you strongly believe that a programming language can't be big (as in
1M users) without being partly closed source, I suggest you do your research better.
 I have done my research and provided examples: you provide none.  I
suggest it is you who needs to research this topic. Start with reading the Artistic license. :)

All I've seen from you from my skim, snore, skip, skim are projects
started by multi-millionaire companies who have resources, influence,
and marketing behind them.  The contributors who have helped design and
shape the D programming language have none of these.
Jun 26 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Wednesday, 26 June 2013 at 21:15:34 UTC, Iain Buclaw wrote:
 On Jun 26, 2013 9:00 PM, "Joakim" <joakim airpost.net> wrote:
 This is flat wrong. I suggest you read the Artistic license, 
 it was
chosen for a reason, ie it allows closing of source as long as you provide the original, unmodified binaries with any modified binaries. I suspect optimization fixes will be in both the frontend and backend.

 Code generation is in the back end, so the answer to that is 
 simply 'no'.
From what I understand about the kinds of optimizations that Walter was talking about, at least some of them would require work on the frontend also.

But let's assume that you are right and the optimization patches I'm talking about would tend to end up only in the backend. In that case, the frontend would not have any closed patches, and the paid version of dmd would simply have a slightly-closed, more-optimized backend. There go all of Joseph's previous arguments about the paid version not making the same OSS frontend available to the free reference compiler or ldc and gdc.

You are making my case for me. :)
 Never read it but I have corresponded with the author, and I 
 found him to
be as religious about pure open source as Stallman is about the GPL. I suggest you try examining why D is still such a niche language even with "ten fold" growth. If you're not sure why, I suggest you look at the examples and reasons I've given, as to why closed source and hybrid models do much better.

 Then you should read it, as the 'cathedral' in question was GCC 
 - a project
 started by Stallman. :)
I'm familiar with its arguments from a summary, not particularly interested in reading the whole thing. Insofar as he made the case for various benefits of open source, some of the arguments I've heard make sense and I have no problem with it. Insofar as he and others believe that it is an argument for _pure_ open source or that _all_ source will eventually be open, I think history has shown that argument to be dead wrong along with the reasons why. It boils down to the facts that there is nowhere near as much money in pure OSS models and volunteers cannot possibly provide all the materials necessary for a full product, both of which I've mentioned before. This is why hybrid models are now taking off, blending the benefits of open and closed source.
 Not sure what point you are trying to make, as both gdc and 
 dmd are open
source. I'm suggesting closing such patches, for a limited time.

 Closing patches benefit no one.  And more to the point,  you 
 can't say that
 two compiler's implement the same language if both have 
 different language
 features.
The closed patches benefit those making money from the paid compiler, and since the free compiler would get these patches after a time limit, they eventually benefit the community also.

As for your purist approach to compiler implementation: by that rationale, no two C++ compilers - nor any of the D compilers - implement the "same language," since there are always differences in the features supported by different compilers. I'd say that some differentiation between compilers is normal and necessary.
 I see no reason why another "upcoming" project like D 
 couldn't do the
same. :)
 You seem to be confusing D for an Operating System, 
 Smartphone, or any
general consumer product.
 You seem to be confusing the dmd compiler to not be a piece of 
 software,
just like the rest, or the many proprietary C++ compilers out there.

 You seem to think when I say D I'm referring to dmd, or any 
 other D
 compiler out there.
I referred to the D project and have been talking about the compiler all along. The fact that you decided to make a meaningless statement, presumably about how D is a spec and therefore cannot be compared with Android, is irrelevant and frankly laughable. :)
 - The language implementation is open source. This allows 
 anyone to take
the current front-end code - or even write their own clean-room implementation from ground-up - and integrate it to their own backend X.
 Sort of.  The dmd frontend is open source, but the backend is 
 not under
an open source license. Someone can swap out the backend and go completely closed, for example, using ldc (ldc used to have one or two GPL files, those would obviously have to be removed).

 The backend is not part of the D language implementation / 
 specification.
 (for starters, it's not documented anywhere except as code).
Of course the backend is part of the language implementation. It's not part of the spec, but you never mentioned the spec originally.
 - The development model of D on github has adopted a "pull, 
 review and
merge" system, where any changes to the language or compiler do not go in unless they go through proper code review and testing (thanks to the wonderful auto-tester). So your suggestion of an "open core" model has a slight fallacy here in that any changes to the closed off compiler would have to go through the same process to be accepted into the open one - and it might even be rejected.
 I'm not sure why you think "open core" patches that are opened 
 after a
time limit would be any more likely to be rejected from that review process. The only fallacy I see here is yours.

 Where did I say that? I only invited you to speculate on what 
 would happen
 if a 'closed patch' got rejected.  This leads back to the point 
 that you
 can't call it a compiler for the D programming language if it 
 derives from
 the specification / implementation.
Your sentence didn't really make sense as written - it is not a "fallacy" that patches might get rejected - so I had to guess what you were trying to say.

So what if an optimization patch gets rejected? The OSS reference version would simply continue being slower than the slightly-closed, paid version. They would still conform to the same spec. If it were a new feature like SIMD, I'm sure the reference version would implement its own open implementation of that feature, if they couldn't wait for the closed patch to be released or didn't like it for whatever reason.

As I said earlier, compilers for all languages differ in various ways all the time, just as ldc, gdc, and dmd differ in various ways today. It's hardly a deal-breaker.
 The D Programming Language - as in the D front-end 
 implementation - is
under a dual GPL/Artistic license and cannot be used by any closed source product without said product releasing their copy of the front-end sources also. This means that your "hybrid" proposal only works for code that is not under this license - eg: the DMD backend - which is not what the vast majority of contributors actually submit patches for.
 Wrong, you have clearly not read the Artistic license.
I'll allow you to keep on thinking that for a while longer...
Well, in that case, you do not understand it. The Artistic license is not a copyleft license. You can close up the entire dmd frontend and sell a modified binary based on ldc, as long as you provide unmodified binaries also. You repeatedly stated that the source must be released, which even a cursory skim of the Artistic license would show isn't true.
 I have done my research and provided examples: you provide 
 none.  I
suggest it is you who needs to research this topic. Start with reading the Artistic license. :)

 All I've seen from you from my skim,  snore, skip,  skim are 
 projects
 started by multi-millionaire companies who have both resource, 
 influence,
 and marketing behind them.  The contributors who have helped 
 design and
 shape the D programming language are neither of these.
Perhaps you should read what I wrote more carefully, as your arguments are as slipshod as how you claim to have read my posts. It is true that many of the hybrid projects I've listed have been put together by large companies. But these large companies chose hybrid models for a reason; they aren't dumb: get the community involvement from open source, _and_ the much increased funding from the closed-source portions.

Also, many of the hybrid projects have pulled in previously purely-open, purely community projects like KHTML/WebKit, or mostly-open projects like the linux kernel. The linux kernel repo evolved over time to include many binary blobs and effectively became a hybrid model.

And not all hybrid companies are that large: MySQL before it got bought out was pulling in hundreds of millions of dollars in revenue using an "open core" model, but certainly wasn't in the super-sized class of Google or Oracle. There are a handful of small companies that provide closed, optimized versions of the PostgreSQL database (since most of the underlying code is open source, it can be considered a hybrid model).

There is no reason that a hybrid model wouldn't help a smaller project like dmd. In fact, smaller projects are helped the most by hybrid models, as it's the only way for D to professionalize, whereas the large companies could have just gone closed-source and had the resources to pull that off.
Jun 27 2013
next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 27 June 2013 09:21, Joakim <joakim airpost.net> wrote:
 But lets assume that you are right and the optimization patches I'm talking
 about would tend to end up only in the backend. In that case, the frontend
 would not have any closed patches and the paid version of dmd would simply
 have a slightly-closed, more-optimized backend.  There go all of Joseph's
 previous arguments about the paid version not making the same OSS frontend
 available to the free reference compiler or ldc and gdc.

 You are making my case for me. :)
Now you are just re-hashing what my initial thoughts were... ;)
 Never read it but I have corresponded with the author, and I found him to
be as religious about pure open source as Stallman is about the GPL. I suggest you try examining why D is still such a niche language even with "ten fold" growth. If you're not sure why, I suggest you look at the examples and reasons I've given, as to why closed source and hybrid models do much better.

 Then you should read it, as the 'cathedral' in question was GCC - a
 project
 started by Stallman. :)
I'm familiar with its arguments from a summary, not particularly interested in reading the whole thing. Insofar as he made the case for various benefits of open source, some of the arguments I've heard make sense and I have no problem with it. Insofar as he and others believe that it is an argument for _pure_ open source or that _all_ source will eventually be open, I think history has shown that argument to be dead wrong along with the reasons why. It boils down to the facts that there is nowhere near as much money in pure OSS models and volunteers cannot possibly provide all the materials necessary for a full product, both of which I've mentioned before. This is why hybrid models are now taking off, blending the benefits of open and closed source.
But it's not blending the benefits at all. Open core, however you try to sway or pitch it, does not qualify as open source. It is closed source. It is the opposite of open source.

Personally, it is not acceptable to market yourself as an open source product when in fact your business model is to sell closed source. This is confusing; I'd say it is borderline lying. Well, marketing often is lying, but in the open source community we call out such lies, however subtle. Most open core vendors still market themselves as open source leaders, then come to you to sell closed source software. (They deserve to be criticized, if you ask me.)
 Not sure what point you are trying to make, as both gdc and dmd are open
source. I'm suggesting closing such patches, for a limited time.

 Closing patches benefit no one.  And more to the point,  you can't say
 that
 two compiler's implement the same language if both have different language
 features.
The closed patches benefit those making money from the paid compiler and since the free compiler would get these patches after a time limit, they eventually benefit the community also. As for your purist approach to compiler implementation, by that rationale, no two C++ compilers and all the D compilers do not implement the "same language," since there are always differences in the features supported by the different compilers. I'd say that some differentiation between compilers is normal and necessary.
This is where C/C++ went horribly wrong: different compilers having a variety of macros to identify the same platform or architecture, the question of what is valid syntax differing between compilers, code written in a certain way working in one compiler but throwing an error with another... We are striving to be better than that from the start.
 Also, many of the hybrid projects have pulled in previously purely-open,
 purely community projects like KHTML/WebKit or mostly-open projects like the
 linux kernel.  The linux kernel repo evolved over time to include many
 binary blobs and effectively become a hybrid model.
Binary blobs are the exception rather than the rule in Linux, and many hardware vendors would flat out say 'no' to doing any support on them. Moreover, the position of the Linux Foundation is that any closed-source kernel module is harmful and undesirable, and they are always urging vendors to adopt a policy of supporting their customers on Linux with open-source kernel code. Which goes to show how useful a hybrid model has been for them...
 And not all hybrid companies are that large: MySQL before it got bought out
 was pulling in hundreds of millions of dollars in revenue using an "open
 core" model but certainly wasn't in the super-sized class of Google or
 Oracle.  There are a handful of small companies that provided closed,
 optimized versions of the PostgreSQL database (since most of the underlying
 code is open source, it can be considered a hybrid model).
MySQL, iirc, was the first to practice this model. But they were initially open source from the start, and made their business through a support model. When they switched, their business model turned to selling closed source software, and so their incentives moved away from producing open source software, and towards producing as much closed source software as they could get away with.

In MySQL's case they eventually had to backtrack on the plans for closed-source backup, because they didn't get away with it. The public pressure became a burden for Sun, and MySQL management had to give up those plans. Now that this is in the hands of Oracle, however, they have revitalized the plans for closed-source backup tools - so now we are seeing products that evolved from the same codebase, but are considered two separate products (LibreOffice, OpenIndiana...).

The most successful open source projects, like Linux and Apache, are not capital-backed businesses to begin with; and even among companies, the most successful ones, such as Red Hat and Canonical, are not "open core" companies but so-called "pure play" open source companies that have committed themselves to staying open source.

But in the end... De Gustibus Non Est Disputandum.

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 27 2013
prev sibling parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Thursday, 27 June 2013 at 08:21:12 UTC, Joakim wrote:
 I'm familiar with its arguments from a summary, not 
 particularly interested in reading the whole thing.
You know, I think I see what your problem is ... :-)
Jun 27 2013
prev sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
 I can't be bothered to read all points the both of you have 
 mentioned thus far, but I do hope to add a voice of reason to 
 calm you down. ;)
Quick, nurse, the screens! ... or perhaps, "Someone throw a bucket of water over them"? :-P
 From a licensing perspective, the only part of the source that 
 can be "closed off" is the DMD backend.  Any optimisation fixes 
 in the DMD backend does not affect GDC/LDC.
To be honest, I can't see the "sales value" of optimization fixes in the DMD backend, given that GDC and LDC already have such strong performance. The one strong motivation to use DMD over the other two compilers is (as you describe) access to the bleeding edge of features, but I'd have thought this will stop being an advantage in time as/when the frontend becomes a genuinely "plug-and-play" component.

By the way, I hope you didn't feel I was trying to speak on behalf of GDC -- wasn't my intention. :-)
 Having used closed source languages in the past, I strongly 
 believe that closed languages do not stimulate growth or 
 adoption at all.  And where adoption does occur, knowledge is 
 kept within specialised groups.
Last year I had the dubious privilege of having to work with MS Visual Basic for a temporary job. What was strikingly different from the various open source languages was that although there was an extensive quantity of documentation available from Microsoft, it was incredibly badly organized, much of it was out of date, and there was no meaningful community support that I could find.

I got the job done, but I would surely have had a much easier experience with any of the open source languages out there. Suffice to say that the only reason I used VB in this case was because it was an obligatory part of the work -- I'd never use it by choice.
 - The development model of D on github has adopted a "pull, 
 review and merge" system, where any changes to the language or 
 compiler do not go in unless it goes through proper coding 
 review and testing (thank's to the wonderful auto-tester).  So 
 your suggestion of an "open core" model has a slight fallacy 
 here in that any changes to the closed off compiler would have 
 to go through the same process to be accepted into the open one 
 - and it might even be rejected.
I had a similar thought but from a slightly different angle -- that allowing "open core" in the frontend would damage the effectiveness of the review process. How can you restrict certain features to proprietary versions without having also a two-tier hierarchy of reviewers? And would you be able to maintain the broader range of community review if some select, paid few had privileged review access?
Jun 26 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Jun 26, 2013 9:50 PM, "Joseph Rushton Wakeling" <
joseph.wakeling webdrake.net> wrote:
 On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
 I can't be bothered to read all points the both of you have mentioned
thus far, but I do hope to add a voice of reason to calm you down. ;)
 Quick, nurse, the screens!

 ... or perhaps, "Someone throw a bucket of water over them"? :-P
Don't call me Shirley...
 From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend. Any optimisation fixes in the DMD backend does not affect GDC/LDC.
 To be honest, I can't see the "sales value" of optimization fixes in the
DMD backend given that GDC and LDC already have such strong performance. The one strong motivation to use DMD over the other two compilers is (as you describe) access to the bleeding edge of features, but I'd have thought this will stop being an advantage in time as/when the frontend becomes a genuinely "plug-and-play" component.

Sometimes it feels like achieving this is like trying to break down a brick
barrier with a shoelace.

 By the way, I hope you didn't feel I was trying to speak on behalf of GDC
-- wasn't my intention. :-)

I did, and it hurt.  :o)

 Having used closed source languages in the past, I strongly believe that
closed languages do not stimulate growth or adoption at all. And where adoption does occur, knowledge is kept within specialised groups.
 Last year I had the dubious privilege of having to work with MS Visual
Basic for a temporary job. What was strikingly different from the various open source languages was that although there was an extensive quantity of documentation available from Microsoft, it was incredibly badly organized, much of it was out of date, and there was no meaningful community support that I could find.
 I got the job done, but I would surely have had a much easier experience
with any of the open source languages out there. Suffice to say that the only reason I used VB in this case was because it was an obligatory part of the work -- I'd never use it by choice.

Yes, it's like trying to learn D, but the only reference you have of the
language is the grammar page, and an IDE which offers thousands of
auto-complete options for things that *sound* like what you want, but don't
compile when it comes to testing.  :o)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 26 2013
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:
 Don't call me Shirley...
Serious? :-)
 By the way, I hope you didn't feel I was trying to speak on 
 behalf of GDC -- wasn't my intention. :-)
I did, and it hurt. :o)
Oh no. 50 shades of #DDDDDD ? :-)
Jun 26 2013
parent reply Mathias Lang <pro.mathias.lang gmail.com> writes:
I've read (almost) everything, so I hope I won't miss a point here:
a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
understanding, none of the projects mentioned have gone from free (as in
free beer) to hybrid/closed. And I'm not currently able to think of one
successful, widespread project that did.
b) Thinking that being free (as in beer and/or as in freedom), hybrid, or
closed source is a single criterion of success seems foolish. I'm not
asking for a complete comparison (I think my mailbox won't stand it ;-) ),
but please stop comparing a free operating system with a paid compiler,
and assuming the former has more users than the latter because it's free
(and vice versa). In addition, I don't see the logic behind comparing
something born in the 90s with something from the 2000s. Remember the
dot-com bubble?
c) There are other ways to get more people involved; for example, if
dlang.org becomes a foundation (see related thread), we would be able to
apply for GSoC.
d) People pay for something they need. They don't adopt something because
they can pay for it. That's why a paid compiler must follow language
promotion, not the other way around.


2013/6/27 Joseph Rushton Wakeling <joseph.wakeling webdrake.net>

 On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:

 Don't call me Shirley...
Serious? :-) By the way, I hope you didn't feel I was trying to speak on behalf of GDC
 -- wasn't my intention. :-)
I did, and it hurt. :o)
Oh no. 50 shades of #DDDDDD ? :-)
Jun 26 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Thursday, 27 June 2013 at 03:20:37 UTC, Mathias Lang wrote:
 I've read (almost), everything, so I hope I won't miss a point 
 here:
 a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
 understanding, none of the projects mentioned have gone from 
 free (as in
 free beer) to hybrid/closed. And I'm not currently able to 
 think of one
 successful, widespread project that did.
Then you are not paying attention. As I've already noted several times, Visual Studio, which is the way most use MSVC, has both paid and free versions. Red Hat contains binary blobs and possibly other non-OSS software and charges companies for consulting and support. Qt is an "open core" project that is dual-licensed under both OSS and commercial licenses, the latter of which you pay for. Linux contains binary blobs in the vast majority of installs and most people running it paid for it.

If your implied point is that the original authors aren't the ones taking the project hybrid or paid, it depends on the license. Sometimes it is those owning the original copyright, as it had to be in the Qt, MySQL, and other dual-licensing cases, other times it isn't.
 b) Thinking that being free (as a beer and/or as freedom), 
 hybrid, closed
 source of whatever is a single criterion of success seems 
 foolish. I'm not
 asking for a complete comparison (I think my mailbox won't 
 stand it ;-) ),
 but please stop comparing a free operating software with a paid 
 compiler,
 and assume the former have more users than the later because 
 it's free (and
 vice-versa). In addition, I don't see the logic behind 
 comparing something
 born in the 90s with something from the 2000s. Remember the 
 Dot-com bubble ?
Obviously nothing is a "single criteria of success," as has been stated already. In complex social fields like business or technology ventures, where there are many confounding factors, judgement and interpretation are everything.

By your rationale, we might as well do _anything_ because how could we possibly know that C++ wasn't immensely successful only because Bjarne Stroustrup is a Dane? Obviously none of this discussion matters, as D has very little Danish involvement and therefore can never be as popular. ;)

You have to have the insight to be able to weigh all these competing factors, and while I agree that most cannot, those who are successful do.
 d) People pay for something they need. They don't adopt 
 something because
 they can pay for it. That's why paid compiler must follow 
 language
 promotion, not the other way around.
These assertions are somewhat meaningless. Those who value performance will pay for the optimized version of the dmd compiler that I've proposed. Those who don't will use the slower, pure-OSS version. There is no reason for a paid compiler to only follow promotion, both must be done at the same time.

In any case, I've lost interest in this debate. I've made my case, those involved with the D compiler can decide if this would be a worthwhile direction. From their silence so far, I can only assume that they are not interested in rousing the ire of the freetards and will simply maintain the status quo of keeping all source public. This will lead to D's growth being slowed, compared to the alternative of providing a paid compiler also. That's their choice to make.

If somebody stumbles across this thread later, perhaps they will close up optimization patches to ldc and sell a paid version. Given that those behind dmd have not expressed any interest in a paid version, maybe these ldc vendors will not involve them with the money or feature decisions of their paid ldc. It would be likely that this paid compiler becomes the dominant one and the original dmd project is forgotten. If you don't choose the best approach, a hybrid model, you leave it open for somebody else to do it and take the project in a different direction.
Jun 27 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 27 June 2013 09:53, Joakim <joakim airpost.net> wrote:
 those involved with the D compiler can decide if this would be a worthwhile
 direction.  From their silence so far, I can only assume that they are not
 interested in rousing the ire of the freetards and will simply maintain the
 status quo of keeping all source public.
True, I tend to just ignore comments from opportunists who jump in and shout "Hey guys! I'm new around here, have you guys tried to do something completely radical on the off chance that it will work? I have a good feeling about this..!!"

But as it stands, I'm taking a quick break from my usual GDC work before I reach stage 12 of burnout. ;)
 This will lead to D's growth being slowed, compared to the alternative of
 providing a paid compiler also.  That's their choice to make.
In your opinion.
 If somebody stumbles across this thread later, perhaps they will close up
 optimization patches to ldc and sell a paid version.  Given that those
 behind dmd have not expressed any interest in a paid version, maybe these
 ldc vendors will not involve them with the money or feature decisions of
 their paid ldc.  It would be likely that this paid compiler becomes the
 dominant one and the original dmd project is forgotten.
This was said when GDC got the D2 language stable. The reality? I wouldn't hold my breath... I'm still a one-man team, and there is no contingency in place should something happen to me.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 27 2013
parent reply "Joakim" <joakim airpost.net> writes:
As I said earlier, I'm done with this debate.

There is no point talking to people who make blatantly ignorant 
statements like, "Binary blobs are the exception rather than the 
rule in Linux, and many hardware vendors would flat out say 'no' 
to doing any support on them."  This assertion is so ignorant of 
the facts, it's laughable. :) I have no idea what to make of 
Iain's talking about gdc or that it is a "one-man team" in 
response to my prediction that ldc could go closed/paid and 
obsolete dmd: there is absolutely no connection between the 
topics.

As for Luca's long response, it is filled with basic mistakes, 
silly and incorrect rehashes of material already covered, or 
trivial twits, like the fact that D has a spec but isn't 
standardized by any international body.  For example, I 
originally pointed out several examples of other projects with 
existing commercial models and I was told that they're not 
"closed."  I responded that I never said that they were all 
closed, only commercial, and I'm now told that since my proposed 
model for D is closed, I'm "misstating" myself. (Slaps head)

These responses seem written by people who have a very tenuous 
grasp on the text I wrote.

Look, I get it, you guys are religious zealots- you tip your hand 
when you allude to ethical or moral reasons for using open 
source, a crazy idea if there ever was one- and you will come up 
with all kinds of silly arguments in the face of overwhelming 
evidence that _pure_ open source has failed.  Instead, you claim 
success when hybrid models bring more open source into the world, 
then nonsensically reverse course and claim that either they 
aren't actually hybrid or that such hybrid models are not really 
"open source," that it's a lie to call it that. (Slaps head again)

I'm not trying to convince you zealots.  You want to keep banging 
your heads against the wall for the greater glory of your 
religion, have fun with that.

I'm simply putting forward a case for D going the route of the 
most successful projects these days, by using a hybrid model, 
with a unique variation that I came up with :) and have 
successfully used for a project of my own.

Those who aren't religious about _pure_ open source can consider 
what I've proposed and my evidence and see if it makes sense to 
them.
Jun 27 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
 As I said earlier, I'm done with this debate.

 There is no point talking to people who make blatantly ignorant 
 statements like, "Binary blobs are the exception rather than 
 the rule in Linux, and many hardware vendors would flat out say 
 'no' to doing any support on them."  This assertion is so 
 ignorant of the facts, it's laughable. :) I have no idea what 
 to make of Iain's talking about gdc or that it is a "one-man 
 team" in response to my prediction that ldc could go 
 closed/paid and obsolete dmd: there is absolutely no connection 
 between the topics.

 As for Luca's long response, it is filled with basic mistakes, 
 silly and incorrect rehashes of material already covered, or 
 trivial twits, like the fact that D has a spec but isn't 
 standardized by any international body.  For example, I 
 originally pointed out several examples of other projects with 
 existing commercial models and I was told that they're not 
 "closed."  I responded that I never said that they were all 
 closed, only commercial, and I'm now told that since my 
 proposed model for D is closed, I'm "misstating" myself. (Slaps 
 head)

 These responses seem written by people who have a very tenuous 
 grasp on the text I wrote.

 Look, I get it, you guys are religious zealots- you tip your 
 hand when you allude to ethical or moral reasons for using open 
 source, a crazy idea if there ever was one- and you will come 
 up with all kinds of silly arguments in the face of 
 overwhelming evidence that _pure_ open source has failed.  
 Instead, you claim success when hybrid models bring more open 
 source into the world, then nonsensically reverse course and 
 claim that either they aren't actually hybrid or that such 
 hybrid models are not really "open source," that it's a lie to 
 call it that. (Slaps head again)

 I'm not trying to convince you zealots.  You want to keep 
 banging your heads against the wall for the greater glory of 
 your religion, have fun with that.

 I'm simply putting forward a case for D going the route of the 
 most successful projects these days, by using a hybrid model, 
 with a unique variation that I came up with :) and have 
 successfully used for a project of my own.

 Those who aren't religious about _pure_ open source can 
 consider what I've proposed and my evidence and see if it makes 
 sense to them.
Most replies to you have been quite measured and reasonable. I'm not sure what justifies you calling people zealots.
Jun 27 2013
parent reply "Joakim" <joakim airpost.net> writes:
On Thursday, 27 June 2013 at 13:25:06 UTC, John Colvin wrote:
 On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
 Look, I get it, you guys are religious zealots- you tip your 
 hand when you allude to ethical or moral reasons for using 
 open source, a crazy idea if there ever was one- and you will 
 come up with all kinds of silly arguments in the face of 
 overwhelming evidence that _pure_ open source has failed.
Most replies to you have been quite measured and reasonable. I'm not sure what justifies you calling people zealots.
Read the rest of the sentence which you quoted; my reasons are stated there. When I come across so many arguments that are _factually_ wrong- "the Artistic license doesn't allow closing source," "most linux installs don't use binary blobs"- I know I'm dealing with religious zealots.
Jun 27 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 27 June 2013 14:40, Joakim <joakim airpost.net> wrote:
 On Thursday, 27 June 2013 at 13:25:06 UTC, John Colvin wrote:
 On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
 Look, I get it, you guys are religious zealots- you tip your hand when
 you allude to ethical or moral reasons for using open source, a crazy idea
 if there ever was one- and you will come up with all kinds of silly
 arguments in the face of overwhelming evidence that _pure_ open source has
 failed.
Most replies to you have been quite measured and reasonable. I'm not sure what justifies you calling people zealots.
Read the rest of the sentence which you quoted, my reasons are stated. When I come across so many arguments that are _factually_ wrong- "the Artistic license doesn't allow closing source," "most linux installs don't use binary blobs"- I know I'm dealing with religious zealots.
Which is quite amusing, as those quotes aren't found anywhere in this thread. :o)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 27 2013
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
 There is no point talking to people who make blatantly ignorant 
 statements
Yeah, I keep wondering why anyone even bothered to waste time explaining all this to someone who is incapable of both providing his own reasoning and studying his opponent's. I hope that anyone who has followed D's history is perfectly aware of the numbers that prove how beneficial the transition to community-based open development was.
Jun 27 2013
prev sibling next sibling parent Leandro Lucarella <luca llucax.com.ar> writes:
Joakim, el 27 de June a las 15:17 me escribiste:
 As I said earlier, I'm done with this debate.
 
 There is no point talking to people who make blatantly ignorant
 statements like, "Binary blobs are the exception rather than the
 rule in Linux, and many hardware vendors would flat out say 'no' to
 doing any support on them."  This assertion is so ignorant of the
 facts, it's laughable. :) I have no idea what to make of Iain's
 talking about gdc or that it is a "one-man team" in response to my
 prediction that ldc could go closed/paid and obsolete dmd: there is
 absolutely no connection between the topics.
 
 As for Luca's long response, it is filled with basic mistakes, silly
 and incorrect rehashes of material already covered, or trivial
How convenient it is to put a lot of adjectives together, and not a single fact, to say someone is wrong. Almost as convenient as calling people religious zealots when you run out of arguments. :)

And it's so funny that you keep talking about the D contributors not participating in the thread when you evidently don't know who the contributors are.

I'm just so glad that you are done with this debate... My eyes were hurting from reading so much crap. Bye, bye! Have fun with Visual C++!

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Did you know that originally a Danish guy invented the burglar alarm?
Unfortunately, it got stolen.
Jun 27 2013
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 27 June 2013 14:17, Joakim <joakim airpost.net> wrote:
 As I said earlier, I'm done with this debate.

 There is no point talking to people who make blatantly ignorant statements
 like, "Binary blobs are the exception rather than the rule in Linux, and
 many hardware vendors would flat out say 'no' to doing any support on them."
 This assertion is so ignorant of the facts, it's laughable. :)
Fact: That quote you find laughable isn't my opinion. It was what Linus said during a Q&A after one of his talks (at least, if I remember it correctly ;).
 I have no idea what to make of Iain's talking about gdc or that it is a
"one-man team"
 in response to my prediction that ldc could go closed/paid and obsolete dmd:
 there is absolutely no connection between the topics.
I suppose that was my ignorance there; I assumed that you at least *knew* a little bit of the history behind the development of D1/D2. I'm sure people would raise their eyebrows and sigh to have the age-old question "why don't we just drop development of DMD and move it to X?" asked again. :o)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jun 27 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/24/13 9:13 AM, Andrei Alexandrescu wrote:
 reddit:
 http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/


 facebook: https://www.facebook.com/dlang.org/posts/662488747098143

 twitter: https://twitter.com/D_Programming/status/349197737805373441

 hackernews: https://news.ycombinator.com/item?id=5933818

 youtube: http://youtube.com/watch?v=4M-0LFBP9AU


 Andrei
HD version available: http://archive.org/details/dconf2013-day03-talk06

Andrei
Jun 25 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 24 Jun 2013 09:13:48 -0700
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:

 reddit: 
 http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/
 
Torrents and links up, plus a torrent now for the original MP4 of the previous talk: http://semitwist.com/download/misc/dconf2013/
Jun 25 2013
prev sibling parent reply "CJS" <Prometheus85 hotmail.com> writes:
In the talk Andrei seems to mention that D's associative arrays 
are lacking in performance somehow. I'm very new to D, but it's 
not obvious to me what the shortcoming is. I assume it's that for 
some reason it's hard to specialize associative arrays to specific 
types to give increased performance in specific cases, but I'm 
unclear why that would be difficult. Could someone please 
elaborate?
Jun 30 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, June 30, 2013 21:05:41 CJS wrote:
 In the talk Andrei seems to mention that D's associative arrays
 are lacking in performance somehow. I'm very new to D, but it's
 not obvious to me what the shortcoming is. I assume it's that for
 some reason it's hard to specialize associative arrays to specific
 types to give increased performance in specific cases, but I'm
 unclear why that would be difficult. Could someone please
 elaborate?
There's one implementation, and you can't swap it out, whereas different use cases may perform better with different implementations. On top of that, the current implementation is rather buggy and fragile, but that's an implementation issue rather than an inherent one. - Jonathan M Davis
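[Editor's note: a minimal D sketch of the contrast described above, assuming Phobos' std.container and its `redBlackTree` helper as the library alternative; the values are purely illustrative.]

```d
import std.container;
import std.stdio;

void main()
{
    // Built-in AA: one fixed hash-table implementation supplied by
    // the runtime; you cannot pick a different one per use case.
    int[string] counts;
    counts["dconf"] = 2013;

    // Library alternative: choose the structure that fits this use
    // case, e.g. an ordered tree when sorted iteration matters.
    auto sorted = redBlackTree("beta", "alpha", "gamma");
    foreach (word; sorted[])
        writeln(word); // iterates in sorted order: alpha, beta, gamma
}
```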
Jun 30 2013
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 30 Jun 2013 15:51:32 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Sunday, June 30, 2013 21:05:41 CJS wrote:
 In the talk Andrei seems to mention that D's associative arrays
 are lacking in performance somehow. I'm very new to D, but it's
 not obvious to me what the shortcoming is. I assume it's that for
 some reason it's hard to specialize associative arrays to specific
 types to give increased performance in specific cases, but I'm
 unclear why that would be difficult. Could someone please
 elaborate?
There's one implementation, and you can't swap it out, whereas different use cases may perform better with different implementations. On top of that, the current implementation is rather buggy and fragile, but that's an implementation issue rather than an inherent one.
No, the main issue is that the current one is runtime-only, and so simple function calls such as toHash and opCmp cannot be inlined.

You absolutely can change implementations (Walter did a few years ago, from tree-based collision resolution to linked-list based). What you can't do is switch to a fully generated AA, or change the compiler-expected API.

-Steve
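[Editor's note: a concrete sketch of the inlining point; the Key type below is hypothetical, not from the thread. With the built-in AA, the user-supplied toHash is reached through the key's TypeInfo at runtime, so the compiler cannot inline it at the lookup site the way a template instantiation could.]

```d
import std.stdio;

struct Key
{
    int id;

    // Reached via TypeInfo dispatch when Key is used as an AA key,
    // so this call cannot be inlined at the lookup site.
    size_t toHash() const @safe pure nothrow
    {
        return cast(size_t)(id * 2654435761u); // multiplicative hash
    }

    bool opEquals(ref const Key other) const @safe pure nothrow
    {
        return id == other.id;
    }
}

void main()
{
    string[Key] names;      // built-in AA, runtime implementation
    names[Key(1)] = "one";
    writeln(names[Key(1)]); // prints "one"
}
```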
Jun 30 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:
 On Sun, 30 Jun 2013 15:51:32 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Sunday, June 30, 2013 21:05:41 CJS wrote:
 In the talk Andrei seems to mention that D's associative arrays
 are lacking in performance somehow. I'm very new to D, but it's
 not obvious to me what the shortcoming is. I assume it's that for
 some reason it's hard to specialize associative arrays to specific
 types to give increased performance in specific cases, but I'm
 unclear why that would be difficult. Could someone please
 elaborate?
There's one implementation, and you can't swap it out, whereas different use cases may perform better with different implementations. On top of that, the current implementation is rather buggy and fragile, but that's an implementation issue rather than an inherent one.
No, the main issue is the current one is runtime-only, and so simple function calls such as toHash and opCmp cannot be inlined.
Yeah. That's a big problem. We really need to templatize all that - though the current implementation is enough of a mess to make that difficult.
 You absolutely can change implementations (Walter did a few years ago from
 tree-based collision resolution to linked-list based).  What you can't do
 is switch to a fully generated AA, or change the compiler-expected API.
Okay, I didn't know that. But I think that the key issue with swapping out the implementation is not whether you can swap it out for your whole program, but rather being able to choose different implementations for different parts of your program. If you really care about your containers enough to worry about optimizing them for your particular use cases, then unless you only have one use case within your program, there's a good chance that you're going to want different implementations for different parts of your program. With library containers, that's as simple as swapping which one you use. With the built-in stuff like AAs, you can't do that. You only get one (even if you can make it different across programs).

Now, if you just use library types, you don't have that problem, so in the long run, folks who really want to optimize their containers will probably do that. And if they _really_ want to optimize their containers, they're probably writing them themselves anyway. Regardless, while having a built-in AA simplifies the common case, it _is_ more limiting, and if you really care about how your AAs function, you're going to have to use a library solution (even if the built-in AAs have a solid implementation).

- Jonathan M Davis
Jun 30 2013
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 But I think that the key issue with swapping out
 the implementation is not whether you can swap out the implementation  
 for your
 whole program but rather being able to choose different implementations  
 for
 different parts of your program.
This would never happen. AAs are only ever going to be one implementation. If you want to use another map type, you will have to use a struct/class. I suppose AAs could simply be polymorphic, but I don't see the benefit.

-Steve
Jun 30 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, June 30, 2013 21:54:08 Steven Schveighoffer wrote:
 On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 But I think that the key issue with swapping out
 the implementation is not whether you can swap out the implementation
 for your
 whole program but rather being able to choose different implementations
 for
 different parts of your program.
This would never happen. AAs are only ever going to be one implementation. If you want to use another map type, you will have to use a struct/class. I suppose AA's could simply be polymorphic, but I don't see the benefit.
I know. My point was that that's an inherent problem with built-in AAs that can't be overcome (regardless of how well they're implemented). If you want that level of control, you _have_ to use a library solution.

- Jonathan M Davis
Jun 30 2013
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 30 Jun 2013 22:02:11 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Sunday, June 30, 2013 21:54:08 Steven Schveighoffer wrote:
 On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis  
 <jmdavisProg gmx.com>

 wrote:
 But I think that the key issue with swapping out
 the implementation is not whether you can swap out the implementation
 for your
 whole program but rather being able to choose different  
implementations
 for
 different parts of your program.
This would never happen. AAs are only ever going to be one implementation. If you want to use another map type, you will have to use a struct/class. I suppose AA's could simply be polymorphic, but I don't see the benefit.
I know. My point was that that's an inherent problem with built-in AAs that can't be overcome (regardless of how well they're implemented). If you want that level of control, you _have_ to use a library solution.
OK. I guess I was just reading your statement like it was a "problem" :) -Steve
Jun 30 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, June 30, 2013 22:45:04 Steven Schveighoffer wrote:
 On Sun, 30 Jun 2013 22:02:11 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 I know. My point was that that's an inherent problem with built-in AAs
 that
 can't be overcome (regardless of how well they're implemented). If you
 want
 that level of control, you _have_ to use a library solution.
OK. I guess I was just reading your statement like it was a "problem" :)
Well, it is in the sense that it _is_ a deficiency of built-in AAs for those who want to be able to use different implementations for different use cases. But it's not something that can actually be fixed, and having to use a library solution isn't exactly all that bad anyway, especially when most languages don't have AAs built-in in the first place. It's just a convenience feature.

- Jonathan M Davis
Jun 30 2013
parent "CJS" <Prometheus85 hotmail.com> writes:
 Well, it is in the sense that it _is_ a deficiency of built-in 
 AAs for those
 who want to be able to use different implementations for 
 different use cases,
 but it's not something that can actually be fixed, and having 
 to use a library
 solution isn't exactly all that bad anyway, especially when 
 most languages
 don't have AAs built-in in the first place. It's just a 
 convenience feature.

 - Jonathan M Davis
Thanks for the discussion! I thankfully haven't run into problems needing highly optimized associative arrays, but it's good to know what the limitations are and why.
Jun 30 2013
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:
 No, the main issue is the current one is runtime-only, and so simple
 function calls such as toHash and opCmp cannot be inlined.
Yeah. That's a big problem. We really need to templatize all that - though the current implementation is enough of a mess to make that difficult.
The current implementation suffers from two problems:

1. The compiler doesn't treat T[U] as a direct instantiation of AssocArray!(T, U) in all cases.
2. The compiler has bugs in terms of pure/@safe/CTFE/etc. that it "overlooks" when using built-in AAs.

I think those who have tried to make a complete library replacement for AAs have found this out. The best path to resolution I think is:

1. Make a complete replacement for AAs that can be instantiated and operate without mapping to the AA syntax. It may not build, but it should be written and bugs against the compiler filed.
2. Fix all bugs to make 1 compile/possible.
3. Switch the compiler to use the type in 1 whenever AAs are used.

I believe some have made a very good attempt at 1 (H.S. Teoh I think? maybe someone else).

-Steve
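[Editor's note: a hypothetical sketch of what step 1 might look like; the names, layout, and use of `hashOf` are illustrative, not the actual druntime design. The point is that in a template, hashing and comparison are resolved at compile time and are therefore inlinable.]

```d
struct AssocArray(Key, Value)
{
    private static struct Node
    {
        Key key;
        Value value;
        Node* next;
    }

    private Node*[] buckets;
    private size_t count;

    // Template instantiation: the key's hash and equality are known
    // at compile time, unlike the TypeInfo dispatch of built-in AAs.
    void opIndexAssign(Value v, Key k)
    {
        if (buckets.length == 0)
            buckets.length = 16; // lazy initial allocation
        immutable idx = hashOf(k) % buckets.length;
        for (auto n = buckets[idx]; n !is null; n = n.next)
            if (n.key == k) { n.value = v; return; }
        buckets[idx] = new Node(k, v, buckets[idx]);
        ++count;
    }

    // Supports the familiar `key in aa` test, returning a pointer
    // to the value or null.
    Value* opBinaryRight(string op : "in")(Key k)
    {
        if (buckets.length == 0) return null;
        immutable idx = hashOf(k) % buckets.length;
        for (auto n = buckets[idx]; n !is null; n = n.next)
            if (n.key == k) return &n.value;
        return null;
    }

    @property size_t length() const { return count; }
}
```

Usage would then mirror the built-in syntax (`AssocArray!(string, int) m; m["x"] = 1; assert("x" in m);`), and step 3 would amount to lowering `int[string]` to such an instantiation.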
Jun 30 2013
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, June 30, 2013 21:59:45 Steven Schveighoffer wrote:
 On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:
 No, the main issue is the current one is runtime-only, and so simple
 function calls such as toHash and opCmp cannot be inlined.
Yeah. That's a big problem. We really need to templatize all that - though the current implementation is enough of a mess to make that difficult.
The current implementation suffers from two problems:

1. The compiler doesn't treat T[U] as a direct instantiation of AssocArray!(T, U) in all cases.
2. The compiler has bugs in terms of pure/@safe/CTFE/etc. that it "overlooks" when using built-in AAs.

I think those who have tried to make a complete library replacement for AAs have found this out. The best path to resolution I think is:

1. Make a complete replacement for AAs that can be instantiated and operate without mapping to the AA syntax. It may not build, but it should be written and bugs against the compiler filed.
2. Fix all bugs to make 1 compile/possible.
3. Switch the compiler to use the type in 1 whenever AAs are used.

I believe some have made a very good attempt at 1 (H.S. Teoh I think? maybe someone else)
Yeah. He was working on it and seems to have pretty much decided that the job is too big for one man (and/or that he doesn't have enough time). I believe that there was a post on D.Learn a few months back where you pointed to where he had his changes thus far (so that others could look at it and potentially continue his work), but AFAIK, he's given up on the whole thing for now. - Jonathan M Davis
Jun 30 2013