digitalmars.D - Idea #1 on integrating RC with GC

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Consider we add a library slice type called RCSlice!T. It would have the 
same primitives as T[] but would use reference counting through and 
through. When the last reference count is gone, the buffer underlying 
the slice is freed. The underlying allocator will be the GC allocator.

Now, what if someone doesn't care about the whole RC thing and aims at 
convenience? There would be a method .toGC that just detaches the slice 
and disables the reference counter (e.g. by setting it to uint.max/2 or 
whatever).

Then people who want reference counting say

auto x = fun();

and those who don't care say:

auto x = fun().toGC();
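
A minimal sketch of how such a type might look (illustrative only - the exact representation, and where the count lives, is left open):

struct RCSlice(T)
{
    private T[] payload;   // backing storage, allocated on the GC heap
    private uint* count;   // shared reference count

    this(size_t n)
    {
        payload = new T[n];
        count = new uint;
        *count = 1;
    }

    this(this) { if (count) ++*count; }   // copy: one more reference

    ~this()
    {
        if (count && --*count == 0)
        {
            import core.memory : GC;
            GC.free(payload.ptr);   // eager release, no collection needed
            GC.free(count);
        }
    }

    // Detach from reference counting; the GC owns the memory from now on.
    T[] toGC()
    {
        if (count) *count = uint.max / 2;   // effectively immortal
        return payload;
    }
}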


Destroy.

Andrei
Feb 04 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 04 Feb 2014 18:51:35 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Consider we add a library slice type called RCSlice!T. It would have the  
 same primitives as T[] but would use reference counting through and  
 through. When the last reference count is gone, the buffer underlying  
 the slice is freed. The underlying allocator will be the GC allocator.
Doesn't that mean it lives in the GC heap and is scanned along with all the other data in the GC heap (and triggers GC cycles)? What is the benefit?
 Now, what if someone doesn't care about the whole RC thing and aims at  
 convenience? There would be a method .toGC that just detaches the slice  
 and disables the reference counter (e.g. by setting it to uint.max/2 or  
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();
Shouldn't the default be what is expected now? That is, I don't want to have to change all my code to return RCSlice!T instead of T[]. I admit, I don't know how that would work... -Steve
Feb 04 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 4:01 PM, Steven Schveighoffer wrote:
 On Tue, 04 Feb 2014 18:51:35 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 Consider we add a library slice type called RCSlice!T. It would have
 the same primitives as T[] but would use reference counting through
 and through. When the last reference count is gone, the buffer
 underlying the slice is freed. The underlying allocator will be the GC
 allocator.
Doesn't that mean it lives in the GC heap and is scanned along with all the other data in the GC heap (and triggers GC cycles)? What is the benefit?
GC.free is the benefit.
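
(For context: GC.free from core.memory releases a GC-allocated block immediately, with no collection cycle needed - a small sketch:)

import core.memory : GC;

void demo()
{
    int* p = cast(int*) GC.malloc(100 * int.sizeof);
    // ... use p[0 .. 100] ...
    GC.free(p);   // block is reclaimed right away; no mark/sweep pause
}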
 Shouldn't the default be what is expected now? That is, I don't want to
 have to change all my code to return RCSlice!T instead of T[]. I admit,
 I don't know how that would work...
Me neither. Andrei
Feb 04 2014
prev sibling next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 15:51:35 -0800, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Consider we add a library slice type called RCSlice!T. It would have the  
 same primitives as T[] but would use reference counting through and  
 through. When the last reference count is gone, the buffer underlying  
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at  
 convenience? There would be a method .toGC that just detaches the slice  
 and disables the reference counter (e.g. by setting it to uint.max/2 or  
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
I am assuming that you are ignoring cyclic-refs for now. Dealing with them would mean that you would need something like:

auto weak x = fun();

Since the compiler cannot figure out whether this is supposed to be weak or not, it would assume strong, as that is the majority case.

Second thought. Why is there an extra step for those of us who don't want the GC? I know there has to be an extra step for some people, but I feel that we should make the simplest option the default and then open up avenues for more advanced control for people who want it. ARC is not as simple to understand as a GC for newbies, and ARC requires more careful control, so why not make getting into ARC a little harder? That way we prevent heartache for new people. So something more like:

auto x = fun().toARC();

I can't imagine any situation where you could go from ARC to GC that could not also go from GC to ARC; they have to be equivalent operations to do that anyways. In the design world you make the simple option easy and the more advanced options harder to get at, to reduce the chance that the user accidentally shoots themselves in the foot. And the guys who want the advanced options are power users anyways; they expect to have to work at it a little more.

Related third thought. This goes against the current paradigm and will subtly break existing D code that has cyclic-refs in it.

--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Feb 04 2014
next sibling parent reply "woh" <wojw yahoo.com> writes:
He said *new* library type, so obviously it would not break 
existing code since nothing uses it.

  Wednesday, 5 February 2014 at 00:07:30 UTC, Adam Wilson wrote:
 On Tue, 04 Feb 2014 15:51:35 -0800, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 Consider we add a library slice type called RCSlice!T. It 
 would have the same primitives as T[] but would use reference 
 counting through and through. When the last reference count is 
 gone, the buffer underlying the slice is freed. The underlying 
 allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and 
 aims at convenience? There would be a method .toGC that just 
 detaches the slice and disables the reference counter (e.g. by 
 setting it to uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
I am assuming that you are ignoring cyclic-refs for now. Dealing with them would mean that you would need something like:

auto weak x = fun();

Since the compiler cannot figure out whether this is supposed to be weak or not, it would assume strong, as that is the majority case.

Second thought. Why is there an extra step for those of us who don't want the GC? I know there has to be an extra step for some people, but I feel that we should make the simplest option the default and then open up avenues for more advanced control for people who want it. ARC is not as simple to understand as a GC for newbies, and ARC requires more careful control, so why not make getting into ARC a little harder? That way we prevent heartache for new people. So something more like:

auto x = fun().toARC();

I can't imagine any situation where you could go from ARC to GC that could not also go from GC to ARC; they have to be equivalent operations to do that anyways. In the design world you make the simple option easy and the more advanced options harder to get at, to reduce the chance that the user accidentally shoots themselves in the foot. And the guys who want the advanced options are power users anyways; they expect to have to work at it a little more.

Related third thought. This goes against the current paradigm and will subtly break existing D code that has cyclic-refs in it.
Feb 04 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 04 Feb 2014 19:15:24 -0500, woh <wojw yahoo.com> wrote:

 He said *new* library type, so obviously it would not break existing  
 code since nothing uses it.
I think the possible goal is that Phobos functions that now return slices would return this (at some point in the future), to give the option of using RC or GC with any Phobos calls. -Steve
Feb 04 2014
prev sibling next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 16:07:29 -0800, Adam Wilson <flyboynw gmail.com> wrote:

 On Tue, 04 Feb 2014 15:51:35 -0800, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 Consider we add a library slice type called RCSlice!T. It would have  
 the same primitives as T[] but would use reference counting through and  
 through. When the last reference count is gone, the buffer underlying  
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at  
 convenience? There would be a method .toGC that just detaches the slice  
 and disables the reference counter (e.g. by setting it to uint.max/2 or  
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
I am assuming that you are ignoring cyclic-refs for now. Dealing with them would mean that you would need something like:

auto weak x = fun();

Since the compiler cannot figure out whether this is supposed to be weak or not, it would assume strong, as that is the majority case.

Second thought. Why is there an extra step for those of us who don't want the GC? I know there has to be an extra step for some people, but I feel that we should make the simplest option the default and then open up avenues for more advanced control for people who want it. ARC is not as simple to understand as a GC for newbies, and ARC requires more careful control, so why not make getting into ARC a little harder? That way we prevent heartache for new people. So something more like:

auto x = fun().toARC();

I can't imagine any situation where you could go from ARC to GC that could not also go from GC to ARC; they have to be equivalent operations to do that anyways. In the design world you make the simple option easy and the more advanced options harder to get at, to reduce the chance that the user accidentally shoots themselves in the foot. And the guys who want the advanced options are power users anyways; they expect to have to work at it a little more.

Related third thought. This goes against the current paradigm and will subtly break existing D code that has cyclic-refs in it.
Ok, disregard my previous post. I think I understand better what Andrei is driving at, but it's not obvious; better examples are needed. But more to the point, this is going to make us all equally miserable. The ARC guys don't get the compiler support needed to make ARC fast. It completely ignores cyclic-refs inside the slice. It doesn't solve the primary ARC-camp complaint about the reliance on the GC. The GC-now guys have more hoops to jump through when working with Phobos. And it increases complexity and cognitive load all around.

--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Feb 04 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 5:06 PM, Adam Wilson wrote:
 Ok, disregard my previous post. I think I understand better what Andrei
 is driving at, but it's not obvious, better examples are needed. But
 more to the point this is going to make us all equally miserable. The
 ARC guys don't get the compiler support needed to make ARC fast.
Of course they could. Compiler can have internal support for RC slices. Object will need some built-in support anyway.
 It
 completely ignores cyclic-refs inside the slice.
Wrong. It leaves cyclic references to the GC.
 It doesn't solve the
 primary ARC-camp complaint about the reliance on the GC.
Why not?
 The GC now guys
 have more hoops to jump-through when working with Phobos.
Why?
 And it
 increases complexity and cognitive load all around.
That comes with the territory. Have no illusion we can add RC support in any way at no cost. Andrei
Feb 04 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 20:21:17 -0800, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 2/4/14, 5:06 PM, Adam Wilson wrote:
 Ok, disregard my previous post. I think I understand better what Andrei
 is driving at, but it's not obvious, better examples are needed. But
 more to the point this is going to make us all equally miserable. The
 ARC guys don't get the compiler support needed to make ARC fast.
Of course they could. Compiler can have internal support for RC slices. Object will need some built-in support anyway.
 It
 completely ignores cyclic-refs inside the slice.
Wrong. It leaves cyclic references to the GC.
Ok, I can see that.
 It doesn't solve the
 primary ARC-camp complaint about the reliance on the GC.
Why not?
It still has to be loaded and running. It can still non-deterministically pause. Yes, the pauses may happen less often, but the mark phase will take a similar amount of time to run as it does now, and the mark phase is statistically the longest one. I already know that they are going to complain loudly, because this does nothing to resolve their central complaint.

Also, Adam Ruppe brought up an interesting problem with slicing in this configuration that I didn't understand. If he could comment, that would be appreciated.
 The GC now guys
 have more hoops to jump-through when working with Phobos.
Why?
New details to corral: Don't forget to add your .toGC() calls now kids!
 And it
 increases complexity and cognitive load all around.
That comes with the territory. Have no illusion we can add RC support in any way at no cost.
Yes, but why do it in such a way as to penalize existing practices? The penalty properly belongs on the new practices.
 Andrei
-- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Feb 04 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 4:07 PM, Adam Wilson wrote:
 I am assuming that you ignoring cyclic-refs for now.
The GC will lift cycles.
 Second thought. Why is their an extra step for those of us who don't
 want the GC.
The flow goes from RC to losing RC and relying on GC. You can't add RC to some arbitrary allocation safely. Andrei
Feb 04 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 auto x = fun().toGC();
I remember Kenji recently saying that we are now almost able to implement "dup" in library code. If that happens, why don't you call it "dup"?

Bye,
bearophile
Feb 04 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 04 Feb 2014 19:42:55 -0500, bearophile <bearophileHUGS lycos.com>  
wrote:

 Andrei Alexandrescu:

 auto x = fun().toGC();
I remember Kenji recently saying that we are now almost able to implement "dup" in library code. If that happens, why don't you call it "dup"?
No, toGC does not create a copy. It just detaches the RC component, and relies on the GC to clean up the resulting slice (I assume typeof(fun().toGC()) == T[]).

dup would imply that you wanted a copy in addition to the reference-counted version. In fact, you may want a copy that is also reference counted.

-Steve
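
(To make the contrast concrete, hypothetical signatures - illustrative names, not an actual API:)

// toGC: same buffer; RC is disabled and the GC reclaims it eventually.
T[] toGC(T)(RCSlice!T s);

// dup: a brand-new buffer, which could itself be reference counted.
RCSlice!T dup(T)(RCSlice!T s);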
Feb 04 2014
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu 
wrote:
 Consider we add a library slice type called RCSlice!T. It would 
 have the same primitives as T[] but would use reference 
 counting through and through. When the last reference count is 
 gone, the buffer underlying the slice is freed. The underlying 
 allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and 
 aims at convenience? There would be a method .toGC that just 
 detaches the slice and disables the reference counter (e.g. by 
 setting it to uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
RC needs GC to collect loops. So you want to have the GC at the lowest level.

That being understood, I'd rather connect things the other way around:

auto x = foo();
auto x = foo().toRC();
Feb 04 2014
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 17:12:37 -0800, deadalnix <deadalnix gmail.com> wrote:

 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have  
 the same primitives as T[] but would use reference counting through and  
 through. When the last reference count is gone, the buffer underlying  
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at  
 convenience? There would be a method .toGC that just detaches the slice  
 and disables the reference counter (e.g. by setting it to uint.max/2 or  
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
RC needs GC to collect loops. So you want to have the GC at the lowest level.

That being understood, I'd rather connect things the other way around:

auto x = foo();
auto x = foo().toRC();
The ARC crowd is going to hate this because it's still a GC allocation that you then hook up to RC. So they can still have random GC pauses.

--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Feb 04 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 5 February 2014 at 01:52:48 UTC, Adam Wilson wrote:
 On Tue, 04 Feb 2014 17:12:37 -0800, deadalnix 
 <deadalnix gmail.com> wrote:

 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei 
 Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It 
 would have the same primitives as T[] but would use reference 
 counting through and through. When the last reference count 
 is gone, the buffer underlying the slice is freed. The 
 underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing 
 and aims at convenience? There would be a method .toGC that 
 just detaches the slice and disables the reference counter 
 (e.g. by setting it to uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
RC needs GC to collect loops. So you want to have the GC at the lowest level.

That being understood, I'd rather connect things the other way around:

auto x = foo();
auto x = foo().toRC();
The ARC crowd is going to hate this because it's still a GC allocation that you then hook up to RC. So they can still have random GC pauses.
GC.disable. Just saying.
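
(What that looks like with the actual core.memory API; note the runtime may still collect in an out-of-memory situation:)

import core.memory : GC;

void frame()
{
    GC.disable();              // suppress automatic collection cycles
    scope (exit) GC.enable();

    // Allocations still succeed, and RC slices that release through
    // GC.free still return memory eagerly, so the heap need not grow
    // without bound as long as cycles are avoided.
    auto data = new int[1000];
}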
Feb 04 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 18:08:32 -0800, Sean Kelly <sean invisibleduck.org>  
wrote:

 On Wednesday, 5 February 2014 at 01:52:48 UTC, Adam Wilson wrote:
 On Tue, 04 Feb 2014 17:12:37 -0800, deadalnix <deadalnix gmail.com>  
 wrote:

 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have  
 the same primitives as T[] but would use reference counting through  
 and through. When the last reference count is gone, the buffer  
 underlying the slice is freed. The underlying allocator will be the  
 GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims  
 at convenience? There would be a method .toGC that just detaches the  
 slice and disables the reference counter (e.g. by setting it to  
 uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
RC needs GC to collect loops. So you want to have the GC at the lowest level.

That being understood, I'd rather connect things the other way around:

auto x = foo();
auto x = foo().toRC();
The ARC crowd is going to hate this because it's still a GC allocation that you then hook up to RC. So they can still have random GC pauses.
GC.disable. Just saying.
Hmm, as long as it doesn't disable allocation, that might just work.

--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Feb 04 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 5:12 PM, deadalnix wrote:
 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have
 the same primitives as T[] but would use reference counting through
 and through. When the last reference count is gone, the buffer
 underlying the slice is freed. The underlying allocator will be the GC
 allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the
 slice and disables the reference counter (e.g. by setting it to
 uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
RC need GC to collect loops. So you want to have the GC at the lowest level.
Correctamundo.
 That being understood, I'd rather connect things the other way around.

 auto x = foo();
 auto x = foo().toRC();
I don't know how to implement that. Andrei
Feb 04 2014
parent reply "Jameson Ernst" <jameson example.com> writes:
I'm just a casual end-user of D, but have been following this and 
related discussions with great interest. Just yesterday I was 
trying to "sell" D to a friend, and he basically told me that 
he'd be interested once the memory management situation gets 
resolved. I've been thinking about this a lot lately, even though 
I'm probably way out of my depth given the experts that frequent 
this forum. Still, I had an idea about this and thought I'd throw 
it out there.

On Wednesday, 5 February 2014 at 04:18:49 UTC, Andrei 
Alexandrescu wrote:
 On 2/4/14, 5:12 PM, deadalnix wrote:
 That being understood, I'd rather connect things the other way 
 around.

 auto x = foo();
 auto x = foo().toRC();
I don't know how to implement that. Andrei
From the discussion currently going on about postblits, it seems like there's a new emerging concept of a "unique expression," the value of which is guaranteed not to be referenced elsewhere. Could this concept perhaps be leveraged to go backwards from GC to RC? If you perform a GC allocation in the context of a unique expression, would it then be safe to force it back into an RC context, knowing that there are no outstanding references to it?

What's more, it would allow library writers to mostly perform allocations a single way, giving the caller the choice of how they'd like to manage the lifetime of the newly returned unique object. I could be completely misunderstanding the unique-expression concept, though.
Feb 05 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 05 Feb 2014 19:44:50 +0000
schrieb "Jameson Ernst" <jameson example.com>:

 I'm just a casual end-user of D, but have been following this and 
 related discussions with great interest. Just yesterday I was 
 trying to "sell" D to a friend, and he basically told me that 
 he'd be interested once the memory management situation gets 
 resolved. I've been thinking about this a lot lately, even though 
 I'm probably way out of my depth given the experts that frequent 
 this forum. Still, I had an idea about this and thought I'd throw 
 it out there.
 
 On Wednesday, 5 February 2014 at 04:18:49 UTC, Andrei 
 Alexandrescu wrote:
 On 2/4/14, 5:12 PM, deadalnix wrote:
 That being understood, I'd rather connect things the other way 
 around.

 auto x = foo();
 auto x = foo().toRC();
I don't know how to implement that. Andrei
From the discussion currently going on about postblits, it seems like there's a new emerging concept of a "unique expression," the value of which is guaranteed not to be referenced elsewhere. Could this concept perhaps be leveraged to go backwards from GC to RC? If you perform a GC allocation in the context of a unique expression, would it then be safe to force it back into an RC context, knowing that there are no outstanding references to it?

What's more, it would allow library writers to mostly perform allocations a single way, giving the caller the choice of how they'd like to manage the lifetime of the newly returned unique object. I could be completely misunderstanding the unique-expression concept, though.
The intent is to make it possible to avoid the GC, but if I understand you correctly, you talk about always allocating through the GC first.

--
Marco
Feb 06 2014
parent "Jameson Ernst" <jameson example.com> writes:
On Thursday, 6 February 2014 at 23:13:11 UTC, Marco Leise wrote:
 The intent is to make it possible to avoid the GC, but if I
 understand you correctly you talk about always allocating
 through the GC first.
One of the ideas on the table is to use ARC backed by the GC to clear cycles, which would mean the GC needs to be aware of all allocations anyway so it can scan and clear them if a cycle is formed. It's not safe to start reference counting GC memory at an arbitrary time after allocation, since references to it could be anywhere, but if you have a unique expression, then you know there are no other references to the new data, so it's safe to begin reference counting it at that time before it propagates out into the program at large.
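
(A sketch of the idea, reusing the RCSlice layout from the opening post and assuming the same module so its fields are accessible; toRC and the uniqueness guarantee are hypothetical:)

// Only sound if `uniqueBuf` is a unique expression, i.e. the compiler
// guarantees no other reference to it exists.
RCSlice!int toRC(int[] uniqueBuf)
{
    RCSlice!int s;
    s.payload = uniqueBuf;   // take ownership; no aliases to race with
    s.count = new uint;
    *s.count = 1;
    return s;
}

void example()
{
    auto rc = toRC(new int[100]);   // `new int[100]` is unique
    // the buffer is freed as soon as the last copy of `rc` dies
}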
Feb 06 2014
prev sibling next sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu 
wrote:
 and those who don't care say:

 auto x = fun().toGC();
If I don't care, why would I place .toGC() at the end of my calls? What reason do I have to go out of my way to request this? What problems can I expect when I forget to add it?
Feb 04 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 11:20 PM, Jesse Phillips wrote:
 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 and those who don't care say:

 auto x = fun().toGC();
If I don't care, why would I place .toGC() at the end of my calls?
This is the way it all works: RC+GC is more structure than GC, so you start with more structure and then optionally "forget" it.
 What
 reason do I have to go out of my way to request this?
You use an API that uses e.g. string, not RCString.
 What problems can
 I expect when I forget to add it?
Passing x around won't compile. Andrei
Feb 04 2014
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 23:21:43 -0800, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 2/4/14, 11:20 PM, Jesse Phillips wrote:
 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 and those who don't care say:

 auto x = fun().toGC();
If I don't care, why would I place .toGC() at the end of my calls?
This is the way it all works: RC+GC is more structure than GC, so you start with more structure and then optionally "forget" it.
 What
 reason do I have to go out of my way to request this?
You use an API that uses e.g. string, not RCString.
The amount of existing-code breakage here will be immense. Almost nothing, unless it uses Phobos only, will compile once this is released. It might even do more damage to D's still-fragile public image than the Phobos/Tango fiasco did.
 What problems can
 I expect when I forget to add it?
Passing x around won't compile.
And we're supposed to want that? (See Above)
 Andrei
-- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Feb 04 2014
prev sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 2/5/2014 2:21 AM, Andrei Alexandrescu wrote:
 On 2/4/14, 11:20 PM, Jesse Phillips wrote:
 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 and those who don't care say:

 auto x = fun().toGC();
If I don't care, why would I place .toGC() at the end of my calls?
This is the way it all works: RC+GC is more structure than GC, so you start with more structure and then optionally "forget" it.
 What
 reason do I have to go out of my way to request this?
You use an API that uses e.g. string, not RCString.
IIUC, it sounds like it'd work like this:

// Library author provides either one or both of these:
T[] getFooGC() {...}
RCSlice!T getFooARC() {...}

And then if, for whatever reason, you have an RCSlice!T and need to pass it to something that expects a T[], then you can cancel the RC-ing via toGC.

If that's so, then I'd think lib authors could easily provide APIs that offer GC by default and ARC as an opt-in choice with templating:

enum WantARC { Yes, No }
auto getFoo(WantARC arc = WantARC.No)() {
    static if(arc == WantARC.No)
        return getFoo!(WantARC.Yes)().toGC();
    else {
        RCSlice!T x = ...;
        return x;
    }
}

T[] fooGC = getFoo();
RCSlice!T fooARC = getFoo!(WantARC.Yes)();

And I imagine that boilerplate could be encapsulated in a utility template:

private RCSlice!T getFooARC() {
    RCSlice!T x = ...;
    return x;
}

template makeGCDefault(...){ ...magic happens here... }
alias getFoo = makeGCDefault!getFooARC;
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 12:03 AM, Nick Sabalausky wrote:
 IIUC, it sounds like it'd work like this:

 // Library author provides either one or both of these:
 T[] getFooGC() {...}
 RCSlice!T getFooARC() {...}

 And then if, for whatever reason, you have a RCSlice!T and need to pass
 it to something that expects a T[], then you can cancel the RC-ing via
 toGC.
Yah. We'd then need to do something similar on the parameter side, e.g. extend isSomeString to comprehend the RC variety as well.
 If that's so, then I'd think lib authors could easily provide APIs that
 offer GC by default and ARC as an opt-in choice with templating:

 enum WantARC { Yes, No }
 auto getFoo(WantARC arc = WantARC.No)() {
      static if(arc == WantARC.No)
          return getFoo!(WantARC.Yes)().toGC();
      else {
      RCSlice!T x = ...;
          return x;
      }
 }
Nice. Though I'd say just return RC strings and let the client call toGC. They'd need to specify WantARC.No anyway so one way or another the user has to do something.

Andrei
Feb 05 2014
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 2/5/2014 10:31 AM, Andrei Alexandrescu wrote:
 On 2/5/14, 12:03 AM, Nick Sabalausky wrote:
 If that's so, then I'd think lib authors could easily provide APIs that
 offer GC by default and ARC as an opt-in choice with templating:

 enum WantARC { Yes, No }
 auto getFoo(WantARC arc = WantARC.No)() {
      static if(arc == WantARC.No)
          return getFoo!(WantARC.Yes)().toGC();
      else {
      RCSlice!T x = ...;
          return x;
      }
 }
Nice. Though I'd say just return RC strings and let the client call toGC. They'd need to specify WantArc.No anyway so one way or another the user has to do something.
Note the default parameter of WantARC.No. Therefore:

// GC:
T[] fooGC = getFoo(); // Nothing special specified

// ARC:
RCSlice!T fooARC = getFoo!(WantARC.Yes)();

Or am I missing something?
Feb 05 2014
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
My understanding was that ARC completely replaces GC and 
everything including slices becomes refcounted. Is having mixed 
incompatible GC and ARC code and trying to get them to interoperate 
a good idea? Can you sell such mixed code to ARC guys?
Feb 04 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 11:38 PM, Kagamin wrote:
 My understanding was that ARC completely replaces GC and everything
 including slices becomes refcounted. Is having mixed incompatible GC and
 ARC code and trying to get them interoperate a good idea? Can you sell
 such mixed code to ARC guys?
In an RC system you must collect cycles. ARC leaves that to the programmer in the form of weak pointers. This particular idea automates that. Andrei
Feb 04 2014
parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 5 February 2014 at 07:45:26 UTC, Andrei 
Alexandrescu wrote:
 In an RC system you must collect cycles. ARC leaves that to the 
 programmer in the form of weak pointers. This particular idea 
 automates that.
Having GC as a backup for cycles doesn't prevent making everything transparently refcounted. As to allocation strategies, the code can just use a compatible allocation strategy.
Feb 05 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 5 February 2014 at 08:06:30 UTC, Kagamin wrote:
 On Wednesday, 5 February 2014 at 07:45:26 UTC, Andrei 
 Alexandrescu wrote:
 In an RC system you must collect cycles. ARC leaves that to 
 the programmer in the form of weak pointers. This particular 
 idea automates that.
Having GC as a backup for cycles doesn't prevent making everything transparently refcounted. As to allocation strategies, the code can just use compatible allocation strategy.
Yes, the GC just needs to check roots for already released blocks, if I am not mistaken.
Feb 05 2014
next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Wednesday, 5 February 2014 at 12:12:01 UTC, Paulo Pinto wrote:
 Yes, the GC just needs to check roots for already released 
 blocks, if I am not mistaken.
Perhaps I lost some explanation in the discussions, but does this improve performance in any way (other than allowing you to disable the GC)?
Feb 05 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-05 12:25:48 +0000, "Francesco Cattoglio" 
<francesco.cattoglio gmail.com> said:

 On Wednesday, 5 February 2014 at 12:12:01 UTC, Paulo Pinto wrote:
 Yes, the GC just needs to check roots for already released blocks, if I 
 am not mistaken.
Perhaps I lost some explanation in the discussions, but does this improve performance in any way (other than allowing you to disable the GC)?
In general, ARC+GC would release memory faster, so you need less memory overall. Fewer garbage memory blocks floating around might make processor caches more efficient. And less memory pressure means the GC itself runs less often, thus fewer pauses, and shorter pauses since there is less memory to scan.

On the other hand, if you allocate everything you need from the start and then just shuffle pointers around, ARC will make things slower.

So there's some good and some bad. It's hard to know exactly how much of each in practice without an implementation to take some measurements with real-world programs. But what's certain is that with ARC the overhead is more evenly distributed in time.

--
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Feb 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 15:25:27 UTC, Michel Fortin 
wrote:
 In general ARC+GC would release memory faster so you need less 
 memory overall. Less garbage memory blocks floating around 
 might make processor caches more efficients. And less memory 
 pressure means the GC itself runs less often, thus less pauses, 
 and shorter pauses since there is less memory to scan.
Which does not fix a single problem mentioned in the embedded/gamedev threads. The difference between "somewhat less" and "reliably constrained" is beyond measure.
Feb 05 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 15:46:21 UTC, Dicebot wrote:
 Which does not fix a single problem mentioned in 
 embedded/gamedev threads. Difference between "somewhat less" 
 and "reliably constrained" is beyond measure.
If you write code in a way that does not create many cycles, you don't have to call the GC at all. So getting the GC out of the implicit allocations the language makes might be the most important thing, but how much memory is wasted over an hour that way?

A game should perhaps run for 1 hour without a hiccup; ARC might be good enough if RC collects 98% of all garbage.

A real-time audio application should run for 12 hours without a hiccup… you probably want a GC-free audio callback.

A real-time server that monitors some vital resource should run for hours without a hiccup... You either want a real-time GC or no GC.

Different scenarios have different needs.
Feb 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 15:57:51 UTC, Ola Fosheim 
Grøstad wrote:
 If you write code in a way that does not create much cycles you 
 don't have to call the GC at all. So getting the GC out of the 
 implicit allocations the language make might be the most 
 important thing, but how much memory is wasted over an hour 
 that way?
It is up to the programmer to decide. Right now he does not have a choice, and sometimes you can't afford to have GC in your program at all (as in, can't have it linked into the binary), not just can't call collection cycles. Having a sane fallback is very desirable.

The proposed solution does not seem to save you from uncontrollably long collection cycles anyway, as it still uses the same memory pool, so I don't see how it can help even games, not even speaking of more demanding applications.
 A game should perhaps run for 1 hour without a hiccup, ARC 
 might be good enough if RC collect 98% of all garbage.

 A real time audio application should run for 12 hours without a 
 hiccup… you probably want a GC free audio callback.

 A real time server that monitors some vital resource should run 
 for hours without a hiccup... You either want a real time GC or 
 no GC.

 Different scenarios have different needs.
Haven't you just basically confirmed my opinion? :)
Feb 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 16:04:06 UTC, Dicebot wrote:
 It is up to programmer to decide. Right now he does not have a 
 choice and sometimes you can't afford to have GC in your 
 program at all (as in can't have it linked to the binary), not 
 just can't call collection cycles. Having sane fallback is very 
 desired.
Yes, if D is going to be a system-level programming language, then there is no other option.
 Proposed solution does not seem to save you from uncontrollably 
 long collection cycles anyway as it still uses same memory pool 
 so I don't see how it can help even games, not even speaking 
 about more demanding applications.
Well, for games and game servers I think a 100ms delay once or twice per hour is inconsequential in terms of impact. If you can reduce the GC load by various means it might work out for most applications:

1. Reduce the set considered for GC by having the GC not scan paths that are known to be covered by RC.

2. Improve the speed of the GC by avoiding interior pointers etc.

3. Reduce the number of calls to the GC by having RC take care of the majority of the memory releases.

4. Have local GC by collecting roots of nodes that are known to create cycles.

I don't think ARC is an option for OS-level development and critical applications anyway. :-)
 Different scenarios have different needs.
Haven't you just basically confirmed my opinion? :)
In a way. :-) But what if the question is this: how can you, in a pragmatic way, come up with a solution that covers most soft real-time applications?

A compiler switch that defaults to RC (i.e. turns standard GC features into standard RC features) could in theory get you pretty close, but I think clever RC/GC memory management requires whole-program analysis…
Feb 05 2014
parent reply "ixid" <nuaccount gmail.com> writes:
On Wednesday, 5 February 2014 at 16:50:40 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 5 February 2014 at 16:04:06 UTC, Dicebot wrote:
 It is up to programmer to decide. Right now he does not have a 
 choice and sometimes you can't afford to have GC in your 
 program at all (as in can't have it linked to the binary), not 
 just can't call collection cycles. Having sane fallback is 
 very desired.
Yes, if D is going to be a system-level programming language, then there is no other option.
 Proposed solution does not seem to save you from 
 uncontrollably long collection cycles anyway as it still uses 
 same memory pool so I don't see how it can help even games, 
 not even speaking about more demanding applications.
Well, for games and game servers I think a 100ms delay once or twice per hour is inconsequential in terms of impact. If you can reduce the GC load by various means it might work out for most applications:

1. Reduce the set considered for GC by having the GC not scan paths that are known to be covered by RC.

2. Improve the speed of the GC by avoiding interior pointers etc.

3. Reduce the number of calls to the GC by having RC take care of the majority of the memory releases.

4. Have local GC by collecting roots of nodes that are known to create cycles.

I don't think ARC is an option for OS-level development and critical applications anyway. :-)
 Different scenarios have different needs.
Haven't you just basically confirmed my opinion? :)
In a way. :-) But what if the question is this: how can you, in a pragmatic way, come up with a solution that covers most soft real-time applications?

A compiler switch that defaults to RC (i.e. turns standard GC features into standard RC features) could in theory get you pretty close, but I think clever RC/GC memory management requires whole-program analysis…
For a competitive game 100ms delays during gameplay would be completely unacceptable.
Feb 08 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 8 February 2014 at 10:21:27 UTC, ixid wrote:
 For a competitive game 100ms delays during gameplay would be 
 completely unacceptable.
But inconsequential, in the economic sense: you cannot demand a refund for such a tiny deficiency.
Feb 08 2014
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 08.02.2014 11:21, schrieb ixid:
 On Wednesday, 5 February 2014 at 16:50:40 UTC, Ola Fosheim Grøstad wrote:
 On Wednesday, 5 February 2014 at 16:04:06 UTC, Dicebot wrote:
 It is up to programmer to decide. Right now he does not have a choice
 and sometimes you can't afford to have GC in your program at all (as
 in can't have it linked to the binary), not just can't call
 collection cycles. Having sane fallback is very desired.
Yes, if D is going to be a system-level programming language, then there is no other option.
 Proposed solution does not seem to save you from uncontrollably long
 collection cycles anyway as it still uses same memory pool so I don't
 see how it can help even games, not even speaking about more
 demanding applications.
Well, for games and game servers I think a 100ms delay once or twice per hour is inconsequential in terms of impact. If you can reduce the GC load by various means it might work out for most applications:

1. Reduce the set considered for GC by having the GC not scan paths that are known to be covered by RC.

2. Improve the speed of the GC by avoiding interior pointers etc.

3. Reduce the number of calls to the GC by having RC take care of the majority of the memory releases.

4. Have local GC by collecting roots of nodes that are known to create cycles.

I don't think ARC is an option for OS-level development and critical applications anyway. :-)
 Different scenarios have different needs.
Haven't you just basically confirmed my opinion? :)
In a way. :-) But what if the question is this: how can you, in a pragmatic way, come up with a solution that covers most soft real-time applications?

A compiler switch that defaults to RC (i.e. turns standard GC features into standard RC features) could in theory get you pretty close, but I think clever RC/GC memory management requires whole-program analysis…
For a competitive game 100ms delays during gameplay would be completely unacceptable.
In the stock exchange world there are Java and .NET systems doing transactions faster than that. 100ms is too much; money is lost if the price changes in the meantime.

--
Paulo
Feb 08 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 8 February 2014 at 16:41:42 UTC, Paulo Pinto wrote:
 In the stock exchange there are Java and .NET systems doing 
 transactions at faster speed than that.

 100ms is too much money to be lost if the transaction course 
 changes.
It might be worth referring back to Don's talk at last year's D conference. His company does targeted advertising, and if I remember correctly, their entire transaction response time limit, including time on the wire, is around 40ms. It's small amounts rather than potentially millions per transaction, but either way, for many network services these days, profit is directly tied to exceedingly rapid response times.

Even for cloud services, let's say a user is willing to accept a 500ms transaction time, including transport. Now let's say the cloud service has to talk to 5 other services to process the request, each of which has a normal transaction time of 50ms. If one occasionally spikes to 250ms, that's a problem for that one request, but it's an even bigger problem for all the requests piling up behind it, since in this case the entire process is blocking on a garbage collection. Under very heavy load, the process may not even be able to recover. You can see this in Netty for Java, as it has a fixed request queue size (it defaults to something like 5) and once this limit is reached it just starts rejecting requests. A long collection means request failures.
Feb 09 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 01:46, Dicebot <public dicebot.lv> wrote:

 On Wednesday, 5 February 2014 at 15:25:27 UTC, Michel Fortin wrote:

 In general ARC+GC would release memory faster so you need less memory
 overall. Less garbage memory blocks floating around might make processor
 caches more efficients. And less memory pressure means the GC itself runs
 less often, thus less pauses, and shorter pauses since there is less memory
 to scan.
Which does not fix a single problem mentioned in embedded/gamedev threads. Difference between "somewhat less" and "reliably constrained" is beyond measure.
The problem is completely solved; you turn the backing GC off. Devs are responsible for correct weak pointer attribution.
Feb 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 16:03:56 UTC, Manu wrote:
 The problem is completely solved; you turn the backing GC off. 
 Devs are
 responsible for correct weak pointer attribution.
What does it give you over the current situation with the GC switched off and RefCounted used everywhere? Language features will still leak GC memory.
Feb 05 2014
parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 02:05, Dicebot <public dicebot.lv> wrote:

 On Wednesday, 5 February 2014 at 16:03:56 UTC, Manu wrote:

 The problem is completely solved; you turn the backing GC off. Devs are
 responsible for correct weak pointer attribution.
What does it give you over current situation with GC switched off and RefCounted used everywhere? Language features will still leak GC memory.
Huh? Why would they? They don't create cycles, and would clean up reliably.
Feb 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 16:13:33 UTC, Manu wrote:
 Huh? Why would they? They don't create cycles, and would clean 
 up reliably.
Because they still return T* and not RC!T? Andrei's post speaks purely about an extra library type and does not mention the possibility of making it the default allocation type for the language. Or is it just silently assumed?
Feb 05 2014
parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 02:16, Dicebot <public dicebot.lv> wrote:

 On Wednesday, 5 February 2014 at 16:13:33 UTC, Manu wrote:

 Huh? Why would they? They don't create cycles, and would clean up
 reliably.
Because they still return T* and not RC!T ? Andrei's post speaks purely about extra library type and does not mention about possibility to make it default allocation type for language. Or it is just silently assumed?
Dunno, but I don't think any solution which uses RC!T will ever be acceptable. It basically defeats the whole purpose.
Feb 05 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 17:40:35 UTC, Manu wrote:
 Dunno, but I don't think any solution which uses RC!T will ever 
 be
 acceptable. It basically defeats the whole purpose.
I feel terribly confused now :( Andrei, can you describe your intention with some big picture context?
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 9:51 AM, Dicebot wrote:
 On Wednesday, 5 February 2014 at 17:40:35 UTC, Manu wrote:
 Dunno, but I don't think any solution which uses RC!T will ever be
 acceptable. It basically defeats the whole purpose.
I feel terribly confused now :( Andrei, can you describe your intention with some big picture context?
Phobos needs to be able to return allocated memory without creating litter. With RCSlice!T the ownership is passed back to the user. The user can continue tracking it by using the reference counting built into RCSlice, or make the object "immortal" by calling .toGC on it.

It's simple, really. It's all about libraries returning (and sometimes accepting) slices.

Andrei
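
(Concretely, both call sites against a hypothetical library function, reusing the RCSlice sketch from the opening post:)

// Hypothetical library routine returning an RC slice:
RCSlice!char makeGreeting()
{
    auto s = RCSlice!char(5);
    s.payload[] = "hello";   // fill the buffer
    return s;                // ownership travels back to the caller
}

void client()
{
    auto rc = makeGreeting();            // keep tracking via RC
    char[] gc = makeGreeting().toGC();   // opt out; the GC takes over
}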
Feb 05 2014
next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
How would the syntax for creating an RC slice look?
int[] arr = new int[100];
or
somewhat ugly as RCSlice!int arr = new int[100];
?
Or (as I fear) RCSlice!int arr = RCSlice!int(100); ?
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 10:38 AM, Namespace wrote:
 How would look the syntax for creating a RC slice?
 int[] arr = new int[100];
 or
 somewhat ugly as RCSlice!int arr = new int[100];
 ?
 Or (as I fear) RCSlice!int arr = RCSlice!int(100); ?
Just call a function! The "new" syntax for arrays is syntactically bankrupt anyway (can't distinguish between static and dynamic arrays). A historical mistake. Andrei
Feb 05 2014
parent "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 5 February 2014 at 18:42:55 UTC, Andrei 
Alexandrescu wrote:
 On 2/5/14, 10:38 AM, Namespace wrote:
 How would look the syntax for creating a RC slice?
 int[] arr = new int[100];
 or
 somewhat ugly as RCSlice!int arr = new int[100];
 ?
 Or (as I fear) RCSlice!int arr = RCSlice!int(100); ?
Just call a function! The "new" syntax for arrays is syntactically bankrupt anyway (can't distinguish between static and dynamic arrays). A historical mistake. Andrei
Thanks. So I can still rely on my own code. :)
Feb 05 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 18:33:54 UTC, Andrei 
Alexandrescu wrote:
 Phobos needs to be able to return allocated memory without 
 creating litter. With RCSlice!T the ownership is passed back to 
 the user. The user can continue tracking it by using reference 
 counting built into RCSlice, or make the object "immortal" by 
 calling .toGC against it.
Would it be possible to use RCSlice for all reference counting? I.e. that a reference counter for a struct would be a RCSlice of fixed length 1? If so then maybe it would be nice, but with a different name.
Feb 05 2014
next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 5 February 2014 at 18:45:28 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 5 February 2014 at 18:33:54 UTC, Andrei 
 Alexandrescu wrote:
 Phobos needs to be able to return allocated memory without 
 creating litter. With RCSlice!T the ownership is passed back 
 to the user. The user can continue tracking it by using 
 reference counting built into RCSlice, or make the object 
 "immortal" by calling .toGC against it.
Would it be possible to use RCSlice for all reference counting? I.e. that a reference counter for a struct would be a RCSlice of fixed length 1? If so then maybe it would be nice, but with a different name.
shared_ptr? ;) Welcome to C++.
Feb 05 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 18:51:05 UTC, Namespace wrote:
 On Wednesday, 5 February 2014 at 18:45:28 UTC, Ola Fosheim 
 Grøstad wrote:
 If so then maybe it would be nice, but with a different name.
shared_ptr? ;) Welcome to C++.
Yes! :-) Even better: Let the length be 0 for weak pointers after the resource is released. Not so bad, actually.
Feb 05 2014
prev sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
Would it not be possible to add an "int rc" to the internal Array 
struct, so that int[] arr = [1, 2, 3]; is ref counted by 
default? If you want it to be immortal, call toGC(arr), or with 
UFCS arr.toGC(), which sets the rc to uint.max / 2.
That would keep the nice syntax. :)
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 19:04:10 UTC, Namespace wrote:
 Would it not be possible to add an "int rc" to the internal 
 Array struct? So that int[] arr = [1, 2, 3]; is ref counted per 
 default?
Please no (it'd also have to be int* rc btw). This makes slices fatter than they need to be. We should be looking at reducing costs, not shuffling them around or adding to them.
Feb 05 2014
parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 5 February 2014 at 19:15:41 UTC, Adam D. Ruppe 
wrote:
 On Wednesday, 5 February 2014 at 19:04:10 UTC, Namespace wrote:
 Would it not be possible to add an "int rc" to the internal 
 Array struct? So that int[] arr = [1, 2, 3]; is ref counted 
 per default?
Please no (it'd also have to be int* rc btw). This makes slices fatter than they need to be. We should be looking at reducing costs, not shuffling them around or adding to them.
Only if you change the current implementation from this:

----
struct Array {
    void* ptr;
    size_t length;
}
----

to this:

----
struct Array {
    void* ptr;
    int* rc;
    size_t length;
}
----

But we could also implement it like this:

----
struct Array {
    static struct Payload {
        void* ptr;
        int* rc;
    }

    Payload* p;
    size_t length;
}
----

with the same costs.
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 19:36:00 UTC, Namespace wrote:
 	static struct Payload {
 		void* ptr;
 		int* rc;
 	}

 	Payload* p;
Now there's double indirection to get to the data... and you also forgot the necessary postblits and dtors to maintain the reference count. (these could be inlined or elided in some cases, but not all) You can't put the rc at the beginning of the pointer either, since then a = a[1..$] won't work. So the added pointer is unavoidable, your way or my way, both have a cost.
Feb 05 2014
parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Wednesday, 5 February 2014 at 19:42:26 UTC, Adam D. Ruppe 
wrote:
 On Wednesday, 5 February 2014 at 19:36:00 UTC, Namespace wrote:
 	static struct Payload {
 		void* ptr;
 		int* rc;
 	}

 	Payload* p;
Now there's double indirection to get to the data... and you also forgot the necessary postblits and dtors to maintain the reference count. (these could be inlined or elided in some cases, but not all) You can't put the rc at the beginning of the pointer either, since then a = a[1..$] won't work. So the added pointer is unavoidable, your way or my way, both have a cost.
Hm, you're right. It would have been nice if the nice syntax could be retained, instead of a further unsightly library solution.
Feb 05 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 19:48:58 UTC, Namespace wrote:
 Hm, you're right. Would have been nice if the nice syntax could 
 be retained, instead of further unsightly library solution.
The thing is most slices don't need anything special - they are inspected, but not stored. Since they aren't stored, the allocation isn't this function's problem.

Yesterday, I wrote a post with a function int average(int[] numbers) as an illustration here. numbers might be on the stack, the GC heap, or a refcounted array, and none of that matters. average just looks at it. As long as there isn't something like another thread that frees the memory in the middle of average's execution, it will be fine. (And if there is a magic thread freeing things willy-nilly, now that's a real WTF!)

This is why the borrowed idea (implemented by escape analysis in my mind, I think that would work and get us most of the benefits without all of Rust's complexity) is nice: with a borrowed pointer, you explicitly know freeing it isn't your problem. You don't have to count or carry a refcount, you don't have to run the GC, you don't have to call free. You can take a lightweight slice and use it with confidence... as long as it doesn't escape the scope and thus accidentally stick around after the function returns.
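
(The average function referenced above, spelled out - any allocation scheme can call it safely precisely because the slice never escapes:)

int average(int[] numbers)
{
    // Only inspects the slice, never stores it, so whether the memory
    // is stack, GC heap, or refcounted is irrelevant here.
    int sum = 0;
    foreach (n; numbers)
        sum += n;
    return numbers.length ? sum / cast(int) numbers.length : 0;
}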
Feb 05 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 10:45 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 5 February 2014 at 18:33:54 UTC, Andrei Alexandrescu wrote:
 Phobos needs to be able to return allocated memory without creating
 litter. With RCSlice!T the ownership is passed back to the user. The
 user can continue tracking it by using reference counting built into
 RCSlice, or make the object "immortal" by calling .toGC against it.
Would it be possible to use RCSlice for all reference counting? I.e., could a reference counter for a struct be an RCSlice of fixed length 1? If so then maybe it would be nice, but with a different name.
This is only focusing on slices. Andrei
Feb 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 19:37:29 UTC, Andrei 
Alexandrescu wrote:
 This is only focusing on slices.
That is not going to work out well, because you should be able to decrease the ref count of a pointer to something arbitrary (void) without knowing the object type. And the ref count object should persist after destruction of the object in the case of weak pointers.
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 11:45 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 5 February 2014 at 19:37:29 UTC, Andrei Alexandrescu wrote:
 This is only focusing on slices.
That is not going to work out well, because you should be able to decrease the ref count of a pointer to something arbitrary (void) without knowing the object type.
I said it a couple of times, and it seems it bears repeating: the charter of this is solely to create a slice type that takes care of itself. What this is not is a general solution for managing internal pointers or pointers to arbitrary objects. Andrei
Feb 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 20:04:31 UTC, Andrei 
Alexandrescu wrote:
 I said it a couple of times, and it seems it bears repeating: 
 the charter of this is solely to create a slice type that takes 
 care of itself. What this is not is a general solution for 
 managing internal pointers or pointers to arbitrary objects.
Then I think this is starting at the wrong end of the problem space. Slices are tiny dots in this picture. It would be better to have an RC compiler switch and version{} statements in the libraries rather than having extensive special casing of RC/GC vs pure GC types in user code.
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 12:20 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 5 February 2014 at 20:04:31 UTC, Andrei Alexandrescu wrote:
 I said it a couple of times, and it seems it bears repeating: the
 charter of this is solely to create a slice type that takes care of
 itself. What this is not is a general solution for managing internal
 pointers or pointers to arbitrary objects.
 Then I think this is starting at the wrong end of the problem space. Slices are tiny dots in this picture.
I'm confused. Slices are the bulkiest allocation there is: chunks of contiguous data.
 It would be better to have an RC compiler switch and version{} statements
 in the libraries rather than having extensive special casing of RC/GC
 vs pure GC types in user code.
I don't see where the extensive special casing comes from. User code decides which way they want and they go for it. Simple aliases make certain decisions swappable. Andrei
Feb 05 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 22:18:33 UTC, Andrei 
Alexandrescu wrote:
 I'm confused. Slices are the bulkiest allocation there is: 
 chunks of contiguous data.
IMHO there should be one highly optimized low level ref-count structure that works everywhere, has compiler support, and has an attractive syntax. That seems to be the most obvious solution. If not, you might as well write your own array implementation, with better performance for your domain, or use an existing C++ framework that is geared towards your application domain.

If we are talking games and simulation, then you probably have your own allocator for search data structures (engine level) which are kind of fixed, predictable, tuneable, highly optimized, and uniquely owned. You allocate more, but probably seldom release.

What is allocated/deallocated is "scripty" evolutionary stuff (content level). Dynamic game-world stuff that comes in all kinds of shapes and sizes: enemies, weapons, cars, events etc. Objects and their shifty relationships. 100,000+ objects for a game server. Stuff that is heterogeneous and is constantly modified in order to improve game play.
Feb 05 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 18:33:54 UTC, Andrei 
Alexandrescu wrote:
 Phobos needs to be able to return allocated memory without 
 creating litter. With RCSlice!T the ownership is passed back to 
 the user. The user can continue tracking it by using reference 
 counting built into RCSlice, or make the object "immortal" by 
 calling .toGC against it.
Ok, clear. It will work then as far as I can see. I have never been worried by lack of control over generated garbage in Phobos - it was the other way around, worries about control of how it is allocated. So yes, this is likely to solve the issue you have mentioned but does not seem to be of any interest to me.
Feb 05 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 10:55 AM, Dicebot wrote:
 On Wednesday, 5 February 2014 at 18:33:54 UTC, Andrei Alexandrescu wrote:
 Phobos needs to be able to return allocated memory without creating
 litter. With RCSlice!T the ownership is passed back to the user. The
 user can continue tracking it by using reference counting built into
 RCSlice, or make the object "immortal" by calling .toGC against it.
Ok, clear. It will work then as far as I can see.
 I have
 never been worried by lack of control over generated garbage in Phobos -
 it was the other way around, worries about control of how it is allocated.

 So yes, this is likely to solve the issue you have mentioned but does not
 seem to be of any interest to me.
Noted. That part needs to be addressed by allocators. Andrei
Feb 05 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 9:40 AM, Manu wrote:
 On 6 February 2014 02:16, Dicebot <public dicebot.lv
 <mailto:public dicebot.lv>> wrote:

     On Wednesday, 5 February 2014 at 16:13:33 UTC, Manu wrote:

         Huh? Why would they? They don't create cycles, and would clean
         up reliably.


     Because they still return T* and not RC!T? Andrei's post speaks
     purely about an extra library type and does not mention the
     possibility of making it the default allocation type for the language.

     Or it is just silently assumed?


 Dunno, but I don't think any solution which uses RC!T will ever be
 acceptable.
Why not? What would be deemed acceptable? And who's the acceptor?
 It basically defeats the whole purpose.
What is the purpose? Andrei
Feb 05 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 12:12:01 UTC, Paulo Pinto wrote:
 Yes, the GC just needs to check roots for already released 
 blocks, if I am not mistaken.
Yes, when they go to non-zero. This is the scheme used in PHP's ARC/GC solution, published in this paper:

http://researcher.watson.ibm.com/researcher/files/us-bacon/Bacon01Concurrent.pdf

You push roots that are candidates onto a queue when the counter is decreased to nonzero, then you do a concurrent scan when you have a set of roots to scan for cycles. So you probably need a clever ARC to reduce the scanning.

I think, however, that for most programs that use ARC you don't do the GC at all. For long lived processes such as servers you might run it using heuristics (based on memory headroom or during the night). According to the paper above the amount of cyclic garbage tends to be low, but varies a great deal. They cite another paper relating to Inferno that supposedly claims that RC caught 98% of the garbage. So the need for cycle collection is rather application specific. Perl has traditionally not caught cycles at all, but then again Perl programs tend to be short lived.

----

There are also some papers on near real time GC, such as Staccato and Metronome, on David F. Bacon's list:

http://researcher.watson.ibm.com/researcher/view_pubs.php?person=us-bacon&t=1
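A minimal sketch of the deferred decrement described above, following the cited paper's idea (names are mine; the real algorithm also marks buffered objects so a candidate is not queued twice):

----
struct RCObject
{
    int count;
    // payload and child references elided
}

RCObject*[] cycleCandidates;  // possible cycle roots, scanned concurrently later

void decRef(RCObject* o)
{
    if (--o.count == 0)
    {
        import core.stdc.stdlib : free;
        free(o);               // definitely dead: reclaim now (children elided)
    }
    else
    {
        cycleCandidates ~= o;  // survived a decrement: may root a dead cycle
    }
}
----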
Feb 05 2014
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 04 Feb 2014 23:38:54 -0800, Kagamin <spam here.lot> wrote:

 My understanding was that ARC completely replaces GC and everything  
 including slices becomes refcounted. Is having mixed incompatible GC and  
 ARC code and trying to get them interoperate a good idea? Can you sell  
 such mixed code to ARC guys?
I've been asking myself those questions a lot over the past couple days. If GC backed ARC is such a brilliant idea, how come nobody has done it yet? I mean, it is a rather obvious solution.

What I am confident of is that it is going to create a metric ton of implementation gotchas for the compiler to sort out (as if we don't have enough open trouble tickets already) and it is going to pretty steeply increase the complexity of the language. I thought the whole point of D was to not be C++, particularly in terms of complexity? All for a potentially undeliverable promise of a little more speed and fewer (not none) collection pauses.

I have a suspicion that the reason that it hasn't been done is that it doesn't actually improve the overall speed and quite possibly reduces it. It will take months of engineering effort just to ship the buggy initial functionality, and many more months and possibly years to sort out all the edge cases. This will in turn significantly reduce the bandwidth going towards fixing the features we already have that don't work right and improving the GC so that it isn't such an eyesore.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
Feb 05 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 5 February 2014 at 09:05:02 UTC, Adam Wilson wrote:
 On Tue, 04 Feb 2014 23:38:54 -0800, Kagamin <spam here.lot> 
 wrote:

 My understanding was that ARC completely replaces GC and 
 everything including slices becomes refcounted. Is having 
 mixed incompatible GC and ARC code and trying to get them 
 interoperate a good idea? Can you sell such mixed code to ARC 
 guys?
I've been asking myself those questions a lot over the past couple days. If GC backed ARC is such a brilliant idea, how come nobody has done it yet? I mean, it is a rather obvious solution.

What I am confident of is that it is going to create a metric ton of implementation gotchas for the compiler to sort out (as if we don't have enough open trouble tickets already) and it is going to pretty steeply increase the complexity of the language. I thought the whole point of D was to not be C++, particularly in terms of complexity? All for a potentially undeliverable promise of a little more speed and fewer (not none) collection pauses.

I have a suspicion that the reason that it hasn't been done is that it doesn't actually improve the overall speed and quite possibly reduces it. It will take months of engineering effort just to ship the buggy initial functionality, and many more months and possibly years to sort out all the edge cases. This will in turn significantly reduce the bandwidth going towards fixing the features we already have that don't work right and improving the GC so that it isn't such an eyesore.
They have done it. It is how the systems programming language Cedar, for the Mesa operating system at Xerox PARC, used to work. There are papers about it that I already posted multiple times.

-- 
Paulo
Feb 05 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 1:05 AM, Adam Wilson wrote:
 On Tue, 04 Feb 2014 23:38:54 -0800, Kagamin <spam here.lot> wrote:

 My understanding was that ARC completely replaces GC and everything
 including slices becomes refcounted. Is having mixed incompatible GC
 and ARC code and trying to get them interoperate a good idea? Can you
 sell such mixed code to ARC guys?
I've been asking myself those questions a lot over the past couple days. If GC backed ARC is such a brilliant idea, how come nobody has done it yet?
It's a classic. Andrei
Feb 05 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 5 February 2014 09:51, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Consider we add a library slice type called RCSlice!T. It would have the
 same primitives as T[] but would use reference counting through and
 through. When the last reference count is gone, the buffer underlying the
 slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the slice and
 disables the reference counter (e.g. by setting it to uint.max/2 or
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.
This doesn't excite me at all.

What about all other types of allocations? I don't want to mangle my types. What about closures? What about allocations from phobos? What about allocations from 3rd party libs that I have no control over?

I don't like that it requires additional specification, and special treatment to have it detach to the GC. There's nothing transparent about that. Another library solution like RefCounted doesn't address the problem.

Counter question; why approach it this way? Is there a reason that it needs to be of one kind or the other?
Feb 05 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 12:40:13 UTC, Manu wrote:
 Counter question; why approach it this way?
 Is there a reason that it needs to be of one kind or the other?
Sure, you could make all allocations with RC enabled make space for a counter at a negative offset (ptr-offset), but that would not work with C structs or internal pointers and aligned data might hog a bit of extra space.

You also need to handle internal pointers to embedded objects and arrays of objects (not pointers to objects). How do you ref count those? I guess you could switch all pointers into (allocated_base,offset) tuples and special case (ptr,0).

You could do like C++, have the ref counter be a separate object. Then record the properties of the pointer (such as offset), then have special magic checks for deallocation: testing for internal reference then decrease the real ref counter of the parent rather than deallocate. This is quite compatible with having a GC, you could free the object only when proven safe and just ignore it and leave it for GC when you cannot prove it.

IMHO you probably need to redesign the language in order to support transparent or efficient automatic memory handling. If you retain C-legacy, you also retain manual memory handling or a limited set of opportunities for automatic garbage collection.
Feb 05 2014
parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 00:04, <"Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com>" puremagic.com> wrote:

 On Wednesday, 5 February 2014 at 12:40:13 UTC, Manu wrote:

 Counter question; why approach it this way?
 Is there a reason that it needs to be of one kind or the other?
Sure, you could make all allocations with RC enabled make space for a counter at a negative offset (ptr-offset), but that would not work with C structs or internal pointers and aligned data might hog a bit of extra space.
Perhaps match a very particular and unlikely bit pattern in the negative offset to know if it is RC or not? Or I wonder if there's opportunity to pinch a single bit from pointers to mark that it is raw or RC allocated? Probably fine on 64bit. 32bit probably needs to match a bit pattern or something.

Aligned data is a challenge. I have often wondered if it would be feasible to access the RC via a pointer hash or something, and keep it in a side table... sounds tricky, but I wonder if it's possible.

 You also need to handle internal pointers to embedded objects and arrays of
 objects (not pointers to objects). How do you ref count those? I guess you
 could switch all pointers into (allocated_base,offset) tuples and special
 case (ptr,0).
That's a good point. This is probably the trickiest detail. Maybe a clever way to make any pointer within the allocated range hash to the right index in the side table I referred to above. That sounds like a tricky hashing function, and probably slow. Fat pointers might be necessary. That's a bit annoying. Hmmm... Rather than (allocated_base, offset), I suspect (pointer, offset_from_base) would be better; typical dereferences would have no penalty.

 You could do like C++, have the ref counter be a separate object. Then
 record the properties of the pointer (such as offset), then have special
 magic checks for deallocation: testing for internal reference then decrease
 the real ref counter of the parent rather than deallocate. This is quite
 compatible with having a GC, you could free the object only when proven
 safe and just ignore it and leave it for GC when you cannot prove it.
You mean like the smart pointer double indirection? I really don't like the double indirection, but it's a possibility. Or did I misunderstand?

 IMHO you probably need to redesign the language in order to support
 transparent or efficient automatic memory handling. If you retain C-legacy,
 you also retain manual memory handling or a limited set of opportunities for
 automatic garbage collection.
I'm sure there's a clever solution out there which would allow the ARC to detect if it's a raw C pointer or not...
Feb 05 2014
next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-05 15:01:04 +0000, Manu <turkeyman gmail.com> said:

 Aligned data is a challenge. I have often wondered if it would be feasible
 to access the RC via a pointer hash or something, and keep it in a side
 table... sounds tricky, but I wonder if it's possible.
That's what Apple is doing (as seen in the CoreFoundation source code). They actually have eight such tables on OS X, each protected by a spin lock. The right table is chosen according to a few bits of the pointer. The obvious advantage is you can put immutable data in read-only memory without making the reference count immutable. The downside is that it's more complicated to access the counter.
 Fat pointers might be necessary. That's a bit annoying. Hmmm...
Since we're talking about adding reference counts to GC-allocated memory, you could use the GC to find the base address of the memory block. What is the cost of that?
 I'm sure there's a clever solution out there which would allow the ARC to
 detect if it's a raw C pointer or not...
Ask the GC for the base address of the memory block. If it does not come from a GC block, there's no reference counter to update.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Feb 05 2014
parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 01:22, Michel Fortin <michel.fortin michelf.ca> wrote:

 On 2014-02-05 15:01:04 +0000, Manu <turkeyman gmail.com> said:

  Aligned data is a challenge. I have often wondered if it would be feasible
 to access the RC via a pointer hash or something, and keep it in a side
 table... sounds tricky, but I wonder if it's possible.
That's what Apple is doing (as seen in the CoreFoundation source code). They actually have eight such tables on OS X, each protected by a spin lock. The right table is chosen according to a few bits of the pointer. The obvious advantage is you can put immutable data in read-only memory without making the reference count immutable. The downside is that it's more complicated to access the counter.
Indeed. Good to know someone else is doing it. Sounds like a realistic option then :)

 Fat pointers might be necessary. That's a bit annoying. Hmmm...

 Since we're talking about adding reference counts to GC-allocated memory,
 you could use the GC to find the base address of the memory block. What is
 the cost of that?
Can you elaborate? How would the GC know this?

 I'm sure there's a clever solution out there which would allow the ARC to
 detect if it's a raw C pointer or not...
Ask the GC for the base address of the memory block. If it does not come from a GC block, there's no reference counter to update. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca
Feb 05 2014
parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-05 15:52:36 +0000, Manu <turkeyman gmail.com> said:

 On 6 February 2014 01:22, Michel Fortin <michel.fortin michelf.ca> wrote:
 
 On 2014-02-05 15:01:04 +0000, Manu <turkeyman gmail.com> said:
 
 Since we're talking about adding reference counts to GC-allocated memory,
 you could use the GC to find the base address of the memory block. What is
 the cost of that?
Can you elaborate? How would the GC know this?
How do you think the GC tracks internal pointers today? ;-)

Just call addrOf if you need to know. We'd have to call it too for incrementing/decrementing the reference count. It'd work, even though it seems rather heavyweight. The slow part of that function is the call to findPool, which does a binary search:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L1581

That said, the GC is already doing that for every word of memory it scans, so it might not be as heavyweight as it seems (especially if the GC has less to scan later because of ARC). See the mark function:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L2274

I'd tend to say that if you're in control of the reference count system, the code generation, the allocation pools, as well as the GC algorithm, you can probably do something that'll work well enough, by architecting things to work well together. But it requires an integrated approach.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
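A sketch of how such a check might look (illustration only; GC.addrOf is real druntime API, the rest is hypothetical):

----
import core.memory : GC;

void decRefMaybe(void* p)
{
    void* base = GC.addrOf(p);  // base address of the GC block, or null
    if (base is null)
        return;                 // raw C pointer or stack memory: no counter to touch
    // ... locate the reference counter recorded for `base` and decrement it
    //     (scheme-specific, as discussed above)
}
----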
Feb 05 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 15:01:27 UTC, Manu wrote:
 Or I wonder if there's opportunity to pinch a single bit from 
 pointers to
 mark that it is raw or RC allocated? Probably fine on 64bit.
Yes, on 64 bit that is ok. I think current x86 maps something like 53 bits, the top and bottom half of the 64 bit address space. The middle is unused. Anyway, you could have two heaps if the OS allows you to.
 32bit probably needs to match a bit pattern or something.
Or one could forget about 32 bit for ARC.
 Aligned data is a challenge. I have often wondered if it would 
 be feasible
 to access the RC via a pointer hash or something, and keep it 
 in a side
 table... sounds tricky, but I wonder if it's possible.
If you have your own allocator you probably could? Segment memory regions into allocations of a particular size and have an RC-count index at the start, indexed by the masked MSBs of the pointer address, and have smart pointers that know the object size. Kind of tricky to get acceptable speed, but possible to do.

The problem is that you will get a latency of perhaps 200+ cycles on a cache miss. Then again, you could probably make do with 32 bit counters and they are probably accessed in proximity if they hold the same type of object. One cache line is 64 bytes, so you get 16 counters per cache line. With smart allocation you might get good cache locality of the counters (8MB of L3 cache is quite a bit).

(I guess alignment is primarily a problem when you want 4KiB alignment (page size), maybe not worth worrying about.)
 Rather than (allocated_base, offset), I suspect (pointer, 
 offset_from_base)
 would be better; typical dereferences would have no penalty.
Yes, probably. I was thinking about avoiding GC of internal pointers too. I think scanning might be easier if all pointers point to the allocation base. That way the GC does not have to consider offsets.
 You mean like the smart pointer double indirection? I really 
 don't like the
 double indirection, but it's a possibility. Or did I 
 misunderstand?
Yes:

struct {
    void* object_ptr;
    /* offset */
    uint weakcounter;
    uint strongcounter;
}

The advantage is that it works well with RC ignorant collectors/allocators. "if in doubt just set the pointer to null and forget about freeing".
 I'm sure there's a clever solution out there which would allow 
 the ARC to
 detect if it's a raw C pointer or not...
Well, for a given OS/architecture you could probably always allocate your heap in a fixed memory range on 64 bit systems then test against that range.
Feb 05 2014
parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 01:32, <"Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com>" puremagic.com> wrote:

 On Wednesday, 5 February 2014 at 15:01:27 UTC, Manu wrote:

 Or I wonder if there's opportunity to pinch a single bit from pointers to
 mark that it is raw or RC allocated? Probably fine on 64bit.
Yes, on 64 bit that is ok. I think current x86 maps something like 53 bits, the top and bottom half of the 64 bit address space. The middle is unused. Anyway, you could have two heaps if the OS allows you to.

  32bit probably needs to match a bit pattern or something.

 Or one could forget about 32 bit for ARC.
The applications I describe where it is a necessity will often be 32bit systems.

 Aligned data is a challenge. I have often wondered if it would be feasible
 to access the RC via a pointer hash or something, and keep it in a side
 table... sounds tricky, but I wonder if it's possible.
If you have your own allocator you probably could? Segment memory regions into allocations of a particular size and have an RC-count index at the start, indexed by the masked MSBs of the pointer address, and have smart pointers that know the object size. Kind of tricky to get acceptable speed, but possible to do. The problem is that you will get a latency of perhaps 200+ cycles on a cache miss. Then again, you could probably make do with 32 bit counters and they are probably accessed in proximity if they hold the same type of object. One cache line is 64 bytes, so you get 16 counters per cache line. With smart allocation you might get good cache locality of the counters (8MB of L3 cache is quite a bit).
Cache locality is a problem that can easily be refined. It would just need lots of experimental data.

 (I guess alignment is primarily a problem when you want 4KiB alignment
 (page size), maybe not worth worrying about.)

  Rather than (allocated_base, offset), I suspect (pointer, offset_from_base)
 would be better; typical dereferences would have no penalty.
Yes, probably. I was thinking about avoiding GC of internal pointers too. I think scanning might be easier if all pointers point to the allocation base. That way the GC does not have to consider offsets.

 You mean like the smart pointer double indirection? I really don't like the
 double indirection, but it's a possibility. Or did I misunderstand?
Yes:

struct {
    void* object_ptr;
    /* offset */
    uint weakcounter;
    uint strongcounter;
}

The advantage is that it works well with RC ignorant collectors/allocators. "if in doubt just set the pointer to null and forget about freeing".
I see. Well, I don't like it :) ... but it's an option.

 I'm sure there's a clever solution out there which would allow the ARC to
 detect if it's a raw C pointer or not...
Well, for a given OS/architecture you could probably always allocate your heap in a fixed memory range on 64 bit systems then test against that range.

Feb 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 16:01:09 UTC, Manu wrote:
 On 6 February 2014 01:32, <"Ola Fosheim Grøstad\"
 <ola.fosheim.grostad+dlang gmail.com>" puremagic.com> wrote:
 The applications I describe where it is a necessity will often 
 be 32bit
 systems.
You mean for embedded? Mobile CPUs are going 64 bit…
 Cache locality is a problem that can easily be refined. It 
 would just need
 lots of experimental data.
Well, in that case the math is not so difficult. You could have 1 index every 4MiB, and if your smallest allocation unit is 256 bytes then you get a counter index of 16384 uint32 (64KiB). The access would be easy and something like (probably not 100% correct):

counter_addr = (ptr&~0xffff) + ( (ptr>>12)&0xfffc )
 Yes:

 struct {
     void* object_ptr;
     /* offset */
     uint weakcounter;
     uint strongcounter;
 }

 The advantage is that it works well with RC ignorant
 collectors/allocators. "if in doubt just set the pointer to 
 null and forget
 about freeing".
I see. Well, I don't like it :) ... but it's an option.
The aesthetics aren't great, it is not a minimalist approach, but consider the versatility:

You could put in a function pointer to a deallocator (c-malloc, QT, GTK or some other library deallocator etc) and other kind of meta information that makes you able to treat reference counting in a uniform manner even for external resources. With the right semantics you can have pointers to cached objects that suddenly disappear etc (by using the weak counter in a clever way).
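One way that versatility might look in D (an illustrative sketch; all names are hypothetical):

----
struct RCBlock
{
    void* object_ptr;
    uint weakcounter;
    uint strongcounter;
    void function(void*) deallocate;  // wrapper around C free, a library's
                                      // release call, or null for GC memory
}

void releaseStrong(RCBlock* b)
{
    if (--b.strongcounter == 0 && b.deallocate !is null)
        b.deallocate(b.object_ptr);   // uniform cleanup, even for external resources
    // freeing the RCBlock itself once the weak count also drops is elided
}
----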
Feb 05 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 17:24:46 UTC, Ola Fosheim 
Grøstad wrote:
 The access would be easy and something like (probably not 100% 
 correct):

 counter_addr = (ptr&~0xffff) + ( (ptr>>12)&0xfffc )
It was of course wrong; that would make the smallest allocation unit 16KiB. Anyway, if tuned to the indexed loads of the CPU it would not be all that slow. On x86 you should be able to do something like (pseudo):

uint64 reg1 = ptr & 0xfff....f0000
uint32 reg2 = ptr >> 8
uint64 reg3 = load_effective_address( reg1 + 4*reg2 )
increment( *reg3 )

So only 4-5 cheap instructions for single-threaded counting. You could also use the most significant bit for bookkeeping of single-threaded vs multi-threaded ref counting:

test (*reg3)
if (positive) goto nolock:
lockprefix
nolock:
increment( *reg3 )
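The same lookup written out in D, under the 4MiB-region, 256-byte-unit assumptions above (an illustrative sketch; the counter table is assumed to occupy the first 64KiB of each region, which the allocator would have to skip):

----
uint* counterAddr(void* p)
{
    enum regionSize = 4 * 1024 * 1024;                  // one counter table per 4MiB
    enum unitShift  = 8;                                // 256-byte allocation units
    auto addr = cast(size_t) p;
    auto base = addr & ~(cast(size_t) regionSize - 1);  // start of the region
    auto unit = (addr - base) >> unitShift;             // unit index, 0 .. 16383
    return cast(uint*) base + unit;                     // counters sit at region start
}
----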
Feb 05 2014
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 05 Feb 2014 17:24:44 +0000
schrieb "Ola Fosheim Gr=C3=B8stad"
<ola.fosheim.grostad+dlang gmail.com>:

 You could put in a function pointer to a deallocator (c-malloc,=20
 QT, GTK or some other library deallocator etc) and other kind of=20
 meta information that makes you able to treat reference counting=20
 in a uniform manner even for external resources.
Now that sounds interesting. That's an area where AGC typically doesn't kick in, because it can only see the small amount of memory used on the D wrapper side. --=20 Marco
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 19:09:57 UTC, Marco Leise wrote:
 You could put in a function pointer to a deallocator 
 (c-malloc, QT, GTK or some other library deallocator etc) and 
 other kind of meta information that makes you able to treat 
 reference counting in a uniform manner even for external 
 resources.
Now that sounds interesting.
It sounds to me like... a separate type. The deallocator is the destructor!
Feb 05 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 February 2014 at 19:13:01 UTC, Adam D. Ruppe 
wrote:
 It sounds to me like... a separate type. The deallocator is the 
 destructor!
Yes, but "destructor" of the reference counter object is "decrease count by one", when it goes to zero. Not when it is destroyed. Sure, you can use classes for it, but I don't think you want to. It is better to have a uniform reference counter object that can be fully inlined with no function calls as the general case and then only call the external "deallocator" with itself as parameter when it is present.
Feb 05 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 4:39 AM, Manu wrote:
 On 5 February 2014 09:51, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org <mailto:SeeWebsiteForEmail erdani.org>>
 wrote:

     Consider we add a library slice type called RCSlice!T. It would have
     the same primitives as T[] but would use reference counting through
     and through. When the last reference count is gone, the buffer
     underlying the slice is freed. The underlying allocator will be the
     GC allocator.

     Now, what if someone doesn't care about the whole RC thing and aims
     at convenience? There would be a method .toGC that just detaches the
     slice and disables the reference counter (e.g. by setting it to
     uint.max/2 or whatever).

     Then people who want reference counting say

     auto x = fun();

     and those who don't care say:

     auto x = fun().toGC();


     Destroy.


 This doesn't excite me at all.
 What about all other types of allocations? I don't want to mangle my
 types.
How do you mean that?
 What about closures?
 What about allocations from phobos?
Phobos would get added support for RC slices.
 What
 about allocations from 3rd party libs that I have no control over?
Outside the charter of this discussion.
 I don't like that it requires additional specification, and special
 treatment to have it detach to the GC.
But you didn't like that GC is implicit. Which way do you want it?
 There's nothing transparent about that.
I don't think we can attain 100% transparency.
 Another library solution like
 RefCounted doesn't address the problem.

 Counter question; why approach it this way?
 Is there a reason that it needs to be of one kind or the other?
This is technically possible today. Andrei
Feb 05 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu 
wrote:
 ...

 Destroy.

 Andrei
I simply don't understand what problem this is trying to solve. I can't remember anyone wanting RC just for the sake of RC - it is just a compromise to enable language features that require allocation with non-deterministic lifetime in the absence of a GC. The only thing having RC on top of GC may help with is reducing the overall memory footprint, but that is not an issue anyway. I must be missing something, but this looks perfectly useless.
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 7:10 AM, Dicebot wrote:
 On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu wrote:
 ...

 Destroy.

 Andrei
I simply don't understand what problem this is trying to solve.
Allow people to clean up memory allocated by Phobos. Andrei
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 18:25:31 UTC, Andrei 
Alexandrescu wrote:
 Allow people to clean up memory allocated by Phobos.
A better solution would be for Phobos to allocate less memory. Instead of returning strings, accept output ranges that receive it. Make slices work well as output ranges with control over growing. (We can still offer the simple string returning functions for convenience)
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 10:56 AM, Adam D. Ruppe wrote:
 On Wednesday, 5 February 2014 at 18:25:31 UTC, Andrei Alexandrescu wrote:
 Allow people to clean up memory allocated by Phobos.
A better solution would be for Phobos to allocate less memory. Instead of returning strings, accept output ranges that receive it.
You do figure that complicates usage considerably, right?
 Make slices
 work well as output ranges with control over growing.

 (We can still offer the simple string returning functions for convenience)
I was thinking RCSlice would be a better alternative. Andrei
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 19:39:43 UTC, Andrei 
Alexandrescu wrote:
 You do figure that complicates usage considerably, right?
I don't see much evidence for that. Many, many newer modules in Phobos are currently allocation free yet still pretty easy to use.

A major source of little allocations in my code is std.conv and std.string. But these aren't difficult to change to external allocation, in theory at least:

string s = to!string(50); // GC allocates (I'd keep this for convenience and compatibility)

char[16] buffer;
char[] s = toBuffer(buffer[], 50); // same thing, using a buffer

char[] s = toLowerBuffer(buffer[], "FOO");
assert(buffer.ptr is s);
assert(s == "foo");

That's not hard to use (though remembering that s is a borrowed reference to a stack buffer might be - escape analysis is something we should really have). And it gives full control over both allocation and deallocation.

It'd take some changes in phobos, but so does the RCSlice, sooo yeah, and this actually decouples it from the GC. The tricky part might be making it work with buffers, growable buffers, sink functions, etc., but we've solved similar problems with input ranges.
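For what it's worth, a toBuffer along these lines is already expressible with Phobos's sformat (a sketch; toBuffer itself is the hypothetical name from the snippet above):

----
import std.format : sformat;

char[] toBuffer(char[] buf, int value)
{
    return sformat(buf, "%d", value);  // writes into buf, returns the used slice
}

unittest
{
    char[16] buffer;
    auto s = toBuffer(buffer[], 50);
    assert(s == "50" && s.ptr is buffer.ptr);  // no heap allocation happened
}
----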
 I was thinking RCSlice would be a better alternative.
I very rarely care about when little slices are freed. Large blocks of memory might be another story (I've used malloc+free for a big internal buffer in my png.d after getting memory leaks from false pointers with the GC) but those can be handled on a case-by-case basis. std.base64 for example might make sense to return one of these animals. I don't have a problem with refcounting on principle but most of the time, it just doesn't matter.
Feb 05 2014
next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Wed, 05 Feb 2014 20:18:33 +0000
schrieb "Adam D. Ruppe" <destructionator gmail.com>:

 On Wednesday, 5 February 2014 at 19:39:43 UTC, Andrei 
 Alexandrescu wrote:
 You do figure that complicates usage considerably, right?
I don't see much evidence for that. Many, many newer modules in Phobos are currently allocation free yet still pretty easy to use.

A major source of little allocations in my code is std.conv and std.string. But these aren't difficult to change to external allocation, in theory at least:

string s = to!string(50); // GC allocates (I'd keep this for convenience and compatibility)

char[16] buffer;
char[] s = toBuffer(buffer[], 50); // same thing, using a buffer

char[] s = toLowerBuffer(buffer[], "FOO");
assert(buffer.ptr is s);
assert(s == "foo");
I think using a template parameter to allow for all kinds of allocators (std.allocator) is better. But of course it should have zero overhead for a static-buffer allocator. Or we just special case static buffers _and_ add allocators ;-) But as Andrei said, that discussion is not part of this thread.

The main reason for RCSlice is not returning from Phobos functions, it's passing slices to Phobos functions. You can always create an RCSlice!string with any kind of string, and as Andrei wants them GC-backed anyway, you can just create the RCSlice!string after the function returned a GC allocated string. (As long as the function doesn't keep a reference.) But if we pass an RC'd slice to a phobos function as a normal string, we'd have to stop reference counting. While this doesn't matter for GC-backed slices, it basically means that manually allocated slices would never be freed.

However, I wonder if that's really a problem in phobos. I'd guess most functions accepting slice input don't store a reference. We should probably start documenting that. (Or finish 'scope' as you already said implicitly ;-).
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 20:46:32 UTC, Johannes Pfau 
wrote:
 However, I wonder if that's really a problem in phobos. I'd 
 guess most functions accepting slice input don't store a 
 reference.
 We should probably start documenting that. (Or finish 'scope' 
 as you already said implicitly ;-).
Aye. If the reference never escapes, it doesn't need to be counted or freed (indeed, it really MUST never be freed, since whoever passed you that reference may still be using it and is responsible for freeing it (or passing the buck to the GC)).
Feb 05 2014
next sibling parent "Brad Anderson" <eco gnuk.net> writes:
On Wednesday, 5 February 2014 at 21:00:32 UTC, Adam D. Ruppe 
wrote:
 On Wednesday, 5 February 2014 at 20:46:32 UTC, Johannes Pfau 
 wrote:
 However, I wonder if that's really a problem in phobos. I'd 
 guess most functions accepting slice input don't store a 
 reference.
 We should probably start documenting that. (Or finish 'scope' 
 as you already said implicitly ;-).
Aye. If the reference never escapes, it doesn't need to be counted or freed (indeed, it really MUST never be freed, since whoever passed you that reference may still be using it and is responsible for freeing it (or passing the buck to the GC)).
I wonder if passing it in as "scope" could be utilized in some way to forgo reference count increments/decrements.
Feb 05 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 5 February 2014 at 21:00:32 UTC, Adam D. Ruppe 
wrote:
 On Wednesday, 5 February 2014 at 20:46:32 UTC, Johannes Pfau 
 wrote:
 However, I wonder if that's really a problem in phobos. I'd 
 guess most functions accepting slice input don't store a 
 reference.
 We should probably start documenting that. (Or finish 'scope' 
 as you already said implicitly ;-).
Aye. If the reference never escapes, it doesn't need to be counted or freed (indeed, it really MUST never be freed, since whoever passed you that reference may still be using it and is responsible for freeing it (or passing the buck to the GC)).
This. I think simply implementing scope will be a much more important and effective step in optimizing D's memory management model than the current roadmap Andrei posted recently, simply because it enables non-breaking enhancements to Phobos that provide both allocation-free and safe functionality at the same time, and it is completely orthogonal to the underlying allocation model.
Feb 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
tl;dr: it does not matter whether this proposal works or not;
it is simply effort put into the wrong place.
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 1:10 PM, Dicebot wrote:
 tl;dr: it does not matter whether this proposal works or not; it is
 simply effort put into the wrong place.
Noted. On the face of it it seems odd that reference counted chunks of typed memory are deemed useless at the tail of a discussion that covered in great detail the important advantages of reference counting.

I should also add that imparting useful semantics to scope is much more difficult than it might seem. In contrast, reference counted slices are realizable in the current language with a relatively small (scoped! :o)) library effort and can be put to use immediately. I see a lot of good reasons to add them to Phobos regardless of the larger memory allocation changes we plan to make.

Andrei
Feb 05 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 22:32:52 UTC, Andrei 
Alexandrescu wrote:
 On the face of it it seems odd that reference counted chunks of 
 typed memory are deemed useless
I don't think anybody has actually said that. They have their places, it is just useless to talk about throwing them in everywhere.
 I should also add that imparting useful semantics to scope is 
 much more difficult than it might seem.
I'm not so sure about that*, but the fact is scope would be enormously useful if it was implemented.

* Let's say it meant "assigning to any higher scope is prohibited". That should be trivially easy to check and ensures that the variable itself doesn't escape. The tricky part would be preventing:

int[] global;
void foo(scope int[] a) {
    int[] b = a;
    global = b;
}

And that's easy to fix too: make ALL variables scope, unless specifically marked otherwise at the type declaration site (or if they are value types OR references to immutable data, which are very similar to value types in use). The type declaration can be marked as a reference encapsulation and those are allowed to be passed up (if the type otherwise allows; e.g. postblit is not disabled).

That would break a fair chunk of existing code**, but it'd make memory management explicit, correct, and user extensible.

** I think moving to not null by default at the same time would be good, just rip off the whole band aid.
Feb 05 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 09:16, Adam D. Ruppe <destructionator gmail.com> wrote:

 On Wednesday, 5 February 2014 at 22:32:52 UTC, Andrei Alexandrescu wrote:

 On the face of it it seems odd that reference counted chunks of typed
 memory are deemed useless
 I don't think anybody has actually said that. They have their places, it is just useless to talk about throwing them in everywhere.

 I should also add that imparting useful semantics to scope is much more
 difficult than it might seem.
 I'm not so sure about that*, but the fact is scope would be enormously useful if it was implemented.

 * Let's say it meant "assigning to any higher scope is prohibited". That should be trivially easy to check and ensures that the variable itself doesn't escape. The tricky part would be preventing:

 int[] global;
 void foo(scope int[] a) {
     int[] b = a;
     global = b;
 }

 And that's easy to fix too: make ALL variables scope, unless specifically marked otherwise at the type declaration site (or if they are value types OR references to immutable data, which are very similar to value types in use).
Surely a simpler solution is to mark b scope too? Does that break down at some point?

 The type declaration can be marked as a reference encapsulation and those
 are allowed to be passed up (if the type otherwise allows; e.g. postblit is
 not disabled).

 That would break a fair chunk of existing code**, but it'd make memory
 management explicit, correct, and user extensible.

 ** I think moving to not null by default at the same time would be good,
 just rip off the whole band aid.
Feb 05 2014
parent "Brad Anderson" <eco gnuk.net> writes:
On Thursday, 6 February 2014 at 00:32:05 UTC, Manu wrote:
 On 6 February 2014 09:16, Adam D. Ruppe 
 <destructionator gmail.com> wrote:
 And that's easy to fix too: make ALL variables scope, unless 
 specifically
 marked otherwise at the type declaration site (or if they are 
 value types
 OR references to immutable data, which are very similar to 
 value types in
 use).
Surely a simpler solution is to mark b scope too? Does that break down at some point?
scope in declarations is currently used as a storage class for classes on the stack (a deprecated feature) so it couldn't be used for class references until it's been removed from the language for a while. It does seem like it'd help the compiler a lot if you disallowed assigning references to variables not also marked as scope though. It's probably hard to evaluate how big of a pain it'd be without using it in real world code. It'd technically be a breaking change, but scope isn't implemented at all anyway, so I think the current users of scope would probably welcome a change that would make it actually start working.
Feb 05 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 3:16 PM, Adam D. Ruppe wrote:
 On Wednesday, 5 February 2014 at 22:32:52 UTC, Andrei Alexandrescu wrote:
 On the face of it it seems odd that reference counted chunks of typed
 memory are deemed useless
I don't think anybody has actually said that. They have their places, it is just useless to talk about throwing them in everywhere.
I think part of the problem is a disconnect in assumptions and expectations. My idea was to simply take a first simple and obvious step toward improving the situation. Apparently that wasn't quite understood, because ten people have eleven notions about what's desirable and even possible with regard to alternate memory management schemes.

One school of thought seems to be that D should be everything it is today, just with reference counting throughout instead of garbage collection. One build flag to rule them all would choose one or the other.

One other school of thought (to which I subscribe) is that one should take advantage of reference counting where appropriate within a GC milieu, regardless of more radical RC approaches that may be available.
 I should also add that imparting useful semantics to scope is much
 more difficult than it might seem.
 I'm not so sure about that*, but the fact is scope would be enormously useful if it was implemented.

 * Let's say it meant "assigning to any higher scope is prohibited". That should be trivially easy to check and ensures that the variable itself doesn't escape. The tricky part would be preventing:

 int[] global;
 void foo(scope int[] a) {
     int[] b = a;
     global = b;
 }

 And that's easy to fix too: make ALL variables scope, unless specifically marked otherwise at the type declaration site (or if they are value types OR references to immutable data, which are very similar to value types in use).
Yah, that does break a bunch of code. Things like the type of "this" in class objects also come to mind. Binding ref is also a related topic. All of these are complex matters, and I think a few simple sketches don't do them justice. Andrei
Feb 05 2014
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 6 February 2014 at 00:42:20 UTC, Andrei Alexandrescu 
wrote:
 One other school of thought (to which I subscribe) is that one 
 should take advantage of reference counting where appropriate 
 within a GC milieu, regardless of more radical RC approaches 
 that may be available.
I agree with that stance, but I don't think there's a blanket rule there. I think RC freeing small slices will waste more time than it saves. Large allocations, on the other hand, might be worth it. So std.file.read for example returns a large block - that's a good candidate for refcounting since it might be accidentally subject to false pointers, or sit around too long creating needless memory pressure, etc.

(My png.d used to use large GC allocations internally and it ended up being problematic. I switched to malloc/free for this specific task and took care of that problem. But the little garbage created by stuff like toLower has never been a problem to me. (Well, except in a tight loop, but I wouldn't want to refcount in a tight loop either; reusing a static buffer is better there.))

Anywho, I'd just go through on a case-by-case basis and tackle the big fish. Of course, a user could just do scope(exit) GC.free(ret); too.
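Spelled out, the scope(exit) variant looks like this (a sketch; only safe when no other reference to the block escapes the scope):

----
import std.file : read;
import core.memory : GC;

void process(string path)
{
    auto data = read(path);         // one large GC allocation
    scope(exit) GC.free(data.ptr);  // hand it back deterministically
    // ... use data only within this scope ...
}
----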
 Yah, that does break a bunch of code. Things like the type of 
 "this" in class objects also comes to mind.
I talked about this yesterday: this should be scope too, since an object doesn't know its own allocation method. If the class object is on the stack, escaping this is wrong... and thanks to emplace, it might be on the stack without the object ever knowing. Thus it must be conservative.
 Binding ref is also a related topic. All of these are complex 
 matters, and I think a few simple sketches don't do them 
 justice.
I'd rather discuss these details than add RCSlice and toGC everywhere for more cost than benefit. Note that working scope would also help with library RC, in efficiency, correctness, and ease of use.
Feb 05 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 4:57 PM, Adam D. Ruppe wrote:
 On Thursday, 6 February 2014 at 00:42:20 UTC, Andrei Alexandrescu wrote:
 One other school of thought (to which I subscribe) is that one should
 take advantage of reference counting where appropriate within a GC
 milieu, regardless of more radical RC approaches that may be available.
I agree with that stance, but I don't think there's a blanket rule there. I think RC freeing small slices will waste more time than it saves. Large allocations, on the other hand, might be worth it. So std.file.read for example returns a large block - that's a good candidate for refcounting since it might be accidentally subject to false pointers, or sit around too long creating needless memory pressure, etc.
That sounds reasonable. One possibility would be to define FreshSlice!T to mean this is a freshly-allocated slice; then it can be converted to a refcounted one or just a GC one.
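A minimal sketch of what such a FreshSlice might look like (hypothetical, not in Phobos; toRC is left as a comment since RCSlice is itself only a proposal):

----
struct FreshSlice(T)
{
    private T[] payload;  // freshly allocated; no other live references exist

    T[] toGC()            // hand ownership to the GC and stop tracking
    {
        auto result = payload;
        payload = null;
        return result;
    }

    // toRC() would instead wrap payload in a reference-counted slice
}
----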
 Anywho, I'd just go through on a case-by-case basis and tackle the big
 fish. Of course, a user could just do scope(exit) GC.free(ret); too.
That won't work because user code can't always know whether something received from the library had been freshly allocated or not.
 Binding ref is also a related topic. All of these are complex matters,
 and I think a few simple sketches don't do them justice.
 I'd rather discuss these details than add RCSlice and toGC everywhere for more cost than benefit.
Have at it, of course. This is not constant sum; just don't hijack this discussion. I should warn you, however: we've been discussing this literally for years; your examples are just scratching the surface. Andrei
Feb 05 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 00:42:20 UTC, Andrei Alexandrescu
wrote:
 One school of thought seems to be that D should be everything 
 it is today, just with reference counting throughout instead of 
 garbage collection. One build flag to rule them all would 
 choose one or the other.

 One other school of thought (to which I subscribe) is that one 
 should take advantage of reference counting where appropriate 
 within a GC milieu, regardless of more radical RC approaches 
 that may be available.
The third school of thought is that one should be able to have different types of allocation schemes without changing the object type, but somehow tie it to the allocator and if needed to the pointer type/storage class.

If you allocate as fully owned, it stays owned. If you allocate as shared with immediate release (RC) it stays shared. If you allocate as shared with delayed collection (GC) it stays that way. The RC/GC metadata is a hidden feature and allocator/runtime/compiler dependent component.

Possibly you won't have GC or RC, but one pure GC runtime, one integrated RC/GC runtime, one pure ARC runtime, one integrated ARC/GC runtime etc. That's probably most realistic since the allocation metadata might be in conflict. You should be able to switch to the runtime you care about if needed as a compile time switch:

1. Pure Owned/borrowed: hard core performance, OS level development
2. Manual RC (+GC): high throughput, low latency
3. ARC (+GC): ease of use, low throughput, low latency
4. GC: ease of use, high throughput, higher latency, long lived
5. Realtime GC
6. ??

I see no reason for having objects treated differently if they are "owned", just because they have a different type of ownership. If the function dealing with it does not own it, but borrows it, then it should not matter. The object should have the same layout; the ownership/allocation metadata should be encapsulated and hidden. It is only when you transfer ownership that you need to know if the object is under RC or not.

You might not even want to use counters in a particular implementation; maybe it is better to use a linked list in some scenarios. "reference counting" is a misnomer, it should be called "ownership tracker".

The current default is that all pointers are shared. What D needs is defined semantics for ownership. Then you can start switching one runtime for another one and have the compiler/runtime act as an efficient unit.
Feb 05 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 6 February 2014 at 02:31:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 6 February 2014 at 00:42:20 UTC, Andrei 
 Alexandrescu
 wrote:
 One school of thought seems to be that D should be everything 
 it is today, just with reference counting throughout instead 
 of garbage collection. One build flag to rule them all would 
 choose one or the other.

 One other school of thought (to which I subscribe) is that one 
 should take advantage of reference counting where appropriate 
 within a GC milieu, regardless of more radical RC approaches 
 that may be available.
The third school of thought is that one should be able to have different types of allocation schemes without changing the object type, but somehow tie it to the allocator and if needed to the pointer type/storage class.

If you allocate as fully owned, it stays owned. If you allocate as shared with immediate release (RC) it stays shared. If you allocate as shared with delayed collection (GC) it stays that way. The RC/GC metadata is a hidden feature and allocator/runtime/compiler dependent component.

Possibly you won't have GC or RC, but one pure GC runtime, one integrated RC/GC runtime, one pure ARC runtime, one integrated ARC/GC runtime etc. That's probably most realistic since the allocation metadata might be in conflict. You should be able to switch to the runtime you care about if needed as a compile time switch:

1. Pure Owned/borrowed: hard core performance, OS level development
2. Manual RC (+GC): high throughput, low latency
3. ARC (+GC): ease of use, low throughput, low latency
4. GC: ease of use, high throughput, higher latency, long lived
5. Realtime GC
6. ??

I see no reason for having objects treated differently if they are "owned", just because they have a different type of ownership. If the function dealing with it does not own it, but borrows it, then it should not matter. The object should have the same layout; the ownership/allocation metadata should be encapsulated and hidden. It is only when you transfer ownership that you need to know if the object is under RC or not.

You might not even want to use counters in a particular implementation; maybe it is better to use a linked list in some scenarios. "reference counting" is a misnomer, it should be called "ownership tracker".

The current default is that all pointers are shared. What D needs is defined semantics for ownership. Then you can start switching one runtime for another one and have the compiler/runtime act as an efficient unit.
That won't play ball with third party libraries distributed in binary form. This is one of the reasons why Apple's Objective-C GC failed. -- Paulo
Feb 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 08:06:54 UTC, Paulo Pinto wrote:
 That won't play ball with third party libraries distributed in 
 binary form.
That is not obvious: you specify the runtime. Anyway, whole program analysis also does not play well with binary libraries without detailed semantic metadata.

Does shared_ptr in C++11 work with binary libraries that use it, if it is compiled with a compiler from another vendor?
Feb 06 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 6 February 2014 at 08:38:57 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 6 February 2014 at 08:06:54 UTC, Paulo Pinto wrote:
 That won't play ball with third party libraries distributed in 
 binary form.
That is not obvious, you specify the runtime. Anyway, whole program analysis also does not play well with binary libraries without detailed semantic metadata.
So what do you do when different libraries require different runtimes?

To be more specific about my previous comment: Objective-C GC required special compilation flags, and care needed to be taken in GC enabled code, like in C GCs. This did not play well when mixing code that used the GC enabled runtime with code that did not. Hence the endless core dumps in Objective-C code that made use of the GC.

Apple's decision to create ARC and dump the GC wasn't because ARC is better, as they later sold it, but because the compiler inserts for the developer the usual [... retain]/[... release] calls that they were already writing by hand since the NeXT days. So no distinct runtimes were required, as the generated code is no different from what an Objective-C developer would have written by hand. This was the best way to achieve some form of automatic memory management while preserving compatibility across libraries delivered in binary form.
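Incidentally, the same mixing hazard has a direct D analogue whenever a GC pointer is stored where the collector cannot scan; a minimal sketch using the existing core.memory API (Node and handOff are made-up names):

import core.memory : GC;
import core.stdc.stdlib : malloc;

struct Node { void* payload; }

void handOff()
{
    auto n = cast(Node*) malloc(Node.sizeof); // C heap: the GC never scans this
    auto p = new int;                         // GC heap
    *p = 42;
    n.payload = p;      // a GC pointer now lives where the GC cannot see it
    GC.addRoot(p);      // without this, the int may be collected while in use
                        // (pair with GC.removeRoot when done)
}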
 Does shared_ptr in C++11 work with binary libraries that use 
 it, if it is compiled with a compiler from another vendor?
As far as I am aware, no. In any case there isn't a standard C++ ABI defined. Well, there are a few, but vendors don't use them.
Feb 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 09:27:19 UTC, Paulo Pinto wrote:
 So what do you do when different libraries require different 
 runtimes?
I guess a given compiler could have a "cross compiler option" that generates libraries for all the available runtimes the compiler supports?
 To be more specific to my previous comment. Objective-C GC 
 required special compilation flags and care needed to be taken 
 in GC enabled code, like in C GCs.
I understand. I am not sure if having multiple flags that create a combinatorial explosion would be a good idea. I think you should have a set of individual runtimes targeting typical scenarios, supporting different sets of functionality (embedded, kernel, multimedia, server, batch, hpc…).

However, I think it would only work for the same compiler, because you really don't want to prevent innovation…
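For what it's worth, the selection mechanism itself already exists in D as version blocks; a compilable toy, where the version identifiers and flavour strings are pure placeholders:

// dmd -version=PureRC app.d  -- picks a flavour at build time
version (PureRC)
    enum flavour = "manual RC runtime";      // would import rt.rc (hypothetical)
else version (RealtimeGC)
    enum flavour = "incremental collector";  // would import rt.rtgc (hypothetical)
else
    enum flavour = "default tracing GC";

pragma(msg, "building against: " ~ flavour);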
 So no distinct runtimes were required as the generated code is 
 no different than an Objective-C developer would have written 
 by hand.
You might be able to design the runtime/object code in such a way that you get link errors.
 In any case there isn't a standard C++ ABI defined. Well, there 
 are a few, but vendors don't use them.
Yeah, well, I am not personally bothered by it. The only time I consider using binary-only libraries is for graphics and OS level stuff that is heavily used by others, so that it is both well tested and workarounds are available on the net (OpenGL, Direct-X, Cocoa etc).

Not having the source code to a library is rather risky in terms of having to work around bugs by trial and error, without even knowing what the bug actually is.

Thankfully, most useful libraries are open source.
Feb 06 2014
next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
On 06.02.2014 11:21, "Ola Fosheim Grøstad" wrote:
 Not having the source code to a library is rather risky in terms
 of having to work around bugs by trial and error, without even
 knowing what the bug actually is.
so you don't work in the normal software development business where non-source-code third party dependencies are fully normal :)
Feb 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 10:29:20 UTC, dennis luehring 
wrote:
 so you don't work in the normal software development business where
 non-source-code third party dependencies are fully normal :)
Not in terms of libraries, no. What libraries would those be? In terms of infrastructure, yes, but I try to avoid using features I cannot replace with alternatives, and I strongly prefer open source alternatives.
Feb 06 2014
parent reply dennis luehring <dl.soluz gmx.net> writes:
On 06.02.2014 11:43, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Thursday, 6 February 2014 at 10:29:20 UTC, dennis luehring
 wrote:
 so you don't work in the normal software development business where
 non-source-code third party dependencies are fully normal :)
 Not in terms of libraries, no. What libraries would those be?
all libraries that other departments don't want you to see or that you don't need to see - so unusual in your environment?
Feb 06 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 11:08:51 UTC, dennis luehring 
wrote:
 all libraries that other departments don't want you to see or that
 you don't need to see - so unusual in your environment?
My environment is kind of not the point (but yes, that would have been unusual :-). If it is internal then you surely have selected the compile options as a project policy, and that makes this a non-issue.

It is external libraries that are an issue, and depending on closed source external libraries is risky if they cannot be replaced. Depending on your contract you might have to reimplement the entire library from scratch for free if you cannot fix a problem. And unfortunately, small players file for bankruptcy all the time, so open source is the best insurance you can get.
Feb 06 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 20:21, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Thursday, 6 February 2014 at 09:27:19 UTC, Paulo Pinto wrote:

 So what do you do when different libraries require different runtimes?

 I guess a given compiler could have a "cross compiler option" that generates libraries for all the available runtimes the compiler supports?

 To be more specific to my previous comment. Objective-C GC required special compilation flags and care needed to be taken in GC enabled code, like in C GCs.

 I understand. I am not sure if having multiple flags that create a combinatorial explosion would be a good idea. I think you should have a set of individual runtimes targeting typical scenarios, supporting different sets of functionality (embedded, kernel, multimedia, server, batch, hpc…). However, I think it would only work for the same compiler, because you really don't want to prevent innovation…

 So no distinct runtimes were required as the generated code is no different than an Objective-C developer would have written by hand.

 You might be able to design the runtime/object code in such a way that you get link errors.

 In any case there isn't a standard C++ ABI defined. Well, there are a few, but vendors don't use them.

 Yeah, well, I am not personally bothered by it. The only time I consider using binary-only libraries is for graphics and OS level stuff that is heavily used by others, so that it is both well tested and workarounds are available on the net (OpenGL, Direct-X, Cocoa etc).

 Not having the source code to a library is rather risky in terms of having to work around bugs by trial and error, without even knowing what the bug actually is.

 Thankfully, most useful libraries are open source.

Some that I regularly encounter: system libs, opengl, directx, fmod, physics (havok, physx, etc), animation (euphoria, natural motion), bink, lip-sync libraries, proprietary engine libraries, art-package integration libraries (3ds max, maya, photoshop), fbx, and many, many more. Yes these are C libs, but the idea that people don't regularly use proprietary libs is fantasy.
Feb 06 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 11:29:35 UTC, Manu wrote:
 Yes these are C libs, but the idea that people don't regularly 
 use
 proprietary libs is fantasy.
I never claimed that people don't do it. I claimed that it is risky if you don't have a replacement for it. And I don't think one should lobotomise a development environment to support it. C++ gets traction without being very good at that. Plenty of languages that are source-distributed are getting traction too.
Feb 06 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 11:29:35 UTC, Manu wrote:
 Some that I regularly encounter: system libs, opengl, directx, 
 fmod,
 physics (havok, physx, etc), animation (euphoria, natural
And just to nitpick:

1. Games are hit or miss with a very short life cycle. This is not typical. Most software has a life cycle counted in years, with contractual support requirements that can be harsh, not months with very little possibility of damage claims for the end user.

2. The life cycle of games is porting when your product succeeds. You are F*CK*D if you don't have source code access and want to port to an emerging platform, so I believe you can obtain source code for libraries like Havok and FMOD for that reason alone.

I really don't think closed source libraries should be the focus of D if it prevents having a good infrastructure.
Feb 06 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 6 February 2014 at 11:59:49 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 6 February 2014 at 11:29:35 UTC, Manu wrote:
 Some that I regularly encounter: system libs, opengl, directx, 
 fmod,
 physics (havok, physx, etc), animation (euphoria, natural
 And just to nitpick: [snip] I really don't think closed source libraries should be the focus of D if it prevents having a good infrastructure.
D will never be taken seriously by its target audience if no proper support is available. In the enterprise world I work in, very few projects have 100% of the source code available. -- Paulo
Feb 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 12:12:31 UTC, Paulo Pinto wrote:
 D will never be taken serious by its target audience if no 
 proper support is available.
What is the target audience? Is it clearly defined?
 In the enterprise world I work in, very few projects have 100%
 source code available.
In my view discouraging libraries as binary blobs is a net positive; if that means you lose a specific audience, I still think it is a win, because I don't think binary blobs are positive for the ecosystem.

I am quite certain that the plethora of libraries that you find for Python, Ruby and Perl exists due to the encouragement of source distribution and the ease of library modification (e.g. you cannot use the library without source access).

Binary blobs in C are less problematic (but still problematic) because of the language stability and compiler maturity. Binary blobs built with DMD2 mean you cannot move to DMD3, or recompile them to fix a compiler induced bug, among other things. Maybe some commercial players will make DMD2 blobs available and then pull out due to lack of profit; that's not unlikely, and it sucks more than not having the libraries in the first place.
Feb 06 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 6 February 2014 at 12:43:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 6 February 2014 at 12:12:31 UTC, Paulo Pinto wrote:
 D will never be taken serious by its target audience if no 
 proper support is available.
What is the target audience? Is it clearly defined?
 In the enterprise world I work in, very few projects have 100%
 source code available.
 In my view discouraging libraries as binary blobs is a net positive; if that means you lose a specific audience, I still think it is a win. [snip] I am quite certain that the plethora of libraries that you find for Python, Ruby and Perl exists due to the encouragement of source distribution and the ease of library modification (e.g. you cannot use the library without source access). [snip]
You would be amazed how many times I have written FFI code that decrypts source code on load. -- Paulo
Feb 06 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 13:03:43 UTC, Paulo Pinto wrote:
 You would be amazed how many times I have written FFI code that 
 decrypts source code on load.
You can probably create a decent decompiler for D code, given the reliance on GC/stack frames etc., so I am not sure if that is a rational point (perhaps a political one).

Let me put it this way then: if Havok is available as a poorly adapted blob, it will discourage development of a native, idiomatic, open source D physics engine, because it pays off more to spend time working around blob-related issues and get stellar performance. With no physics engine you will have something primitive in D instead, but the moment it is good enough for creating simple apps, people who are interested will improve on it rather than working around Havok issues.

So, a slower start, but better for the ecosystem in the long term.
Feb 06 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 6 February 2014 21:59, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Thursday, 6 February 2014 at 11:29:35 UTC, Manu wrote:

 Some that I regularly encounter: system libs, opengl, directx, fmod, physics (havok, physx, etc), animation (euphoria, natural

 And just to nitpick:

 1. Games are hit or miss with a very short life cycle. This is not typical. Most software has a life cycle counted in years, with contractual support requirements that can be harsh, not months with very little possibility of damage claims for the end user.

I don't think you've made a game recently. Most big games are multi-year projects with teams numbering well in the hundreds, and then downloadable content (ie, after-market content) is basically a given these days, and often supported by a different team than initially wrote the code.

 2. The life cycle of games is porting when your product succeeds. You are F*CK*D if you don't have source code access and want to port to an emerging platform, so I believe you can obtain source code for libraries like Havok and FMOD for that reason alone.

 I really don't think closed source libraries should be the focus of D if it prevents having a good infrastructure.

I didn't say they should be a focus, I'm saying they must however be supported.
Feb 09 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 9 February 2014 at 10:06:12 UTC, Manu wrote:
 I don't think you've made a game recently.
Pointless comment.
 Most big games are multi-year projects with teams numbering 
 well in the
Most games are not big. Most games fail in the marketplace.
 I didn't say they should be a focus, I'm saying they must 
 however be supported.
Must is a strong word, but since D is focusing on separate compilation, it probably is a focus.

Why are most comments about the application domain for D centered on "prestigious" projects such as AAA games, high volume trading systems and safety critical applications? The most likely application domain is a lot less "exciting": tools and simple servers. Get down to earth, plz.
Feb 09 2014
next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Sunday, 9 February 2014 at 10:16:20 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 9 February 2014 at 10:06:12 UTC, Manu wrote:
 I don't think you've made a game recently.
Pointless comment. [snip] Get down to earth, plz.
I love how you wrote that "Pointless comment" then added that "Get down to earth, plz" at the end. :P [ok, sorry for metaposting, but I couldn't resist]
Feb 09 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 9 February 2014 at 17:48:21 UTC, Francesco Cattoglio 
wrote:
 I love how you wrote that "Pointless comment" then added that 
 "Get down to earth, plz" at the end. :P
Yes, you did love that, didn't you? It is generally a good idea to put one's foot down firmly when people go ad hominem.

However, the last point was directed to the D community. The language needs to be more focused on being very good at some key areas, not cover everything.
Feb 09 2014
parent reply "francesco cattoglio" <francesco.cattoglio gmail.com> writes:
 However, the last point was directed to the D community. The 
 language needs to be more focused on being very good at some 
 key areas, not cover everything.
I totally agree on this, but the problem here is that there are game developers out there willing to use D. I also see lots of movement from hobbyists. We can't ignore them completely. Indefinitely long pauses are really bad for them, and something needs to be done, be it in user code, as a library solution, or as a core part of the language.

I agree that AAA titles are not the main target right now, but this doesn't mean indie projects shouldn't be doable. After all, 150 millisecond pauses are really annoying in pretty much any first person game.
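For reference, the usual stopgap in D game code today is to fence the collector away from the frame loop with core.memory's real API; a sketch (frame and loadLevel are made-up names):

import core.memory : GC;

void frame()
{
    GC.disable();              // no collection can start mid-frame
    scope (exit) GC.enable();
    // ... simulate and render one frame, avoiding stray allocations ...
}

void loadLevel()
{
    GC.collect();              // pay for the pause where nobody notices
    GC.minimize();             // return unused pages to the OS
}

Whether this is acceptable depends entirely on allocation rates; it postpones the pause, it does not remove it.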
Feb 09 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 9 February 2014 at 20:15:51 UTC, francesco cattoglio 
wrote:
 I totally agree on this, but the problem here is that there are game developers out there willing to use D. I also see lots of movement from hobbyists. We can't ignore them completely. Indefinitely long pauses are really bad for them, and something needs to be done, be it in user code, as a library solution, or as a core part of the language.
I totally agree with you. I totally agree that the best focus for D would be to aim towards being a kick-ass low level system language with programmer directed optimization (compiler hints), good memory layout control, and low latency, for medium sized code bases, and without historical C/C++ cruft. Because that spot is open and it rings a bell... For some reason the D community does not pursue it… as a shared goal. And that means D will never get there.

I think one needs profiled whole program analysis to get there. I think one needs to aim for upcoming hardware architectures. I think one needs to back it up with programmer specified constraints and hints. I think one needs high level optimization in addition to low level. I think one needs a very light and transparent runtime with tuneable low cost allocation schemes.

I also think one needs to reduce the focus on separate compilation with no knowledge about what goes on inside an object file and on C++/C ABIs. The compiler should have as much information available as possible. Let [other languages] own programming in the large, garbage collection and enforced correctness. To stay focused.

However, the goal of D appears to be to blend all these things in one big melting pot. That makes things move slowly, in a manner where I have trouble seeing the vision. So the outlook today appears to be that D is primarily useful for the in-between spot, which is really tools and perhaps some server stuff.
 I agree that AAA titles are not the main target right now, but this doesn't mean indie projects shouldn't be doable. After all, 150 millisecond pauses are really annoying in pretty much any first person game.
Yeah, it is indeed both unnecessary and undesirable to have that kind of latency, but if it happens once per hour it has no real consequences in most scenarios. For some reason the GC is the holy grail of D, so I am just kind of figuring out what the benefits/damage are.

Personally I have given up all hope of D becoming useful for client side stuff:

1. because of all the resistance in the forum.
2. because C++ portability is hard to beat (android, ios, win, pnacl etc).
3. because C++ provides low latency.
4. because the capabilities of HTML5 browsers are eating heavily into the client space and might change the landscape in a few years.

I think the perceived D-vision-of-2013 might be useful on the efficient-server-side, and there the GC actually makes sense every once in a while to clean up leaks. And that is OK too, even though that space is a bit more crowded than the low level space. Just let's avoid the high profile argument. Be exceptionally good at some modest stuff first, then expand into other areas next iteration.

I think the fallacy is the notion of a General Purpose Programming Language, as if those actually exist. :-) They don't, because human beings follow trends and you need a focused culture to sustain the ecosystem.

So yeah, I agree with you, but I don't see the clearly communicated vision and therefore don't believe in the growth of an ecosystem around D for client side programming. :-/
Feb 09 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/9/14, 1:06 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
[snip]
 I think one needs profiled whole program analysis to get there. I think
 one needs to aim for upcoming hardware architectures. I think one needs
 to back it up with programmer specified constraints and hints. I think
 one needs high level optimization in addition to low level. I think one
 needs a very light and transparent runtime with tuneable low cost
 allocation schemes.

 I also think one needs to reduce the focus on separate compilation with
 no knowledge about what goes on inside an object file and C++/C ABIs.
Your thoughts are appreciated (all 6 of them in as many sentences). There is something to be said, however, about armchair quarterbacking and holier-than-thou kibitzing on what others should be doing. This community is as close as it gets to a meritocracy, so if you think you know what's good, you do good. If you want your stupendously many "I think"s to carry weight, follow them with some "I do"s as well. Hop on github. This endless walk through your knowledge base just isn't useful. Andrei
Feb 09 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 04:22:32 UTC, Andrei Alexandrescu 
wrote:
 Your thoughts are appreciated (all 6 of them in as many 
 sentences). There is something to be said, however, about 
 armchair quarterbacking and holier-than-thou kibitzing on what 
 others should be doing. This community is as close as it gets 
 to a meritocracy, so if you think you know what's good, you do 
 good. If you want your stupendously many "I think"s to carry 
 weight, follow them with some "I do"s as well. Hop on github. 
 This endless walk through your knowledge base just isn't useful.
Good job. My initial response to Manu was a critique of going ad hominem, and you as a person time and time again fail in that regard in many discussions. You do however deserve a round of ad hominem, because you are one of the two people who are in a position to communicate the project vision and set forth MEASURABLE goals that can be tracked and evaluated, but you refuse to do so.

All talk of meritocracy is essentially hypocrisy, because all projects need to establish boundaries and a goal post, and you fail miserably in that regard. That's why D is a slow mover. "This endless walk through [my] knowledgebase" is of course not a walk through my knowledgebase, it is an assessment of the project that YOU FAIL to attempt to do. It is my attempt to try to figure out where this project is heading. You are right, I should not have to do it. YOU SHOULD DO IT. AND PRESENT IT. That way people won't be let down.

I like the initial vision Walter Bright put forth years ago, that is to make a better C++. That has somehow evolved into making a better [some other language]. Can you please ASSESS that.

You and Walter Bright are leads. I expect any project and you to put forth:

1. A clear vision that establishes a firm boundary.
2. A small set of clear measurable goals that give the project direction.
3. A list of points stating what the project is not going to address in the immediate future.

This endless walk through what is wrong with D project management just isn't useful, because you don't want to listen.
Feb 10 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 12:48 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Monday, 10 February 2014 at 04:22:32 UTC, Andrei Alexandrescu wrote:
 Your thoughts are appreciated (all 6 of them in as many sentences).
 There is something to be said, however, about armchair quarterbacking
 and holier-than-thou kibitzing on what others should be doing. This
 community is as close as it gets to a meritocracy, so if you think you
 know what's good, you do good. If you want your stupendously many "I
 think"s to carry weight, follow them with some "I do"s as well. Hop on
 github. This endless walk through your knowledge base just isn't useful.
Good job, my initial response to Manu was a critique of going Ad Hominem and you as a person time and time again fail in that regard in many discussions.
Totally. For the most part I take posts one at a time and at face value only, it's just that sometimes larger patterns develop themselves. But as I told Walter, for better or (sometimes definitely) worse, our character flaws make history inside the D community.
 You do however deserve a round of ad hominem, because you are
 one of the two people who are in a position to communicate the project
 vision and set forth MEASURABLE goals that can be tracked and evaluated,
 but you refuse to do so.
A fresh perspective is always good to take under consideration. It's also a good opportunity to bring more transparency to what we're doing, as I'll do below.
 All talk of meritocracy is essentially hypocrisy because all projects
 need to establish boundaries and a goal post, and you fail miserably in
 that regard. That's why D is a slow mover. "This endless walk through
 [my] knowledgebase" is of course not a walk through my knowledgebase, it
 is an assessment of the project that YOU FAIL to attempt to do. It is my
 attempt to try to figure out where this project is heading.

 You are right, I should not have to do it. YOU SHOULD DO IT. AND PRESENT
 IT. That way people won't be let down.
I have done so. Several times. Two very simple examples from recent history:

1. I stressed that good work on bugs with bounties is a gesture of good will with Facebook that will bring more support from the company. It's the trifecta: the bugs are not harder than those people work on anyway, it's good impact on the future of the language, and it's even non-negligible money. E.g. I wrote on 2014-01-11:
 My hope is to convince that the message Facebook is conveying here is
 much stronger than the actual sums involved; it's an initiation of
 cooperation and involvement with a community, and it would be awesome
 to respond in kind.
Taking a look at https://www.bountysource.com/trackers/383571-d-programming-language, however, reveals that there's little attention to those bugs, in SPITE of the fact that contributions on HARDER problems on the SAME project continued as furiously as ever, if not more.

2. I said many times that our inability to review github contributions at the rate they arrive is an important problem we're facing. We currently have 216 open pull requests across our projects. I think this bottleneck very concretely limits the growth speed of D.

This is a typical problem. Reviewing contributions is hard and thankless work. I know how we solved it at Facebook for our many open-sourced projects: we created a team for it, with a manager, tracking progress, the works. This is _exactly_ the kind of thing that can't be done in a volunteer community.

The reality is that on a volunteer-driven project, it's not easy to tell people what to do. They're by definition in it for working on what _they_ want to work on. Applying classic management techniques naively is unlikely to work, because all management techniques are using resources toward goals and assume the appropriately qualified human resources will work on what the project requires be done.

So I wasn't glib when I sent you to github. In a very concrete sense, you'd be helping there a ton more in a fraction of the time you spend posting.
 I like the initial vision Walter Bright put forth years ago, that is to make a better C++. That has somehow evolved into making a better [some other language].

 Can you please ASSESS that.
I think D must not define itself in relation to any other language.
 You and Walter Bright are leads.

 I expect any project and you to put forth:

 1. A clear vision that establishes a firm boundary.
 2. A small set of clear measurable goals that give the project direction.
 3. A list of points stating what the project is not going to address in
 the immediate future.
Some of these are useful to put together at least as (a) thoughts on what I believe would be high-impact topics, (b) things that I plan personally to work on.
 This endless walk through what is wrong with D project management just
 isn't useful, because you don't want to listen.
Honestly, as one who's been at this for a long time and has done and witnessed a number of such attempts, I think you're exceedingly naive about what can be done with traditional project management approaches in this case. Three simple anecdotes out of many:

1. I've had a long chat with a Linux senior kernel guy who's been there since the very early days. Back then Linux did not have any form of project management, and nobody told people what to work on. People just worked on whatever itch they wanted to scratch. The way it succeeded is getting the attention of sufficiently many people that there was someone on each possible itch :o). Still, for a very long time (years after actual heavy corporate support emerged) many Linux tools looked like proofs of concept compared to the mature Windows equivalents.

2. A few months ago a prominent member of the community made (privately to Walter and myself) a strong argument along the same lines as yours: D could move much faster if some good management could be used with it, and offered to ask as a manager of the project. I explained to him with (other) examples what I'm explaining to you now, of which the most important point was that resource management can be done if there are resources to manage. He understood my point (and was gracious enough to continue work within the community). I don't think things have changed in that regard since he made his bid.

3. Only a few _days_ ago Walter and I were discussing with another prominent community member. He is the author of a project that beautifully plays into D's strengths, to the end of really being the best in the world at a very measurable metric. Walter and I emphasized how finalizing and streamlining this project would both launch his career on a meteoric orbit and have a strong impact on D. He, however, is busy with schoolwork and some other D projects that are comparatively just irrelevant, and that's where the discussion just kind of ended. Had he been a report of mine, I would have simply ensured that I assign all his other tasks to someone else, and discussed goals and milestones with him for the high-impact project. But he's not, so I can't. This alone would have been enough to disabuse me of any illusions I could do project management on Dlang.

Ola, I'm sure you mean well. I trust you will find within yourself the best way to contribute to this community.
Feb 10 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 3:15 PM, Andrei Alexandrescu wrote:
 2. A few months ago a prominent member of the community made (privately
 to Walter and myself) a strong argument along the same lines as yours: D
 could move much faster if some good management could be used with it,
 and offered to ask as a manager of the project.
s/ask/act/
Feb 10 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 10 February 2014 at 23:15:35 UTC, Andrei Alexandrescu 
wrote:
 Taking a look at 
 https://www.bountysource.com/trackers/383571-d-programming-language, 
 however, reveals that there's little attention to those bugs, 
 in SPITE of the fact that contributions on HARDER problems on 
 the SAME project continued as furiously as ever, if not more.
Interesting, I had exactly the opposite impression when I went through the bountysource list. A lot of issues have pull requests provided, but they are stalled because of the slow feedback cycle.
Feb 10 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 3:24 PM, Dicebot wrote:
 On Monday, 10 February 2014 at 23:15:35 UTC, Andrei Alexandrescu wrote:
 Taking a look at
 https://www.bountysource.com/trackers/383571-d-programming-language,
 however, reveals that there's little attention to those bugs, in SPITE
 of the fact that contributions on HARDER problems on the SAME project
 continued as furiously as ever, if not more.
 Interesting, I had exactly the opposite impression when I went through the bountysource list. A lot of issues have pull requests provided, but they are stalled because of the slow feedback cycle.
Same difference. A bunch of people can pull, I'm hardly the bottleneck. It just means I failed to convince the community to get those bounties done. Andrei
Feb 10 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 11 February 2014 at 02:29:49 UTC, Andrei Alexandrescu 
wrote:
 Same difference. A bunch of people can pull, I'm hardly the 
 bottleneck. It just means I failed to convince the community to 
 get those bounties done.

 Andrei
Still, that list (people with pull rights) is relatively short, so it does not make much sense to appeal to a wider audience.

Also, I don't think it's true that volunteer effort can't be organized. It is a matter of people identifying themselves as part of a well-defined organization, as opposed to an independent crowd of collaborators. It is common wisdom that if no one feels directly responsible for an issue, no one will ever pay attention to it.

Do you need any specific proposals?
Feb 11 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 6:34 AM, Dicebot wrote:
 On Tuesday, 11 February 2014 at 02:29:49 UTC, Andrei Alexandrescu wrote:
 Same difference. A bunch of people can pull, I'm hardly the
 bottleneck. It just means I failed to convince the community to get
 those bounties done.

 Andrei
 Still, that list (people with pull rights) is relatively short, so it does not make much sense to appeal to a wider audience. Also, I don't think it's true that volunteer effort can't be organized. It is a matter of people identifying themselves as part of a well-defined organization, as opposed to an independent crowd of collaborators.
I think we could organize ourselves better, too. It's all a matter of finding the right angle. But I think management is not what we need. We need better leadership. Big difference.
 It is common wisdom
 that if no one feels directly responsible for an issue, no one will ever
 pay attention to it.
Yah, on the trite side even I'd opine :o). I said this several times now, and I'll say it again: I have asked SEVERAL TIMES people INDIVIDUALLY to do things that are HIGH IMPACT for the D language instead of something else that was more important to them, to no avail. Of course, it would be learned helplessness to draw sweeping conclusions from the experience so far.
 Do you need any specific proposals?
Suggestions for doing things better are gladly considered. Andrei
Feb 11 2014
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 11-Feb-2014 21:12, Andrei Alexandrescu wrote:
 On 2/11/14, 6:34 AM, Dicebot wrote:
 Do you need any specific proposals?
Suggestions for doing things better are gladly considered.
I'd risk suggesting introducing something simple and self-organizable. To be concrete: define "interest groups" by major areas of the D ecosystem (Frontend, one for each backend, druntime as a whole, GC alone, Phobos in bits and pieces ...) and let people join/leave/lead these.

At the very least it would make it obvious who is into what at which point in time. Even more importantly - who you'd need to ask about what. The only question is where to track this stuff - maybe the Wiki?

P.S. Trello was a failed experiment, but IMHO it failed largely due to being behind closed doors.

-- Dmitry Olshansky
Feb 11 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 9:27 AM, Dmitry Olshansky wrote:
 On 11-Feb-2014 21:12, Andrei Alexandrescu wrote:
 On 2/11/14, 6:34 AM, Dicebot wrote:
 Do you need any specific proposals?
Suggestions for doing things better are gladly considered.
 I'd risk suggesting introducing something simple and self-organizable. To be concrete: define "interest groups" by major areas of the D ecosystem (Frontend, one for each backend, druntime as a whole, GC alone, Phobos in bits and pieces ...) and let people join/leave/lead these. At the very least it would make it obvious who is into what at which point in time. Even more importantly - who you'd need to ask about what. The only question is where to track this stuff - maybe the Wiki?
Could you please set up a sample wiki page so we get a better feel? Thanks.
 P.S. Trello was a failed experiment, but IMHO it failed largely due to
 being behind closed doors.
Yah, the whole Trello experiment has been on my tongue during this discussion. It's been made public long before its demise. Andrei
Feb 11 2014
next sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Tuesday, 11 February 2014 at 17:37:36 UTC, Andrei Alexandrescu 
wrote:
 Yah, the whole Trello experiment has been on my tongue during 
 this discussion. It's been made public long before its demise.


 Andrei
I think it got off to a bad start because it was private and even after it was public it wasn't talked about much within the community (it was hard to even find because it wouldn't turn up in Trello's search). I use and enjoy Trello at work so personally I think I'd try to get everyone to give it another shot if it were up to me.
Feb 11 2014
next sibling parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Tuesday, 11 February 2014 at 18:12:41 UTC, Brad Anderson wrote:
 On Tuesday, 11 February 2014 at 17:37:36 UTC, Andrei 
 Alexandrescu wrote:
 Yah, the whole Trello experiment has been on my tongue during 
 this discussion. It's been made public long before its demise.


 Andrei
I think it got off to a bad start because it was private and even after it was public it wasn't talked about much within the community (it was hard to even find because it wouldn't turn up in Trello's search). I use and enjoy Trello at work so personally I think I'd try to get everyone to give it another shot if it were up to me.
I've been here on and off for a long time, and this is the first time I've heard about Trello at all :D
Feb 11 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 10:12 AM, Brad Anderson wrote:
 I think it got off to a bad start because it was private and even after
 it was public it wasn't talked about much within the community (it was
 hard to even find because it wouldn't turn up in Trello's search). I use
 and enjoy Trello at work so personally I think I'd try to get everyone
 to give it another shot if it were up to me.
I'm also in favor of a second go, particularly since I've initiated the first one :o). I'm still using Trello for tracking personal stuff, though not nearly as frequently as to make a big difference. Andrei
Feb 11 2014
parent reply "Joseph Cassman" <jc7919 outlook.com> writes:
On Tuesday, 11 February 2014 at 18:52:40 UTC, Andrei Alexandrescu 
wrote:
 On 2/11/14, 10:12 AM, Brad Anderson wrote:
 I think it got off to a bad start because it was private and 
 even after
 it was public it wasn't talked about much within the community 
 (it was
 hard to even find because it wouldn't turn up in Trello's 
 search). I use
 and enjoy Trello at work so personally I think I'd try to get 
 everyone
 to give it another shot if it were up to me.
I'm also in favor of a second go, particularly since I've initiated the first one :o). I'm still using Trello for tracking personal stuff, though not nearly as frequently as to make a big difference. Andrei
Although I haven't yet contributed to the project by sending any pull requests, I am interested in doing so. Perhaps the main thing that keeps holding me back is that I feel it is hard to know what currently is being worked on with the current setup (i.e. who is doing what, will my efforts duplicate someone else's, will I be stepping on anyone's toes, will my solution idea be incompatible with the direction in which the language is moving). I frequent the forums and still find it hard to get an overall picture. This is probably due to my not trying hard enough (e.g. I never have time to spend on IRC). I saw the Trello board before and felt that it helped me get up to speed on who is doing what, what is being worked on, etc. If you brought it, or something like it, back, that would be appreciated.

Sorry to change the subject, but one thing that keeps discouraging me from trying to contribute code changes is the large number of unmerged pull requests in GitHub. Since I am not a reviewer, I am afraid that any effort I make will get ignored, and so I hold back, since I have other stuff to do too. I know that this is something that concerns you a lot from reading other forum discussions, so I apologize if this sounds like a complaint. I appreciate the work of all involved and know that no one has infinite time or energy. Not even sure if I could contribute to the code-base, but it would be nice to try.

Managing a project is never easy. My hat's off to you and Walter for your efforts in that regard.

Joseph
Feb 11 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 12:31 PM, Joseph Cassman wrote:
 Although I haven't yet contributed to the project by sending any pull
 requests I am interested in doing so. Perhaps the main thing that keeps
 holding me back is I feel it is hard to know what currently is being
 worked on with the current setup (i.e. who is doing what, will my
 efforts duplicate someone else's, will I be stepping on anyone's toes,
 will my solution idea be incompatible with the direction in which the
 language is moving). I frequent the forums and still find it hard to get
 an overall picture. This is probably due to my not trying hard enough
 (e.g. I never have time to spend on IRC). I saw the Trello board before
 and I felt that it helped me get up to speed on who is doing what, what
 is being worked on, etc. If you brought it, or something like it, back
 that would be appreciated.
Thanks for the feedback. Hanging out on IRC should not be necessary.
 Sorry to change the subject but one thing that keeps discouraging me
 from trying to contribute code changes is the large number of unmerged
 pull requests in GitHub.
Yes. I think that's a disaster. We need to figure out the right approach to solving that.
 Since I am not a reviewer I am afraid that any
 effort I perform will get ignored and so I hold back since I have other
 stuff to do too. I know that this is something that concerns you a lot
 from reading other forums discussions so I apologize if this sounds like
 a complaint. I appreciate the work of all involved and know that no one
 has infinite time or energy. Not even sure if I could contribute to the
 code-base but would be nice to try.

 Managing a project is never easy. My hat's off to you and Walter for
 your efforts in that regard.
Thanks! We're doing the best we can, but I am convinced we can do a lot better, and am looking at some out-of-the-box things to try. Andrei
Feb 11 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 12 February 2014 at 01:54:15 UTC, Andrei
Alexandrescu wrote:
 Yes. I think that's a disaster. We need to figure out the right 
 approach to solving that.
For a while we were doing the review Sunday. That was fun: everybody was talking about the pulls in IRC and we were going from one to the other. Maybe it is time to start this tradition again?
Feb 11 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 6:21 PM, deadalnix wrote:
 On Wednesday, 12 February 2014 at 01:54:15 UTC, Andrei
 Alexandrescu wrote:
 Yes. I think that's a disaster. We need to figure out the right
 approach to solving that.
 For a while we were doing the review Sunday. That was fun: everybody was talking about the pulls in IRC and we were going from one to the other. Maybe it is time to start this tradition again?
Are you guys up for it? I would. Andrei
Feb 11 2014
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Tue, 11 Feb 2014 18:36:10 -0800, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 2/11/14, 6:21 PM, deadalnix wrote:
 On Wednesday, 12 February 2014 at 01:54:15 UTC, Andrei
 Alexandrescu wrote:
 Yes. I think that's a disaster. We need to figure out the right
 approach to solving that.
 For a while we were doing the review Sunday. That was fun: everybody was talking about the pulls in IRC and we were going from one to the other. Maybe it is time to start this tradition again?
Are you guys up for it? I would. Andrei
I loved it. It was fun to watch the autotester go nuts, and to get in there and review the changes prior to committing. -- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Feb 11 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 12 February 2014 at 02:21:38 UTC, deadalnix wrote:
 On Wednesday, 12 February 2014 at 01:54:15 UTC, Andrei
 Alexandrescu wrote:
 Yes. I think that's a disaster. We need to figure out the 
 right approach to solving that.
 For a while we were doing the review Sunday. That was fun: everybody was talking about the pulls in IRC and we were going from one to the other. Maybe it is time to start this tradition again?
I definitely would!
Feb 11 2014
prev sibling parent "francesco cattoglio" <francesco.cattoglio gmail.com> writes:
On Wednesday, 12 February 2014 at 02:21:38 UTC, deadalnix wrote:
 On Wednesday, 12 February 2014 at 01:54:15 UTC, Andrei
 Alexandrescu wrote:
 Yes. I think that's a disaster. We need to figure out the 
 right approach to solving that.
 For a while we were doing the review Sunday. That was fun: everybody was talking about the pulls in IRC and we were going from one to the other. Maybe it is time to start this tradition again?
Sounds like a really nice idea! I would come in and "listen silently" for sure! At least until I'm knowledgeable enough :P
Feb 12 2014
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/11/14, Joseph Cassman <jc7919 outlook.com> wrote:
 Sorry to change the subject but one thing that keeps discouraging
 me from trying to contribute code changes is the large number of
 unmerged pull requests in GitHub.
For what it's worth, new Phobos pull requests get a review and are merged pretty quickly lately. The number of open pull requests is drastically falling as of late. Take a look at the blue line: http://i.imgur.com/ELfvwEO.png

Generated from: http://d.puremagic.com/test-results/chart.ghtml?projectid=1

-----

Don't hesitate to contribute by submitting a pull request; there will be someone around who can review your work and guide you in the process if you have any trouble along the way. Thanks!
Feb 16 2014
parent "Joseph Cassman" <jc7919 outlook.com> writes:
On Sunday, 16 February 2014 at 20:48:50 UTC, Andrej Mitrovic 
wrote:
 On 2/11/14, Joseph Cassman <jc7919 outlook.com> wrote:
 Sorry to change the subject but one thing that keeps 
 discouraging
 me from trying to contribute code changes is the large number 
 of
 unmerged pull requests in GitHub.
For what it's worth new Phobos pull requests get a review and are merged pretty quickly lately. The number of open pull requests is drastically falling as of late. Take a look at the blue line: http://i.imgur.com/ELfvwEO.png Generated from: http://d.puremagic.com/test-results/chart.ghtml?projectid=1 ----- Don't hesitate to contribute by submitting a pull request, there will be someone around that can review your work, and guide you in the process if you have any trouble along the way. Thanks!
Appreciate the graphic. Looks encouraging. Joseph
Feb 16 2014
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 11-Feb-2014 21:37, Andrei Alexandrescu wrote:
 On 2/11/14, 9:27 AM, Dmitry Olshansky wrote:
 On 11-Feb-2014 21:12, Andrei Alexandrescu wrote:
 On 2/11/14, 6:34 AM, Dicebot wrote:
 I'd risk suggesting introducing something simple and self-organizable. To be concrete: define "interest groups" by major areas of the D ecosystem (Frontend, one for each backend, druntime as a whole, GC alone, Phobos in bits and pieces ...) and let people join/leave/lead these. At the very least it would make it obvious who is into what at which point in time. Even more importantly - who you'd need to ask about what. The only question is where to track this stuff - maybe the Wiki?
 Could you please set up a sample wiki page so we get a better feel? Thanks.
Here is a sketch. Sorry I couldn't make it more realistic - I'm packing my stuff for tomorrow's flight (about 6 hours to go): http://wiki.dlang.org/Groups

The fact that I couldn't remember a hell of a lot of people in each area of expertise/interest proves the point - it has to be driven by individuals and generated automatically. Think of it as a "phone book", or better, a "Who is who in the D language" page for dlang.org that lists key areas in the D ecosystem and contacts of people working on/interested in them.

The thing is, I want this kind of page to be dynamic and self-organizable by very simple rules (see the sketch below):

1. A D guy has a github account and an e-mail. (We may go beyond that, e.g. IRC nickname, Skype etc., but let it grow organically.)
2. He joins and leaves groups.
3. He may be designated a leader, the "goto guy" in some group.

The list of who's where is constantly kept up to date from this information. The extras could be grown on top of this as needed. Nothing too fancy; it probably could be hacked up in an evening with Vibe.d (or say even PHP, who cares as long as it works).
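As a minimal D sketch of the data model such a page needs (Person, Group and their fields are made up for illustration, nothing to do with the actual wiki page):

struct Person
{
    string github;    // github account
    string email;
    string[] extras;  // IRC nickname, Skype etc., grown organically
}

struct Group
{
    string area;       // "GC", "druntime", "Phobos: ranges", ...
    string leader;     // the designated "goto guy"; may be empty
    Person[] members;  // joining and leaving edits this list
}

Group[] groups;        // the "phone book" page is just a render of this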
 P.S. Trello was a failed experiment, but IMHO it failed largely due to
 being behind closed doors.
Yah, the whole Trello experiment has been on my tongue during this discussion. It's been made public long before its demise.
The thing is that I, for one, lost all steam trying to get at it early on. By the time it was public I couldn't care less; unlike github, there simply was no way for an outsider to contribute to making things work. -- Dmitry Olshansky
Feb 11 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Dmitry Olshansky"  wrote in message news:lddunp$jp7$1 digitalmars.com...

 Here is a sketch. Sorry couldn't make it more realistic -  I'm packing my 
 stuff for tomorrow's flight (about 6 hours to go):
 http://wiki.dlang.org/Groups
This looks useful, I can't always remember who works on what. I would probably prefer compiler internals questions were posted to the dmd-internals mailing list though, so everyone can see them/subscribe/search them.
Feb 12 2014
parent reply "Andrej Mitrovic" <andrej.mitrovich gmail.com> writes:
On Wednesday, 12 February 2014 at 11:01:52 UTC, Daniel Murphy 
wrote:
 "Dmitry Olshansky"  wrote in message 
 news:lddunp$jp7$1 digitalmars.com...

 Here is a sketch. Sorry couldn't make it more realistic -  I'm 
 packing my stuff for tomorrow's flight (about 6 hours to go):
 http://wiki.dlang.org/Groups
This looks useful, I can't always remember who works on what. I would probably prefer compiler internals questions were posted to the dmd-internals mailing list though, so everyone can see them/subscribe/search them.
Personally I hate dmd-internals because it's riddled with merge notification threads. Those github notifications really need to be in a separate newsgroup or we need to move discussions elsewhere because (at least for me) actual discussion threads get lost.
Feb 12 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Andrej Mitrovic"  wrote in message 
news:tandcufdqjfgwacvobpm forum.dlang.org...

 Personally I hate dmd-internals because it's riddled with merge 
 notification threads. Those github notifications really need to be in a 
 separate newsgroup or we need to move discussions elsewhere because (at 
 least for me) actual discussion threads get lost.
That's probably a good idea, it would be nice to have both dmd-internals and dmd-commits.
Feb 12 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/12/14, 4:19 AM, Daniel Murphy wrote:
 "Andrej Mitrovic"  wrote in message
 news:tandcufdqjfgwacvobpm forum.dlang.org...

 Personally I hate dmd-internals because it's riddled with merge
 notification threads. Those github notifications really need to be in
 a separate newsgroup or we need to move discussions elsewhere because
 (at least for me) actual discussion threads get lost.
That's probably a good idea, it would be nice to have both dmd-internals and dmd-commits.
Could filters help? Andrei
Feb 12 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 11 February 2014 at 17:12:37 UTC, Andrei Alexandrescu 
wrote:
 Also I don't think volunteer effort can't be organized. It is 
 a matter
 of people identifying themselves as part of well-defined 
 organization as
 opposed to independent crowd of collaborators.
I think we could organize ourselves better, too. It's all a matter of finding the right angle. But I think management is not what we need. We need better leadership. Big difference.
Probably. But one can't simply create leadership, this is something that comes naturally. Management is easier.
 It is common wisdom
 that if no one feels directly responsible for an issue, no one 
 will ever
 pay attention to it.
Yah, on the trite side even I'd opine :o). I said this several times now, and I'll say it again: I have asked SEVERAL TIMES people INDIVIDUALLY to do things that are HIGH IMPACT for the D language instead of something else that was more important to them, to no avail.
Of course, because you can't ask people to fulfill duties they have not volunteered to fulfill. This is exactly what I am speaking about: the amount of people actually working on the language is very small despite the high amount of contribution. Most people just do stuff they need or are interested in, and can't be obliged to do anything else.

Volunteering to do stuff you don't really want to do is a completely different thing :) And it needs to be encouraged by something more precious than tiny bounties. For example, being able to influence language-changing decisions is a much more seductive reward.
 Of course, it would be learned helplessness to draw sweeping 
 conclusions from the experience so far.

 Do you need any specific proposals?
Suggestions for doing things better are gladly considered.
As I have mentioned on some occasions, I was very impressed by both the simplicity and the efficiency of the Arch Linux "Trusted User" organization after studying it "from inside". It is a group of people who are not directly affiliated with the Arch developers but have volunteered to take responsibility for parts of the ecosystem. They also have the power to make decisions regarding that ecosystem by a formal voting procedure (with a strict quorum and success % defined). Addition of new trusted users requires sponsorship from one of the existing TUs and is approved by the very same voting procedure. Usually new TUs state clearly what parts of the ecosystem they want to be responsible for during the initial application, and this is often taken into consideration by voters.

I think the D community can take some inspiration from such an approach. It will allow making decisions on more controversial topics and speed up the process in general by removing the bottleneck of your + Walter's decision (assuming you still have veto votes in case stuff goes really bad). Also, it gives a clear overview of who is supposed to be responsible for what, and a feeling of making a difference for those who take part in it.
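To make "strict quorum and success %" concrete, here is a tiny D sketch of such a voting rule (the numbers are illustrative, not Arch's actual bylaws):

bool proposalPasses(size_t yes, size_t no, size_t totalTUs,
                    double quorum = 0.66, double threshold = 0.75)
{
    immutable votesCast = yes + no;
    if (votesCast < totalTUs * quorum)    // quorum: enough TUs voted at all
        return false;
    return yes >= votesCast * threshold;  // success %: enough of them said yes
}

For example, with 30 TUs, 21 votes cast and 16 in favour, proposalPasses(16, 5, 30) returns true.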
Feb 11 2014
parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 11 February 2014 at 17:50:54 UTC, Dicebot wrote:
 Suggestions for doing things better are gladly considered.
As I have mentioned on some occasions, I was very impressed by both the simplicity and the efficiency of the Arch Linux "Trusted User" organization [...]
To give some specifics, one D example where we do have something resembling an organized process is the Phobos review queue. Once I noticed that there were a lot of proposals rotting there (and that I didn't like it), it was trivial to read the review process description, step up and proceed with all that stuff. Expectations were clear, the process was (mostly) clear, same for responsibilities. And the main reward is that I can choose what gets reviewed next and poke people about their work :)
Feb 11 2014
prev sibling parent reply "Andrej Mitrovic" <andrej.mitrovich gmail.com> writes:
On Tuesday, 11 February 2014 at 17:12:37 UTC, Andrei Alexandrescu 
wrote:
 I said this several times now, and I'll say it again: I have 
 asked SEVERAL TIMES people INDIVIDUALLY to do things that are 
 HIGH IMPACT for the D language instead of something else that 
 was more important to them, to no avail.
Which things exactly?
Feb 11 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 10:08 AM, Andrej Mitrovic wrote:
 On Tuesday, 11 February 2014 at 17:12:37 UTC, Andrei Alexandrescu wrote:
 I said this several times now, and I'll say it again: I have asked
 SEVERAL TIMES people INDIVIDUALLY to do things that are HIGH IMPACT
 for the D language instead of something else that was more important
 to them, to no avail.
Which things exactly?
One simple example: specific regressions and blockers in bugzilla. Andrei
Feb 11 2014
parent reply "Andrej Mitrovic" <andrej.mitrovich gmail.com> writes:
On Tuesday, 11 February 2014 at 18:50:56 UTC, Andrei Alexandrescu 
wrote:
 One simple example: specific regressions and blockers in 
 bugzilla.
Are those the same as the ones which are marked as bounties? It could be that those problems are simply hard to fix. Many of them seem to be backend or debugging-related bugs. I'm saying not many people know how to fix these.
Feb 11 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 10:55 AM, Andrej Mitrovic wrote:
 On Tuesday, 11 February 2014 at 18:50:56 UTC, Andrei Alexandrescu wrote:
 One simple example: specific regressions and blockers in bugzilla.
Are those the same as the one's which are marked as bounties?
No.
 It could
 be that those problems are simply hard to fix. Many of them seem to be
 backend or debugging-related bugs. I'm saying not many people know how
 to fix these.
1. Not at all. Many were trivial issues technically. Walter and I fixed a few ourselves in short time. Kenji also is very active on those. For other contributors, there is no clear message of how damaging regressions are and how they delay our releases. 2. People work on harder problems. As an example I learned about literally yesterday: Martin Nowak is working on a REPL for D. Was it on our roadmap? No, not on the short list, not even on a longer list if there was one. Would I tell Martin to do a REPL? No. Would I tell Martin to drop work on the REPL and work on higher-impact items such as shared libraries on Windows and OSX? No. Do I like the idea of D having a REPL? Sure, and I'm thankful for it. 3. In the mythical organization, barring exceptional circumstances, people don't tell their managers: "Well, I understand this task you assigned to me is important, but it's hard to fix. Tell you what. I'll work on something else instead." Andrei
Feb 11 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
I just want to chime in here with Andrei, and emphasize that we don't tell 
anyone what to do here.

We do try to lead, inspire, cajole, reward, acknowledge, promote, etc., with 
varying degrees of success.
Feb 11 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:15:35 UTC, Andrei Alexandrescu 
wrote:
 develop themselves. But as I told Walter, for better or 
 (sometimes definitely) worse, our character flaws make history 
 inside the D community.
But I am also a hard core roleplayer… so you won't know when I am me, and when I am pulling your leg. I assume the same about you. ;-] The internet is a stage. What is real, what is not real? Hard to tell. What is a person, what is a character? Difficult question.
 This is a typical problem. Reviewing contributions is hard and 
 thankless work. I know how we solved it at Facebook for our 
 many open-sourced projects: we created a team for it, with a 
 manager, tracking progress, the works. This is _exactly_ the 
 kind of thing that can't be done in a volunteer community.
Maybe you can make some parts modular after you refactor into D. Then people can take ownership of modules and social recognition will encourage more commitment. I don't know the D social arena well enough to know if that works though.
 So I wasn't glib when I sent you to github. In a very concrete 
 sense, you'd be helping there a ton more in a fraction of the 
 time you spend posting.
But I don't want to do that when I am merely assessing D. I am not committed to D. Yet.
 I think D must not define itself in relation to any other 
 language.
I respect that position. Of course, it does not help if outsiders have been told that D is a better C++. It kinda sticks. Because people really want that. I am trying very hard to convince myself that D is more like a compiled application language, but the vision of a "better C++" is very firmly stuck. Of course, the problem with C++ is that it is used very differently by different people. D is appealing more to the high-level version of C++. It probably depends on when you first used C++ or what parts of C++ you are interested in.
 done and witnessed a number of such attempts, I think you're 
 exceedingly naive about what can be done with traditional 
 project management approaches in this case.
There you go ad hominem again. I have studied online virtual worlds where people volunteer for worse… But my point was more that you need to communicate a vision such that the people you want to attract don't sit on the fence. I am quite certain that more skilled C++ programmers would volunteer if they saw a vision they believed in. So maybe each of them wouldn't do more, but there would be more hands available…
 Ola, I'm sure you mean well. I trust you will find it within 
 yourself the best way to contribute to this community.
You really need to avoid this ad hominem stuff… You see, as a hardcore roleplayer I could be tempted to switch over into a sarcastic mode. And that would not be fair to you. ;-)
Feb 10 2014
next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Tuesday, 11 February 2014 at 00:25:35 UTC, Ola Fosheim Grøstad 
wrote:
 There you go ad hominem again.

 You really need to avoid this ad hominem stuff… You see, as a 
 hardcore roleplayer I could be tempted to switch over into a 
 sarcastic mode. And that would not be fair to you. ;-)
Ok, I'm a bit fed up by your attitude so I'll express a personal opinion now: it seems to me that when people express any kind of judgment, it's ad hominem against you. When you express judgment, it's not ad hominem, it's the fruit of some kind of analysis you made. You know that "ad hominem" doesn't mean "I disagree with you", right?
Feb 11 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 11 February 2014 at 09:55:09 UTC, Francesco Cattoglio 
wrote:
 Ok, I'm a bit fed up by your attitude so I'll express a 
 personal opinion now: it seems to me that when people express 
 any kind of judgment, it's ad hominem against you.
You either respond to assertions about technology or you ignore them. The moment you address the person and not the argument, you are going in the wrong direction. Making an analysis of a project as an artifact is not "ad hominem", even if the creator of the artifact strongly disagrees with the analysis. If the creator dislikes the analysis, he can ignore it. If the creator thinks it is interesting, he can respond to it. It is neither friendly nor unfriendly. It is usually not necessary, because most projects are pretty clear on where they are heading. In the case of D, it is not clear. It is not clear if there is enough momentum in the D community to sustain a real-time D either. These "debates" make the fog less thick so one can see possible directions for D. Which is a combination of management and the willingness of the D community to "be vocal about real time issues". Being complacent and meek is not going to change the direction of D.
Feb 11 2014
parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Tuesday, 11 February 2014 at 12:47:01 UTC, Ola Fosheim Grøstad 
wrote:
 Making an analysis of project as an artifact is not "ad hominem"
Expressing an opinion on your knowledge of a subject is not "ad hominem" either. A few posts ago, Manu wrote "I don't think you have made a game" = "I don't think you have the same amount of knowledge of the game industry". That is not ad hominem. You think that Andrei telling you to "do something" is "ad hominem". Again, it's not. He is just stating that sometimes talking is not enough. If you really care about knowing how many people are interested in getting a real-time D, open a poll somewhere on the internet, link it in a proper thread in the announce section, try to see what results you get, try to interpret them and so on. If you believe the people here can achieve something great, you should try to help coordinate the community. It's true that we lack organization basics. Are you knowledgeable about the subject? Are you interested in helping the community? If your answer is "yes" to both questions, then you should really do something, and I'm talking about non-coding stuff. Being a top contributor is not a requirement for helping in a tangible way.
 In the case of D, it is not clear.

 It is not clear if there is enough momentum in the D community 
 to sustain a real-time D either.
It's a community effort. AA and WB are not leaders in the strict sense. As I already told you, if you want to contribute by helping to coordinate efforts, I think all of us would be really happy about that.
Feb 11 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 4:25 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Monday, 10 February 2014 at 23:15:35 UTC, Andrei Alexandrescu wrote:
 This is a typical problem. Reviewing contributions is hard and
 thankless work. I know how we solved it at Facebook for our many
 open-sourced projects: we created a team for it, with a manager,
 tracking progress, the works. This is _exactly_ the kind of thing that
 can't be done in a volunteer community.
Maybe you can make some parts modular after you refactor into D. Then people can take ownership of modules and social recognition will encourage more commitment.
I think at this stage we need more people to start with. Someone pointed out recently we have 77 lifetime contributors to github, as opposed to e.g. Rust which has 292.
 I think D must not define itself in relation to any other language.
I respect that position. Of course, it does not help if outsiders have been told that D is a better C++. It kinda sticks. Because people really want that.
I don't think so, at all. Anyone working on D must drop the moniker "D is a better C++" like a bad habit, no two ways about that. Most of what it does is set C++ as the benchmark. People who already like C++ would be like "you wish" and people who hate C++ would be like "better crap is not what I need anyway".
 I am trying very hard to convince myself that D is more like a compiled
 application language, but the vision of a "better C++" is very firmly stuck.
For someone who hasn't been around for a while, maybe. I fail to see how D today is pitched as merely a better C++. We want to make D a great language all around, with system-level access and also convenience features.
 But my point was more that you need to communicate a vision that is such
 that the people you want to attract don't sit on the fence. I am quite
 certain that more skilled C++ programmers would volunteer if they saw a
 vision they believed in.
It's there in <h2> at the top of our homepage: "Modern convenience. Modeling power. Native efficiency." By the way, this whole "plop a vision page" thing doesn't seem to be quite popular: https://www.google.com/search?q=rust%20language#q=vision+site:rust-lang.org&safe=off https://www.google.com/search?q=vision%20site%3Apython.org https://www.google.com/search?q=vision%20site%3Aisocpp.org https://www.google.com/search?q=scala#q=vision+site:scala-lang.org&safe=off https://www.google.com/search?q=vision%20site%3Agolang.org Andrei
Feb 11 2014
next sibling parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 11 February 2014 at 17:55:36 UTC, Andrei Alexandrescu 
wrote:
 I think at this stage we need more people to start with. 
 Someone pointed out recently we have 77 lifetime contributors 
 to github, as opposed to e.g. Rust which has 292.
It's unfortunate that D's biggest competitor is Rust. Being pushed by Mozilla means they automatically have access to a larger pool of contributors who contribute to other Mozilla stuff, and who probably wouldn't have contributed to Rust otherwise. However, I think DDMD will bring about a small bump in contributors. Personally, I haven't used C++ in a while, and I've grown rusty. At this point, D is probably the language I'm most comfortable with, especially for hacking on something complex like a compiler. I imagine there are at least a few people who feel the same way and are waiting for DDMD before they take a crack at contributing.
Feb 11 2014
parent reply "Joseph Cassman" <jc7919 outlook.com> writes:
On Tuesday, 11 February 2014 at 20:02:45 UTC, Meta wrote:
 On Tuesday, 11 February 2014 at 17:55:36 UTC, Andrei 
 Alexandrescu wrote:
 I think at this stage we need more people to start with. 
 Someone pointed out recently we have 77 lifetime contributors 
 to github, as opposed to e.g. Rust which has 292.
[...] However, I think DDMD will bring about a small bump in contributors. Personally, I haven't used C++ in a while, and I've grown rusty. At this point, D is probably the language I'm most comfortable with, especially for hacking on something complex like a compiler. I imagine there are at least a few people who feel the same way and are waiting for DDMD before they take a crack at contributing.
I have to agree. It's been over ten years since I have done any C++ seriously and as a result feel a lot of friction when trying to get up to speed on working with the D compiler code-base. Truth be told, that is part of the reason I like D in the end, it's not C++. Being able to use D to hack on the compiler would make it much more feasible for me to contribute. Joseph
Feb 11 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Joseph Cassman"  wrote in message 
news:yqtkzxlmxepvdxkghqzg forum.dlang.org...

 On Tuesday, 11 February 2014 at 20:02:45 UTC, Meta wrote:
 However, I think DDMD will bring about a small bump in contributors. 
 Personally, I haven't used C++ in awhile, and I've grown rusty. At this 
 point, D is probably the language I'm most comfortable with, especially 
 for hacking on something complex like a compiler. I imagine there are at 
 least a few people who feel the same way and are waiting for DDMD before 
 they take a crack at contributing.
I have to agree. It's been over ten years since I have done any C++ seriously and as a result feel a lot of friction when trying to get up to speed on working with the D compiler code-base. Truth be told, that is part of the reason I like D in the end, it's not C++. Being able to use D to hack on the compiler would make it much more feasible for me to contribute. Joseph
Seriously, don't wait. DMD uses a D-like subset of C++, and DDMD uses a C++-like subset of D. Working on DMD is unlike any other C++ codebase I have ever worked on because of this, and is remarkably pleasant.
Feb 12 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 11 February 2014 at 17:55:36 UTC, Andrei Alexandrescu 
wrote:
 better C++. It kinda sticks. Because people really want that.
 I don't think so, at all. Anyone working on D must drop the moniker "D is a better C++" like a bad habit, no two ways about that. Most of what it does is set C++ as the benchmark. People who already like C++ would be like "you wish" and people who hate C++ would be like "better crap is not what I need anyway".
(I don't really agree, because whenever I use C++ I feel like implementing my own language or, recently, taking another look at D.) But for people looking for a system level programming language, C++ is a baseline. So you will invariably be measured against it. If you think in terms of "lost opportunities", I believe D loses out by not providing a compiler switch that turns off the GC and issues errors whenever GC-dependent constructs are being used (sketched below). Not elegant, but simple, and it makes it easy to start doing real time programming because you then have near parity with C++ baseline requirements. If you have limited resources you need to think in terms of baselines, key requirements and where you lose your audience. D loses opportunities for growth by not having a stable release that is perceived as current. You could (in theory) make a baseline release by focusing on the compiler and the runtime, pushing features that are not essential for forward compatibility into the next release, and only releasing the parts of Phobos you are happy with. A known current, stable and supported release makes it possible for others to plan on using D for production, and would open up branches for their own projects, like real-time features. I believe that by delaying a stable release in order to make it feature complete, you lose out, because you move out of the planning window (let's say 6-12 months) for those assessing possible technologies.
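Concretely, such a switch would have to diagnose code like this (a sketch of mine, not an existing compiler flag; every commented line allocates from the GC heap behind the scenes):

string[] gcDependent(string name, int x)
{
    auto buf = new int[](16);          // 'new' allocates on the GC heap
    string greeting = "hello " ~ name; // array concatenation allocates
    int[string] counts;
    counts[name] = 1;                  // AA insertion allocates
    auto addX = (int y) => x + y;      // closure captures x, so the stack
                                       // frame is moved to the GC heap
    return [greeting];                 // array literal allocates
}

A switch like that would simply turn each commented line into a compile error, leaving malloc/free or custom allocators as the remaining options.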
 We want to make D a great language all around, with 
 system-level access and also convenience features.
If you go too hard in the Application Language direction, the System Level side will be less visible. And there is more of a future for a system level compiled language, even with limited tool support.
 It's there in <h2> at the top of our homepage: "Modern 
 convenience. Modeling power. Native efficiency."
Slogans don't tell me much; I think we talk past each other here.
 By the way, this whole "plop a vision page" thing doesn't seem 
 to be quite popular:
I think Rust's homepage and documentation are clear. By "vision" I mean the future which the tool will bring with it, as it appears to the reader. That which brings excitement and engagement: "I want that future". Basically the long term goals. Those that you might not achieve in generation 1, 2, 3… but which you are approaching. What I found exciting about W.B.'s original D1 was those aspects where D would also bring along semantics that make it possible to optimize better than C/C++, such as whole program optimization. It does not have to be in this generation of compilers to be communicated as a language goal. Immutability, pure, templates etc. are useful, but not exciting. And not unique to D. Anyway, I wish you the best of luck. I look forward to your next stable release and hope to be able to adapt the stable runtime to real time uses and make it available as patches for those interested. A long running development branch is outside my time window (e.g. I might have moved on to other things by the time it can be expected to reach maturity). Cheers, Ola.
Feb 14 2014
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 10 February 2014 06:15, francesco cattoglio <
francesco.cattoglio gmail.com> wrote:

 However, the last point was directed to the D community. The language
 needs to be more focused on being very good at some key areas, not cover
 everything.
I totally agree on this, but the problem here is that there are game developers out there willing to use D. I also see lots of movement from hobbyists. We can't ignore them completely. Indefinitely long pauses are really bad for them, and something needs to be done, be it in user code, as a library solution or as a core part of the language. I agree that AAA titles are not the main target right now, but this doesn't mean indie projects shouldn't be doable. After all, 150 millisecond pauses are really annoying for pretty much any first person game.
The only company I know of that has made a commercial commitment to D is a AAA games company...
Feb 09 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 February 2014 14:15, Manu <turkeyman gmail.com> wrote:

The only company I know of that has made a commercial commitment to D is a AAA games company...
Sorry, I obviously mean, "the only *games* company..."

A 150ms pause isn't only annoying, it's the sort of bug that might have your title refused for release by the platform vendors.

And people seem to forget promptly after every single time I repeat myself:
 * The GC frequency of execution is directly proportional to the amount of _free memory_. In console games; NONE.
 * The length of the associated pause is directly proportional to the amount of memory currently in use. In console games; all of it.

This doesn't only describe games, it describes any embedded environment.
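The second point is easy to demonstrate. A small timing sketch (assuming the 2014-era std.datetime StopWatch API and ~256 MiB of RAM to spare); the collection time grows with the amount of live data the mark phase has to scan:

import core.memory : GC;
import std.datetime : StopWatch;
import std.stdio : writefln;

void main()
{
    // Keep ~256 MiB alive so the collector has a lot to scan.
    auto blocks = new ubyte[][](256);
    foreach (ref b; blocks)
        b = new ubyte[](1024 * 1024);

    StopWatch sw;
    sw.start();
    GC.collect();   // stop-the-world collection over all live data
    sw.stop();
    writefln("collect with ~256 MiB live: %s ms", sw.peek.msecs);
}

Run it with more and more live memory and the pause scales accordingly; on a console that "live memory" is everything you have.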
Feb 09 2014
next sibling parent reply "francesco cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 04:26:10 UTC, Manu wrote:
 Sorry, I obviously mean, "the only *games* company..."
That was a given. However I think AAA titles have the manpower to avoid those pauses, since the amount of work toward optimization is huge anyway, am I right? Ofc you still need minimal backend from the compiler and runtime support. If you lack control on internals, there's no way for you to optimize anything.
 And people seem to forget promptly after every single time I 
 repeat myself:
  * The GC frequency of execution is directly proportional to 
 the amount of
 _free memory_. In console games; NONE.
  * The length of the associated pause is directly proportional 
 to the
 amount of memory currently in use. In console games; all of it.
For "simple" games, it would be nice to have a better GC and cut down allocations from the standard library. I guess that would suffice, no need to move to ARC.
Feb 09 2014
parent reply Manu <turkeyman gmail.com> writes:
On 10 February 2014 17:58, francesco cattoglio <
francesco.cattoglio gmail.com> wrote:

 On Monday, 10 February 2014 at 04:26:10 UTC, Manu wrote:

 Sorry, I obviously mean, "the only *games* company..."
That was a given. However I think AAA titles have the manpower to avoid those pauses, since the amount of work toward optimization is huge anyway, am I right? Ofc you still need minimal backend from the compiler and runtime support. If you lack control on internals, there's no way for you to optimize anything.
If we wanted to spend that time+manpower (read, money & overtime/sanity) on bullshit like that, we have no reason to adopt D; we already have C/C++, and we already have decades of experience mitigating that nightmare. The point is, we are REALLY sick of it. Why would we sign up to replace it with more of the same thing?

 And people seem to forget promptly after every single time I repeat myself:
  * The GC frequency of execution is directly proportional to the amount of
 _free memory_. In console games; NONE.
  * The length of the associated pause is directly proportional to the
 amount of memory currently in use. In console games; all of it.
For "simple" games, it would be nice to have a better GC and cut down allocations from the standard library. I guess that would suffice, no need to move to ARC.
For "simple" games, existing managed languages already serve that space; their GC situation is much better, and support is offered by major multinational corporations. Not to say that they shouldn't be supported in D too, but that's not a target of interest to me, and I don't think it's an area which makes a particularly compelling argument for adoption of D. I've said before, console games is an industry desperate for salvation, and D makes a very strong case here in lieu of any other realistic alternatives... as long as this memory stuff is addressed acceptably. If there were to be some killer potential markets identified for D, I think this is definitely one of them.
Feb 10 2014
next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 08:58:29 UTC, Manu wrote:
 If we wanted to spend that time+manpower (read, money & 
 overtime/sanity) on
 bullshit like that, we have no reason to adopt D; we already 
 have C/C++,
 and we already have decades of experience mitigating that 
 nightmare.
 The point is, we are REALLY sick of it. Why would we sign up to 
 replace it
 with more of the same thing.
What I meant: you can't get a language that performs at its best without doing your homework. Ideal performance requires time and money, no matter the tools. This doesn't mean I expect you to do the same amount of work you have to do in C++. Perhaps I expressed myself in a bad way, so here is a second try at that: I expect only simpler games to run at ideal speed without efforts. "Simple" here doesn't mean "generic 2D platformer", it should be interpreted as "not cutting edge". If you really need to squeeze every bit of performance, you can't only rely on automation.
Feb 10 2014
parent Manu <turkeyman gmail.com> writes:
On 10 February 2014 19:17, Francesco Cattoglio <
francesco.cattoglio gmail.com> wrote:

 On Monday, 10 February 2014 at 08:58:29 UTC, Manu wrote:

 If we wanted to spend that time+manpower (read, money & overtime/sanity)
 on
 bullshit like that, we have no reason to adopt D; we already have C/C++,
 and we already have decades of experience mitigating that nightmare.
 The point is, we are REALLY sick of it. Why would we sign up to replace it
 with more of the same thing.
What I meant: you can't get a language that performs at its best without doing your homework. Ideal performance requires time and money, no matter the tools. This doesn't mean I expect you to do the same amount of work you have to do in C++. Perhaps I expressed myself in a bad way, so here is a second try at that: I expect only simpler games to run at ideal speed without efforts. "Simple" here doesn't mean "generic 2D platformer", it should be interpreted as "not cutting edge". If you really need to squeeze every bit of performance, you can't only rely on automation.
Fair enough, and I'd say you're right. D will not change the careful application of region allocators or pools or any number of tightly controlled, context-specific allocation patterns. But D has a lot of language features that allocate: closures, strings, concatenation, AAs, 3rd party libraries that also allocate on your behalf; and not ALL your code should have to apply carefully controlled allocation patterns. The existing GC is plainly unacceptable, but some form of GC must be present, otherwise D doesn't work. GC backed ARC is a realistic option as far as I can tell; it's the only one I've ever heard of that ticks all the boxes, and it would seem to satisfy everyone in an acceptable way (ie, no additional complexity for 'don't care' users). I'm not married to it by any means, but it seems to be the only horse in the race. I haven't heard any other options that tick all the boxes.
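For illustration only, a minimal single-threaded sketch of what "GC backed ARC" could look like (the RCBuffer name and layout are mine, not an actual proposal). The payload still lives in the GC heap, so the collector remains a backstop for reference cycles, but the common case releases memory eagerly with GC.free and never waits for a collection:

import core.memory : GC;

struct RCBuffer(T)
{
    private static struct Payload { size_t refs; T[] data; }
    private Payload* p;

    this(size_t n)
    {
        p = new Payload;    // GC allocation: still scanned, still collectable
        p.refs = 1;
        p.data = new T[](n);
    }

    this(this) { if (p) ++p.refs; }   // copying a reference bumps the count

    ~this()
    {
        if (p && --p.refs == 0)
        {
            GC.free(p.data.ptr);      // eager release, no collection pause
            GC.free(p);
        }
        p = null;
    }

    inout(T)[] opSlice() inout { return p ? p.data : null; }
}

Not thread-safe as written (the count would need atomic operations), and a slice escaping past the last reference is unsafe, which is exactly the hard part of the RC design discussions; but it shows how eager freeing and a collecting GC can coexist on one heap.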
Feb 10 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
On 10/02/2014 09:58, Manu wrote:
 [...]

 I've said before, console games is an industry desperate for salvation,
 and D makes a very strong case here in lieu of any other realistic
 alternatives... as long as this memory stuff is addressed acceptably.

 If there were to be some killer potential markets identified for D, I
 think this is definitely one of them.
IMO there is a big hole that C/C++ developers dream of seeing filled. We want:
- a less error-prone language: DONE
- a better syntax: DONE
- advanced meta-programming features: DONE
- fast builds: DONE
- a rich framework: has some potential (a QtCore equivalent and GUI libraries are missing), progressing really slowly
- multi-platform: almost DONE
- cross-platform (binary portable): a potential route via LLVM bytecode
- no performance concessions: GC issues
- better tools (an IDE with working refactoring tools, a better debugger,...)
- built-in tools (unittest, static analyser,...): DONE

For the moment the D GC was a real pain for me on DQuick; I am not able to easily control when my OpenGL resources are released. I can't wait for a GC collect, because handles are far scarcer than central memory. The destruction order is also a real issue. I certainly have to learn new patterns, but I tried adding a release method on Resource objects plus a debug check that it was correctly called (see the sketch at the end of this message). I wasn't able to use Throwable.TraceInfo because it's a class, which means it can't be printed in a destructor. So if a user forgets to call the release method on a resource (a leak), I just can't warn him with a helpful message...

For me leaks aren't just unreferenced pointers, but also and mainly chunks of resources still retained when no longer necessary, because those are harder to track and are critical on small devices. A GC won't help you much here, because IMO it acts like a pool over every object. It seems a lot of developers aren't concerned about memory usage when there is a GC. I am also concerned about all applications using a GC, because it forces users of a multi-tasking OS to buy more memory or close applications. I just bought 4 GB more because I can't keep my SmartGit, VS, Chrome,... open, when Chrome creates a process per tab??? Please stop pushing us to create memory-consuming applications; memory is not cheap.

---

Maybe D is too ambitious; what we really need is a language that can be updated more easily than C++, so that it is usable in industry as soon as possible. Seriously, I work for a little game company; we are few developers (about 9 on 3 different projects) and we don't have the means to use IncrediBuild or such tools to save time. Reducing compile time and having a great framework are critical points for us. We have relatively few memory issues by managing memory ourselves.

We use C++ because our targets are numerous:
- Windows Pocket PC: not anymore
- Symbian: not anymore
- Nintendo DS: not anymore
- Nintendo Wii: not anymore
- iOS
- Windows
- MacOS X
- Android
- Xbox 360
- PS3

Our applications/games can have some performance-critical aspects, but we don't have to optimize all our code; cache misses are accepted :-) Our optimizations are essentially high level or only concern the OpenGL renderer, plus a few algorithms like simple occlusion, tiling,...

Sadly, for our next games we didn't use our internal game engine but Unity 3D. We made some tests at doing a scene like those of RTMI (http://www.youtube.com/watch?v=fGtfhKrg3l0) on Unity, without success, because of the specific optimizations required for animations done with a lot of texture updates. A D code base would also be easier for other developers to regain control of.
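Roughly, the release pattern mentioned above looks like this (GLTexture and the commented-out GL call are placeholders, not DQuick's actual API): release() frees the scarce handle deterministically, and a debug check in the destructor flags anything the user forgot, using only C-level I/O because the destructor may run during a collection:

import core.stdc.stdio : fprintf, stderr;

class GLTexture
{
    private uint handle;
    private bool released;

    this(uint h) { handle = h; }

    void release()
    {
        if (!released)
        {
            // glDeleteTextures(1, &handle); // real GL cleanup goes here
            released = true;
        }
    }

    ~this()
    {
        // May run during a GC sweep: no allocating, no touching other GC
        // objects (this is why Throwable.TraceInfo, being a class, is
        // unusable here).
        debug if (!released)
            fprintf(stderr, "leaked GL texture %u\n", handle);
    }
}

Call sites then pair construction with scope(exit) tex.release(); so the handle comes back even on early returns or exceptions.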
Feb 10 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 04:26:10 UTC, Manu wrote:
 The only company I know of that has made a commercial 
 commitment to D is a
 AAA games company...
Unfortunately a AAA games company is not setting down the goal post for D. As long as the leads of the project have non-real-time stuff and STL-like libraries as their primary interests, things won't develop in your (and mine and Francesco's) direction. It won't happen until the leads of the project COMMIT to a MEASURABLE goal and a major effort is made to meet that goal. That means putting other goals aside until that measurable goal has been met.
 Sorry, I obviously mean, "the only *games* company..."
Yeah, but that games company needs to commit to taking a lead role so that the goal post and vision changes in that direction.
 And people seem to forget promptly after every single time I 
 repeat myself:
  * The GC frequency of execution is directly proportional to 
 the amount of
 _free memory_. In console games; NONE.
  * The length of the associated pause is directly proportional 
 to the
 amount of memory currently in use. In console games; all of it.

 This doesn't only describe games, it describes any embedded 
 environment.
I've already stated that I don't believe in using D for anything multi-media. It is not part of the project vision to be good at that from what I am seeing, and I am not going to believe it is going to be good for that until the project leads commit to measurable goals. The leads believe in meritocracy, that means the project will flail around in any direction that is fun. That means there are no rails. There is no reason to pull or push a train that is not on rails. To get D to be a true better C++ you need a concerted effort.
Feb 10 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 February 2014 18:59, <"Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" puremagic.com> wrote:

 On Monday, 10 February 2014 at 04:26:10 UTC, Manu wrote:

 The only company I know of that has made a commercial commitment to D is a
 AAA games company...
Unfortunately a AAA games company is not setting down the goal post for D.
 As long as the leads of the project have non-real-time stuff and STL-like
 libraries as their primary interests, things won't develop in your (and
 mine and Francesco's) direction.
I'm confused. A couple of posts ago, you seemed to be annoyed at me for consistently raising games as a target application space that was unrealistic, or not 'down to earth', or some fairly niche and irrelevant target workload. Video games is a bigger industry than the movie industry. Casual/phones have captured a large slice in recent years, but the rest of the pie is almost entirely games consoles, which I don't think is a diminishing industry so much as the casual/phone space is rather growing the pie in overall volume. The industry is expanding as a whole. Yes, both subsets of the industry are important, but in the casual space there are already other realistic and established languages as options, and D is much further away from use in those industries; mature cross-compilers, OS support, cross-language support, etc. are all requirements for casual/phone games in D.

 It won't happen until the leads of the project COMMIT to a MEASURABLE goal
 and a major effort is made to meet that goal. That means putting other
 goals aside until that measurable goal has been met.
I don't think anyone in the D community really has that power. If Walter were to dictate direction that was unpopular enough, the developer base would promptly dissolve. I agree in some sense that I would like to see some set of specific goals agreed and targeted with each release cycle. Ideally, with a roadmap towards particular goalposts which may enable new usage spaces. It may be possible that Walter and Andrei might have that sort of rallying power, but if the goal is not of interest to the majority of contributors, it just won't happen regardless of how many people post happy thoughts about the goal. Contributing to D is, in some way, a form of recreation for contributors.

 Sorry, I obviously mean, "the only *games* company..."

 Yeah, but that games company needs to commit to taking a lead role so that
 the goal post and vision changes in that direction.
Are you saying I don't complain enough? :) (at least, last year before I left)
I would never want to assert authority on the language direction on behalf of a single company; like you say, it's a niche target, although a very big niche which I think will really benefit from D. I just make sure that people never forget that the niche exists, what the requirements are, and that tends to result in those targets being factored into conversations and designs.

 And people seem to forget promptly after every single time I repeat myself:
  * The GC frequency of execution is directly proportional to the amount of
 _free memory_. In console games; NONE.
  * The length of the associated pause is directly proportional to the
 amount of memory currently in use. In console games; all of it.

 This doesn't only describe games, it describes any embedded environment.
I've already stated that I don't believe in using D for anything multi-media.
That's a shame, I see that as one of its greatest (yet unrealised) potentials. What are some other reasons anyone would reach for a native language these days? If it's not an operating system, or some enterprising web service... what else demands native hardware access and performance more than embedded development in a *highly* aggressive and competitive industry?

 It is not part of the project vision to be good at that from what I am
 seeing, and I am not going to believe it is going to be good for that until
 the project leads commit to measurable goals.

 The leads believe in meritocracy, that means the project will flail around
 in any direction that is fun. That means there are no rails. There is no
 reason to pull or push a train that is not on rails. To get D to be a true
 better C++ you need a concerted effort.
Yeah, I agree in theory... I think the short-term goals need to be set by the target requirements of the people actually using it, then they can produce stories about how it went well for them.
Feb 10 2014
next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 09:36:53 UTC, Manu wrote:
 Are you saying I don't complain enough? :) (at least, last year 
 before I left)
Just out of curiosity: what do you mean exactly?
Feb 10 2014
parent reply Manu <turkeyman gmail.com> writes:
On 10 February 2014 20:12, Francesco Cattoglio <
francesco.cattoglio gmail.com> wrote:

 On Monday, 10 February 2014 at 09:36:53 UTC, Manu wrote:

 Are you saying I don't complain enough? :) (at least, last year before I
 left)
Just out of curiosity: what do you mean exactly?
I left Remedy a year back, so I don't speak on their behalf anymore. Is that what you mean? He said "that games company needs to commit to taking a lead role so that the goal post and vision changes in that direction", and I'm not sure what that could mean in terms of tangible action, other than my advocating the things that were of critical importance to our project at the time.
Feb 10 2014
next sibling parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 10:53:05 UTC, Manu wrote:
 On 10 February 2014 20:12, Francesco Cattoglio <
 francesco.cattoglio gmail.com> wrote:
 I left Remedy a year back, so I don't speak on their behalf 
 anymore. Is
 that what you mean?
Yes, exactly :) I understood "I stopped complaining because I left the forums" and I was like "wait, what?".
 He said "that games company needs to commit to taking a lead 
 role so that
 the goal post and vision changes in that direction", and I'm 
 not sure what
 that could mean in terms of tangible action, other than my 
 advocating the
 things that were of critical importance to our project at the 
 time.
Reports on previous experience are indeed invaluable for future progression.
Feb 10 2014
prev sibling parent reply "Szymon Gatner" <noemail gmail.com> writes:
On Monday, 10 February 2014 at 10:53:05 UTC, Manu wrote:

 I left Remedy a year back, so I don't speak on their behalf 
 anymore. Is
 that what you mean?
Sorry for being OT, but where do you work now?
Feb 10 2014
parent Manu <turkeyman gmail.com> writes:
On 10 February 2014 22:00, Szymon Gatner <noemail gmail.com> wrote:

 On Monday, 10 February 2014 at 10:53:05 UTC, Manu wrote:


 I left Remedy a year back, so I don't speak on their behalf anymore. Is
 that what you mean?
Sorry for being OT, but where do you work now?
I'm very happily unemployed! :) ... but that'll probably have to change some time soon. Might have to move country again, not much happening in Australia anymore after the near total obliteration of the industry here that devastated most of my friends and colleagues.
Feb 10 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 09:36:53 UTC, Manu wrote:
 I'm confused. A couple of posts ago, you seemed to be annoyed 
 at me for
 consistently raising games as a target application space that 
 was
 unrealistic, or not 'down to earth', or some fairly niche and 
 irrelevant
 target workload.
Sorry about that. I have been following D since 2005, on and off, and kept waiting for the "better C++" to materialize so I could use it to do fun stuff (audio, 3D, raytracing etc). One hobby of mine is to read Intel/AMD CPU docs, raytracing papers and compiler stuff, and discussing those aspects and improving my understanding of those areas is fun. I am losing hope in that direction for D, because I don't think D has anyone with a strong interest in project management that can drive it in that direction. The responses from the D leads show signs, not of a lack of skills, but of a lack of interest in project management "theory" (and unfortunately, that is an area where I know the theory quite well, since I majored in that area).

On the fun side I want what you want. I would love to see you be the third lead on D; to get a person that "falls asleep thinking real time" into that position would make me believe in the project.

On the "pay for bread" side I am looking at D from the perspective of having an alternative to Go on the server side. I guess that has made me "janus-faced" in this discussion. What would make me tilt in favour of Go instead of D is that it has corporate backing and therefore gives priority to production level stability, even though I like the semantics of D better. Stability is important to me since I personally pay the price (literally) for technical flaws, since I offer fixed-price solutions.

Instead of A.A. and W.B. going defensive (and yes, it is painful to see your child get a needle in the foot at the doctor, to get that vaccine that will keep the child healthy in the long term), they should try to get someone onto the team of leads that has an interest in software development process and software process improvement. Or at the very least, one person with a real time focus.

(Please note that I found it quite amusing that you claimed that I was ignorant of long running games, since I studied Anarchy Online from inception to end in a qualitative manner while trying to figure out the design properties of the design domain, from a system development perspective. You don't have to convince me, I do understand where you are coming from and enjoy reading about your perspective. ;^)
 Video games is a bigger industry than the movie industry. 
 Casual/phones
 have captured a large slice in recent years, but the rest of 
 the pie is
 almost entirely games consoles, which I don't think is a 
 diminishing
 industry so much as the casual/phone space is rather growing 
 the pie in
 overall volume. The industry is expanding as a whole.
Yes, unfortunately the revenue in the mobile app space is very low for the majority of developers, which requires tools that make them very productive at the cost of technical quality. So lots of stuff is being done with cheap (and not really performant) tech to cut down on dev time. A more performant and productive language could certainly make a difference, but to get there you need to focus on that niche, otherwise it will take too many years to catch up with the alternatives (with their ecosystem). And the landscape keeps changing very quickly. Companies that offer 3rd party solutions fold all the time. So mobile devs are "jaded".
 I don't think anyone in the D community really has that power. 
 If Walter
 were to dictate direction that was unpopular enough, the 
 developer base
 would promptly dissolve.
Yes, some would leave, but others would join. Those who today look at D say:
- "This is kind of cool, but not quite there yet"
- "when can I expect to see it land in the area where it makes me productive"
- "is this cart worth pushing, can we actually make a significant improvement here or do I have to push this cart all by myself"
I would imagine that there are more people sitting on the fence than not. What made Linux work out was that they were aiming for a well defined vision, Unix. Progress was easy to measure. What made Linux fail on the desktop was that they did not have a well defined vision, so the community spread out on N alternatives and progress was hard to measure. This is a bit simplistic, but Open Source projects that do not have a strongly projected vision tend to wither and dissolve over time.
 the goal. Contributing to D is, in some way, a form of 
 recreation for contributors.
But you still need a clear vision and well defined goals, because for every "fun" bit there are 2 "unfun" bits. For every "excellent feature", you have to axe "2 nice to haves". (kind of)
 Are you saying I don't complain enough? :) (at least, last year 
 before I left)
 I would never want to assert authority on the language 
 direction on behalf
 of a single company, like you say, it's a niche target, 
 although a very big
 niche which I think will really benefit from D.
Actually, I think you have the passion to put forth a vision that could bring D to real time and thus make it a project that makes "fun" possible. With no "real time" person on the team I will probably take the "hobby focus" and enjoy discussing technological possibilities (such as the discussion we had about ref counting recently). If that makes A.A. upset. Great. He should be. I am implying that D needs leadership. He should take leadership. If he does not want to listen. Well, in that case I am not forcing him to read what I write. But pointing to github is pointing in the wrong direction. Github tracks missing bolts and nuts, not a skewed skeleton.
 I just make sure that people never forget that the niche 
 exists, what the
 requirements are, and that tends to result in those targets 
 being factored
 into conversations and designs.
I am perfectly cool with that. If AAA games is the vision. Good. My prime gripe is the lack of a clearly stated vision. I could go with any of them.
 That's a shame, I see that as one of it's greatest (yet 
 unrealised) potentials. What are some other reasons anyone 
 would reach for a native language these days?
Scalable, low-resource servers. Servers that boot up really fast and handle many connections. I am currently musing over OSv. It is a kernel written in C++ that can run on top of KVM. Having something like Go or D on that platform could be interesting. Backing caches/databases/web services for low revenue mobile apps.
 If it's not an operating system, or some enterprising web 
 service... what
 else commands native hardware access and performance than 
 embedded
 development in a *highly* aggressive and competitive industry?
Again, I don't disagree. *smooch* ;)
Feb 10 2014
next sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
On 10/02/2014 13:04, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:
 [...]
 Yes, unfortunately the revenue in the mobile app space is very low for
 the majority of developers, which requires tools that make them very
 productive at the cost of technical quality. So lots of stuff is being
 done with cheap (and not really performant) tech to cut down on dev time.
Yes, I can't tell you how hard it is. Plus, publishers have historically been much more concerned with the market than with product quality, because "mobile apps" are seen as cheap software. So few people take them seriously; maybe only Apple understands that friendly applications have to be perfectly polished to be a commercial success.
 A more performant and productive language could certainly make a
 difference, but to get there you need to focus on that niche, otherwise
 it will take too many years to catch up with the alternatives (with
 their eco system). And the landscape keeps changing very quickly.
 Companies that offer 3rd party solutions fold all the time. So mobile
 devs are "jaded".

 I don't think anyone in the D community really has that power. If Walter
 were to dictate direction that was unpopular enough, the developer base
 would promptly dissolve.
Yes, some would leave, but others would join. Those who today look at D and say: - "This is kind of cool, but not quite there yet" - "when can I expect to see it land in the area where it makes me productive" - "is this cart worth pushing, can we actually make a significant improvement here or do I have to push this cart all by myself" I would imagine that there are more people sitting on the fence than not. What made Linux work out was that they were aiming for a well defined vision, Unix. Progress was easy to measure. What made Linux fail on the desktop that they did not have a well defined vision, so the community spread out on N alternatives and progress was hard to measure. This is a bit simplistic, but Open Source projects that does not have a strongly projected vision tends to wither and dissolve over time.
 the goal. Contributing to D is, in some way, a form of recreation for
 contributors.
But you still need a clear vision and well defined goals, because for every "fun" bit there is 2 "unfun" bits. For every "excellent feature", you have to axe "2 nice to haves". (kind of)
 Are you saying I don't complain enough? :) (at least, last year before
 I left)
 I would never want to assert authority on the language direction on
 behalf
 of a single company, like you say, it's a niche target, although a
 very big
 niche which I think will really benefit from D.
Actually, I think you have the passion to put forth a vision that could bring D to real time and thus make it a project that is making "fun" possible. With no "real time" person on the team I probably will take the "hobby focus" and enjoy discussing technological possibilites (such as the discussion we had about ref counting recently). If that makes A.A. upset. Great. He should be. I am implying that D needs leadership. He should take leadership. If he does not want to listen. Well, in that case I am not forcing him to read what I write. But pointing to github is pointing in the wrong direction. Github tracks missing bolts and nuts, not a skewed skeleton.
 I just make sure that people never forget that the niche exists, what the
 requirements are, and that tends to result in those targets being
 factored
 into conversations and designs.
I am perfectly cool with that. If AAA games is the vision, good. My prime gripe is the lack of a clearly stated vision; I could go with any.
Me too, and so do a lot of people in the multimedia/game industry who follow D from afar. I know a few interested people who turned their eyes back to C++ immediately because of the GC (it's maybe too soon).
 That's a shame, I see that as one of it's greatest (yet unrealised)
 potentials. What are some other reasons anyone would reach for a
 native language these days?
Scalable, low-resource servers. Servers that boot up really fast and handle many connections. I am currently musing about OSv. It is a kernel written in C++ that can run on top of KVM. Having something like Go or D on that platform could be interesting. Backing caches/databases/web services for low-revenue mobile apps.
 If it's not an operating system, or some enterprising web service... what
 else commands native hardware access and performance than embedded
 development in a *highly* aggressive and competitive industry?
Again, I don't disagree. *smooch* ;)
Feb 10 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 4:04 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 One hobby of mine is to read Intel/AMD CPU docs, raytracing papers and
 compiler stuff, and discussing those aspects and improving my
 understanding of those areas is fun. I am losing hope in that direction
 for D, because I don't think D has anyone with a strong interest in
 project management that can drive it in that direction. The responses
 from the D leads show signs, not of a lack of skills, but a lack of
 interest in project management "theory" (and unfortunately, that is an
 area where I know the theory quite well since I majored in that area).
Terrific. The challenge here is to adapt project management theory to the realities of a volunteer project.
 Instead of A.A. and W.B. going defensive (and yes it is painful to see
 your child get a needle in the foot at the doctor to get that vaccine
 that will keep the child healthy in the long term) they should try to
 get someone into the team of leads that has an interest in software
 development process and software process improvement.
We're not getting defensive here. Clearly we could and should move faster than we do, and there must be ways to be better at that. I'm all ears on advice on how to do better. I just have difficulty seeing how more management would help. Andrei
Feb 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:20:50 UTC, Andrei Alexandrescu 
wrote:
 Terrific. The challenge here is to adapt project management 
 theory to the realities of a volunteer project.
Yes, I understand that. I guess it often means that key people have to do the "unfun" stuff and let the more intermittent volunteers do the "fun" stuff… But I believe that "democratic design" requires very strong communication of vision if you want to create something new (otherwise you tend to end up with bastardized copies of what exists, since people will gravitate towards the safe common ground).
 that. I'm all ears on advice on how to do better. I just have 
 difficulty seeing how more management would help.
Just a suggestion of some possibilities:

Externally (front of web page):
- More clear communication of the boundaries of the project.
- Stating clearly what ground you do not cover.
- Defining short term/long term goals on your front page.
- Make it visible when they are met or changed.

Internally (you probably do this already in some form):
- Map out a list of dependencies between long term goals and use it for planning, so that people who are more inclined to do proof-of-concept stuff can help out.
- Some sort of social reward mechanism (borrow from MMOs?). I like the bounty stuff, but social rewards are much stronger.
- Make commitments in key areas, like stating how the project will change if you cannot meet deadlines; like a priori stating what you will do with the GC if benchmark X is not met by date N. That could make those wanting the GC push harder and those not wanting the GC more complacent. Ok, maybe the wrong thing to do for the GC, but you get the idea: get the direction set.

(Is it reasonable to limit features/production-quality to the shared common denominator of 3 backends? Can you somehow mitigate that?)
Feb 10 2014
prev sibling next sibling parent reply "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 08:59:28 UTC, Ola Fosheim Grøstad 
wrote:
 It won't happen until the leads of the project COMMIT to a 
 MEASURABLE goal and a major effort is made to meet that goal. 
 That means putting other goals aside until that measurable goal 
 has been met.
I'm sorry, but I think you are misinterpreting how the community works. Don't get me wrong, I've only been here for 2 years and I'm not a guru of social interactions, but to me it's clear that the D "project" is a bit different from your usual "leaders decide, everyone else follows". The "heads" (read: Andrei and Walter) surely have some "powers". Perhaps they have abused those in the past (introducing features without general consensus), and they might be able to impose a veto on a feature, but they won't prevent you from contributing, if that contribution is approved by the rest of the user base.
 Yeah, but that games company needs to commit to taking a lead 
 role so that the goal post and vision changes in that direction.
"Leading role" is rather generic in this kind of organization. A "leader" is more or less someone that you listen to, someone you trust because you think what they are asking and proposing rational stuff. Personally, as an example, I listen to Manu and I listen to Daniel Murphy, because they appear to have a nice project that can give some great visibility to D. There is a problem: most of the times the user of a tool has no time to work on the tool itself. What should Manu do other than going to Dconf, presenting their hard work, and convincing his coworkers? The "project leaders" have really limited resources. Stating a vision for the D as a language is useless if you don't have the resources to achieve it. Sure, if you have your voice heard (like Andrei) it's easier to convince other people to share your vision, but this doesn't mean you can force people on working on something extremely specific. This nets to having zero "traditional" leader power. What could be done is doing a massive crowdfunding campaign, get a few full-time hired developers, and change that.
 The leads believe in meritocracy, that means the project will 
 flail around in any direction that is fun. That means there are 
 no rails. There is no reason to pull or push a train that is 
 not on rails. To get D to be a true better C++ you need a 
 concerted effort.
No, first of all you need the same amount of economic backing. The one backing the project will shape it the most. True democracy is pure utopia. People have different interests and different ideas, which are often conflicting. In the end it's all about resources.
Feb 10 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 10:04:53 UTC, Francesco Cattoglio 
wrote:
 I'm sorry, but I think you are misinterpreting how the 
 community works. Don't get me wrong, I've only be here for 2 
 years and I'm not a guru of social interactions, but to me it's 
 clear that the D "project" is a bit different from your usual 
 "leaders decide, everyone else follows".
I am not going into this, because then I would have to go down into a theoretical discussion on group dynamics, formal/informal hierarchies and lots of different schools of thought and models. That would be a long road to walk. ;-)

All groups need to define their boundaries. If they don't do it clearly, each person will define his own boundaries and you will enter a process of negotiation. Then you will have a clash of those boundaries, causing various dysfunctional dynamics. Not defining your boundaries is not an option: it is going to happen. The question is, how much time do you want to spend on that process before entering a productive state?
Feb 10 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 10:04:53 UTC, Francesco Cattoglio 
wrote:
 On Monday, 10 February 2014 at 08:59:28 UTC, Ola Fosheim 
 Grøstad wrote:
 The leads believe in meritocracy, that means the project will 
 flail around in any direction that is fun. That means there 
 are no rails. There is no reason to pull or push a train that 
 is not on rails. To get D to be a true better C++ you need a 
 concerted effort.
[...]
I forgot to comment on this. No, I don't think it is only a matter of resources. For instance, if I had the time I would most certainly consider writing a pack-rat parser for a modified subset of D that builds an AST for clang. That's actually doable for 1-3 people.
Feb 10 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 10 February 2014 at 14:24:06 UTC, Ola Fosheim Grøstad 
wrote:
 No, I don't think it is only a matter of resources. For 
 instance, if I had the time I would most certainly consider 
 writing a pack-rat parser for a modified subset of D that builds 
 an AST for clang.
"if I had the time". This exactly is the difference and reason why resources matter a lot.
Feb 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 14:47:15 UTC, Dicebot wrote:
 "if I had the time". This exactly is the difference and reason 
 why resources matter a lot.
Yes, but there are enough people in these forums claiming that they desire a real-time, production-quality, better-than-C++ compiler to pull it off. But not for me alone.

Lack of clear planning, communication of vision, and establishing of short-term and long-term measurable goals is not really a resource issue. It is a matter of taking those issues seriously. Basically a management issue.

To me it would be reasonable to have:

1. short-term goal: production-level stability for what D is being used for today
2. long-term goal: low-latency, real-time features/runtime

Then work on 1 while planning milestones for point 2. I guess D1 was supposed to address 1, but nobody would start a project from scratch using D1 today.
Feb 10 2014
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 10 February 2014 at 14:24:06 UTC, Ola Fosheim Grøstad 
wrote:
 No, I don't think it is only a matter of resources. For 
 instance, if I had the time I would most certainly consider 
 writing a pack-rat parser for a modified subset of D that builds 
 an AST for clang.
You do that :D I'll be waiting. Some people just need to run into the roadblock to notice it exists.
Feb 10 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:06:06 UTC, deadalnix wrote:
 You do that :D

 I'll be waiting. Some people just need to run into the 
 roadblock to notice it exists.
Which roadblock?
Feb 10 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 10 February 2014 at 23:07:06 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 10 February 2014 at 23:06:06 UTC, deadalnix wrote:
 You do that :D

 I'll be waiting. Some people just need to run into the 
 roadblock to notice it exists.
Which roadblock?
deadalnix has been working on SDC for quite a while - an alternative implementation of the D frontend using LLVM for code gen.
Feb 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:12:08 UTC, Dicebot wrote:
 deadalnix has been working on SDC for quite a while - 
 an alternative implementation of the D frontend using LLVM for code 
 gen.
Ah ok, I didn't suggest implementing D, but a subset that maps directly to C++. Then you can map directly to the AST.
Feb 10 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 10 February 2014 at 23:07:06 UTC, Ola Fosheim Grøstad
wrote:
 On Monday, 10 February 2014 at 23:06:06 UTC, deadalnix wrote:
 You do that :D

 I'll be waiting. Some people just need to run into the 
 roadblock to notice it exists.
Which roadblock?
You seem to know everything in and out. You tell me. If you aren't sure, please start parsing D to the clang AST. After all, that sounds like a great idea.

I'm not sure I want to spend any time trying to convince you, considering:
1 - people have been trying, and it looks like a time-consuming, energy-hungry task.
2 - you don't seem interested in actually contributing anything.

So start your clang idea, if it sounds great to you. And come back enlightened.
Feb 10 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:13:56 UTC, deadalnix wrote:
 You seem to know everything in and out. You tell me. If you
 aren't sure, please start parsing D to clang AST. After all, 
 that sounds like a great idea.
No, I don't know everything, but I said "modified subset of D"; that would be a different language, suitable for my needs (I don't need RTTI or exceptions etc). I know I can do it, because source-2-source compilation is not that difficult and could be the first step. And (slow) parsers are not so difficult to write with the tools we have today.

After looking at the SVN repositories for clang I am becoming a bit more familiar with the source code, which looked chaotic the first time I looked at it. The AST looks ok from skimming through the repository; I assume I can run it through a visualization tool and generate visual graphs of the hierarchies.
Feb 10 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 10 February 2014 at 23:29:25 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 10 February 2014 at 23:13:56 UTC, deadalnix wrote:
 You seem to know everything in and out. You tell me. If you
 aren't sure, please start parsing D to clang AST. After all, 
 that sounds like a great idea.
[...]
Long story short, if you want to map things on top of the clang AST, you'll have to reduce significantly the subset of D you want to use, or build a full-fledged frontend, at which point you'd better use LLVM directly, or GCC, or whatever backend suits your desires.

For instance, static ifs, template constraints and CTFE require full understanding of the language semantics by the frontend. Runtime reflection is also out. That means UDAs could work but become useless. Type inference also requires almost complete understanding of the language by the frontend.

And even with these drastic restrictions, you'll still need a significant amount of semantic analysis in the frontend before feeding clang. You'll have to instantiate templates yourself (or bye-bye alias parameters, string parameters and so on). You'll have to track variables used in closures to allocate them on the heap (well, on that one you can arguably allocate every stack frame on the heap and pass the whole stack frame to a C++ lambda; that should work, but it will be excruciatingly slow and memory-consuming).

With CTFE, compile-time reflection and static ifs/mixins, D wraps around itself in a way that makes it all or nothing in implementation.
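To make the "all or nothing" point concrete, here is a minimal sketch (fib and Buffer are invented names, not from any real project) where CTFE must run before a struct's layout is even known - exactly the kind of thing a thin mapping onto the clang AST cannot express without a full D frontend:

int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

struct Buffer(int n)
{
    // The frontend must evaluate fib at compile time before the
    // struct's layout is known; a source-to-source translator cannot
    // defer this decision to the C++ compiler.
    static if (fib(n) > 8)
        int[fib(n)] data;
    else
        int[8] data;
}

void main()
{
    Buffer!6 b;                  // fib(6) == 8, so the else branch is taken
    assert(b.data.length == 8);
}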
Feb 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 11 February 2014 at 00:21:22 UTC, deadalnix wrote:
 With CTFE, compile time reflection and static ifs/mixins, D 
 wraps around itself in a way that makes it all or nothing in 
 implementation.
Ok, but I think I didn't communicate clearly what I am looking for. I am just looking for a "nicer" C++. I don't really need the full D semantics. I am content with the C++ subset that I can desugar into. To me D is just that, a "nicer" C++. I don't need the high level stuff. Those are "nice to have", but not "needed".
Feb 10 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/10/14, 6:24 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 No, I don't think it is only a matter of resources. For instance, if I
 had the time
Oh, the unbelievable irony. Andrei
Feb 10 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 11 February 2014 at 02:15:37 UTC, Andrei Alexandrescu 
wrote:
 On 2/10/14, 6:24 AM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 No, I don't think it is only a matter of resources. For 
 instance, if I
 had the time
Oh, the unbelievable irony.
Not really. If you have too many outstanding issues it means you have added too many features; it means you failed to do a feature freeze at an earlier stage.

It could also mean that you don't give priority to mentoring. Sometimes it is better to let your best people do mentoring and help bring "master level students" up to speed. People are not loyal to a project; people are loyal to other people. If a mentor invests time in you, you will feel a social debt. This is the principle of gifting.

You can create a strategy for mentoring. One obvious one is to focus on making the code base suitable for academia. Then you can offer supervision of master students. Academics love to have good external supervisors taking some load off their backs. That means lowering the requirements for compilation speed in order to get in some high-level optimization and other features that you cannot otherwise have.

You can give priority to getting in support for more social bonding between developers, like giving priority to an IDE that supports CSCW-style collaboration (seeing the code view of others). With Skype that could make pair programming (from XP) possible.

There are many options.
Feb 11 2014
next sibling parent reply "Abdulhaq" <alynch4047 gmail.com> writes:
On Tuesday, 11 February 2014 at 09:42:44 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 11 February 2014 at 02:15:37 UTC, Andrei
Hi Ola, as a clever guy who says he has studied project dynamics, you must understand that for everyone else here it feels like you dropped into the forums from outer space, confidently asserted your strong (usually well-informed) opinions about this and that all over the place, and now it seems you expect the project leaders to jump. Well, you should realise that until you actually start contributing code rather than just talking, it feels very pushy. There are very clever people here who have spent years contributing spare time to the code, and so their opinion will always carry more weight.

Andrei, who displays remarkable tolerance on these boards, momentarily lost his rag a bit with you, and now you're threatening to bring out your sarcasm super-powers - a strange way to win friends and influence people. You've obviously got a lot to offer, but picking fights with Andrei is highly counterproductive.
Feb 11 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 11 February 2014 at 12:09:23 UTC, Abdulhaq wrote:
 just talking, it feels very pushy. There are very clever people 
 here who have spent years contributing spare time to the code 
 and so their opinion will always carry more weight.
I don't mind if it feels pushy. If my comments are pushy, it means there is something to them that you don't want to see. I am not forcing anyone to follow my opinions or read them. I want to know where this project is heading. Is it heading in a real-time direction, or not?
 You've obviously got a lot to offer but picking fights with 
 Andrei is highly conuter productive.
I did not pick a fight with him; he picked a fight with me. I am sorry, but I don't accept ad hominem. If people do that, I stand up to it, whether on my own behalf or on behalf of others. If people keep doing it after being warned, it is sometimes better to drive the point home with whatever means the medium offers.

It is not counterproductive; I got a lot of information out of this:

1. There is no plan.
2. The main reason for slow movement is on the management side. (Bounties are not a solution; in fact, they can be demotivational.)
Feb 11 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/11/14, 1:42 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Tuesday, 11 February 2014 at 02:15:37 UTC, Andrei Alexandrescu wrote:
 On 2/10/14, 6:24 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 No, I don't think it is only a matter of resources. For instance, if I
 had the time
Oh, the unbelievable irony.
[...]
I confess I don't understand all of this (not sure whether the pointed-out irony has been acknowledged, not sure even whether it's subtle trolling), but upon reading it a couple of times I get the sense it's the exact management gobbledygook I'd like to protect this community from. Andrei
Feb 11 2014
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 10/02/2014 09:59, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Monday, 10 February 2014 at 04:26:10 UTC, Manu wrote:
 The only company I know of that has made a commercial commitment to D
 is a
 AAA games company...
Unfortunately a AAA games company is not setting down the goal post for D. As long as the leads of the project have as their primary interests non-real-time stuff and STL-like libraries, things won't develop in your (and my, and Francesco's) direction. It won't happen until the leads of the project COMMIT to a MEASURABLE goal and a major effort is made to meet that goal. That means putting other goals aside until that measurable goal has been met.
 Sorry, I obviously mean, "the only *games* company..."
Yeah, but that games company needs to commit to taking a lead role so that the goal post and vision changes in that direction.
 And people seem to forget promptly after every single time I repeat
 myself:
  * The GC frequency of execution is directly proportional to the
 amount of
 _free memory_. In console games; NONE.
  * The length of the associated pause is directly proportional to the
 amount of memory currently in use. In console games; all of it.

 This doesn't only describe games, it describes any embedded environment.
I've already stated that I don't believe in using D for anything multi-media.
???? So in that case I will forget D, and cry all the tears in my body. It would be a shame for a systems language. And it's exactly the kind of application D is missing to improve its visibility. Just take a look around you: all applications are interactive, with more and more animations, ...
 It is not part of the project vision to be good at that from what I am
 seeing, and I am not going to believe it is going to be good for that
 until the project leads commit to measurable goals.

 The leads believe in meritocracy, that means the project will flail
 around in any direction that is fun. That means there are no rails.
 There is no reason to pull or push a train that is not on rails. To get
 D to be a true better C++ you need a concerted effort.
Feb 10 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 23:06:56 UTC, Xavier Bigand wrote:
 ????
 So in that case I will forget D, and cry all the tears in my 
 body. It would be a shame for a systems language.
 And it's exactly the kind of application D is missing to 
 improve its visibility.
Yes, but nobody in the "decision-making body" of D has shown any semblance of understanding of, or priority for, real-time applications. Seriously, the language is close to ten years in the making.
 Just take a look around you: all applications are interactive, with 
 more and more animations, ...
Yeah, but at some point you just have to accept that people who don't have a need to write real-time code will avoid putting it on the road map.

I would like to see a roadmap that says "real time" and "no gc" and "whole program optimization", "owned pointers", "shared pointers"++

I see no road map.
Feb 10 2014
parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 11/02/2014 00:12, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Monday, 10 February 2014 at 23:06:56 UTC, Xavier Bigand wrote:
 ????
 So in that case I will forget D, and cry all the tears in my body. It
 would be a shame for a systems language.
 And it's exactly the kind of application D is missing to
 improve its visibility.
Yes, but nobody in the "decision-making body" of D has shown any semblance of understanding of, or priority for, real-time applications. Seriously, the language is close to ten years in the making.
 Just take a look around you: all applications are interactive, with
 more and more animations, ...
Yeah, but at some point you just have to accept that people who don't have a need to write real-time code will avoid putting it on the road map.
Bootstrapping D will certainly reveal some issues with the GC? I don't know much about compilers, but certainly the parser may see some performance gains with the GC, and the other parts? Maybe some major losses? I am curious to see that as a benchmark comparison. It's not real-time, but it's a serious system application challenge.
 I would like to see a roadmap that says "real time" and "no gc" and
 "whole program optimization", "owned pointers", "shared pointers"++

 I see no road map.
Feb 10 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Xavier Bigand"  wrote in message news:ldbohi$1ohh$1 digitalmars.com...
 Bootstrapping D will certainly reveal some issues with the GC? I don't know 
 much about compilers, but certainly the parser may see some performance 
 gains with the GC, and the other parts? Maybe some major losses?
The parser allocates lots of memory without freeing any (the entire parsed AST), so a GC cannot possibly be an improvement there over the current strategy of C++ new + never delete. DDMD has predictably shown that there is a performance hit, even with collections disabled, compared with the highly tuned allocator used in the C++ version.

The big plus of a GC for the compiler is that CTFE is now much less likely to cause the compiler to run out of memory, as all the temporary objects generated while interpreting will be garbage collected.
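For reference, the "new + never delete" strategy amounts to a region (bump-the-pointer) allocator. A minimal sketch - invented code, not DMD's actual allocator - where allocation is one pointer bump and nothing is freed until the whole region is discarded at once:

struct Region
{
    ubyte[] buffer;   // one big block, e.g. from malloc or the GC
    size_t used;

    // Bump-allocate `size` bytes; returns null when the region is full
    // (a real implementation would chain additional blocks instead).
    void* allocate(size_t size)
    {
        size = (size + 15) & ~cast(size_t) 15;  // keep 16-byte alignment
        if (used + size > buffer.length)
            return null;
        void* p = buffer.ptr + used;
        used += size;
        return p;
    }
}

unittest
{
    auto r = Region(new ubyte[1024 * 1024]);
    assert(r.allocate(64) !is null);  // e.g. one AST node
    assert(r.used == 64);
}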
Feb 10 2014
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 09/02/2014 21:15, francesco cattoglio wrote:
 However, the last point was directed to the D community. The language
 needs to be more focused on being very good at some key areas, not
 cover everything.
I totally agree on this, but the problem here is that there are game developers out there willing to use D. I also see lots of movement from hobbyists. We can't ignore them completely. Indefinitely long pauses are really bad for them, and something needs to be done, be it in user code, as a library solution, or as a core part of the language. I agree that AAA titles are not the main target right now, but this doesn't mean indie projects shouldn't be doable. After all, 150-millisecond pauses are really annoying for pretty much any first-person game.
With a pause of 150 ms, you just can't play an animation of any kind. Will a simple application like a movie player require threads just because of the GC? I don't find that simple. It's the same for GUI applications: a 150 ms pause after a button press can be horrible. Again, I don't think it should be necessary to thread all UI applications, even if threading is simpler in a language like D.
Feb 10 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 10.02.2014 22:23, Xavier Bigand wrote:
 On 09/02/2014 21:15, francesco cattoglio wrote:
 However, the last point was directed to the D community. The language
 needs to be more focused on being very good at some key areas, not
 cover everything.
[...]
A bit off topic, but can you still get new single core chips?
Feb 10 2014
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 10 February 2014 at 21:57:42 UTC, Paulo Pinto wrote:
 A bit off topic, but can you still get new single core chips?
Sure you can. But that is far from common, unless you have really strict constraints.
Feb 10 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
On 11.02.2014 00:11, deadalnix wrote:
 On Monday, 10 February 2014 at 21:57:42 UTC, Paulo Pinto wrote:
 A bit off topic, but can you still get new single core chips?
Sure you can. But that is far from common, unless you have really strict constraints.
I know, I was just being a bit sarcastic about not using threads in this day and age.
Feb 10 2014
prev sibling parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Monday, 10 February 2014 at 21:57:42 UTC, Paulo Pinto wrote:
 A bit off topic, but can you still get new single core chips?
Yes you can! http://en.wikipedia.org/wiki/Intel_Edison will be made available in 2014, single core. Probably off limits. I must say D would be a perfect language for it! :D
Feb 10 2014
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 9 February 2014 20:16, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Sunday, 9 February 2014 at 10:06:12 UTC, Manu wrote:

 I don't think you've made a game recently.
Pointless comment. Most big games are multi-year projects with teams numbering well in the

 Most games are not big.
 Most games fail in the marketplace.
So D is an indie/mobile games language? Well, it certainly won't be that until Android and iPhone are well supported. AAA games, however, are technically possible right now, but need more work before D will appeal to the industry without objection.

 I didn't say they should be a focus, I'm saying they must however be
 supported.
Must is a strong word, but since D is focusing on separate compilation it probably is a focus. Why are most comments about the application domain for D centered on "prestigious" projects such as AAA games, high-volume trading systems and safety-critical applications?
Perhaps it's because these are precisely the first few major businesses who have made a commercial commitment to D? If the language fails to satisfy ambitious early adopters, why should others follow? The most likely application domain is a lot less "exciting": tools and
 simple servers.

 Get down to earth, plz.
I don't write 'unexciting' tools and simple servers, so they are not my focal points. There are plenty of other people here that keep those usage targets in check. There are relatively few (although numbers are growing surprisingly fast) who keep the big-games, realtime, or embedded/resource-limited usage targets in check. I'm one of them, and I want a future for D in my industry. If it's declared that that's not a goal for D, then I will leave the community on that day. I am standing firmly on planet earth, and those jobs are what pay my bills. I know the requirements of my industry.
Feb 09 2014
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 09/02/2014 11:16, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Sunday, 9 February 2014 at 10:06:12 UTC, Manu wrote:
 I don't think you've made a game recently.
Pointless comment.
 Most big games are multi-year projects with teams numbering well in the
Most games are not big. Most games fail in the marketplace.
 I didn't say they should be a focus, I'm saying they must however be
 supported.
Must is a strong word, but since D is focusing on separate compilation it probably is a focus. Why are most comments about the application domain for D centered on "prestigious" projects such as AAA games, high-volume trading systems and safety-critical applications? The most likely application domain is a lot less "exciting": tools and simple servers. Get down to earth, plz.
Maybe if performance is not really critical, developers can already use other languages. IMO D has to target all applications that have to be written in C/C++, where performance, portability, scalability,... are critical. That is something we C/C++ developers don't have, and we need love too :-)

D claims to be a systems language, so it's normal to expect to be able to use it in critical ways easily, and less so for simple applications that can already be done with proven technologies. It's not surprising for a systems language to have advanced features that other languages don't support. IMO D programmers want great control over memory, even if it can be a pain. D can also satisfy the ego of the few developers who don't strictly need such a language, since it is already much less error-prone than C++. :-)
Feb 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 10 February 2014 at 21:17:22 UTC, Xavier Bigand wrote:
 Maybe if performance is not really critical, developers can 

Or JavaScript actually, if you use the animation capabilities of the browser engine…
 IMO D has to target all applications that have to be written in 
 C/C++, where performance, portability, scalability,... are 
 critical.

 C/C++ developers don't have, and we need love too :-)
 D claims to be a systems language, so it's normal to expect to be 
 able to use it in critical ways easily, and less so for simple 
 applications that can already be done with actual proven 
 technologies.
I share your views actually… so I am trying to take a view of D that makes it easier to accept its current state. :-)

I am no longer sure if I am able to view D as a systems language. Too many features that are not really important at the low level. Too big a runtime. And no strategy that points towards whole-program optimization.

I'd personally much prefer fewer features, more performance control and whole-program optimization. I distinctly remember doing profiling-based whole-program optimization of C programs on Unix machines in the 1990s. Seriously, that's 18+ years ago.
Feb 10 2014
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/06/2014 12:16 AM, Adam D. Ruppe wrote:
 On Wednesday, 5 February 2014 at 22:32:52 UTC, Andrei Alexandrescu wrote:
 ...
 I should also add that imparting useful semantics to scope is much
 more difficult than it might seem.
I'm not so sure about that*, but the fact is scope would be enormously useful if it was implemented. * Let's say it meant "assigning to any higher scope is prohibited".
Then the type system should track scope depth (i.e. regions) in order to support abstraction properly. (This is Walter's usual identity function test, and it won't get less of a problem unless D gets a sane polymorphic type system.)
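A sketch of that identity-function test (illustrative code, not from the thread):

int* identity(int* p) { return p; }

int* g;

void main()
{
    int local;
    g = identity(new int);  // fine: the result refers to the GC heap
    g = identity(&local);   // a stack address escapes -- yet both calls
                            // have identical types, so a plain `scope`
                            // annotation on p cannot distinguish them
                            // without tracking scope depth (regions)
}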
 That
 should be trivially easy to check and ensures that the variable itself
 doesn't escape.
How would you express that a slice of a static stack array of GC-allocated references (and those may be escaped) shouldn't be escaped?
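Concretely, the case in question looks something like this (a sketch; the escaped names are invented):

class C {}

C escaped;
C[] escapedSlice;

void f()
{
    C[4] frame;                  // static array in f's stack frame
    foreach (ref c; frame)
        c = new C;               // the elements live on the GC heap

    C[] s = frame[];             // slice of stack memory
    escaped = s[0];              // fine: the element may escape
    // escapedSlice = s;         // must be rejected: the slice itself
                                 // points into f's stack frame
}

void main() { f(); }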
 The tricky part would be preventing:

 int[] global;
 void foo(scope int[] a) {
     int[] b = a;
     global = b;
 }
Well, to some extent this is a solved problem. Just decide which escape analysis strategy to use and maybe extend its precision with type information.
Feb 08 2014
prev sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Wednesday, 5 February 2014 at 20:18:33 UTC, Adam D. Ruppe 
wrote:
 On Wednesday, 5 February 2014 at 19:39:43 UTC, Andrei 
 Alexandrescu wrote:
 You do figure that complicates usage considerably, right?
I don't see much evidence for that. Many, many newer modules in Phobos are currently allocation free yet still pretty easy to use.

A major source of little allocations in my code is std.conv and std.string. But these aren't difficult to change to external allocation, in theory at least:

string s = to!string(50); // GC allocates (I'd keep this for convenience and compatibility)

char[16] buffer;
char[] s = toBuffer(buffer[], 50); // same thing, using a buffer

char[] s = toLowerBuffer(buffer[], "FOO");
assert(buffer.ptr is s.ptr);
assert(s == "foo");

That's not hard to use (though remembering that s is a borrowed reference to a stack buffer might be - escape analysis is something we should really have). And it gives full control over both allocation and deallocation. It'd take some changes in Phobos, but so does the RCSlice sooo yeah, and this actually decouples it from the GC.
Yeah, because RCSlice would require changes to Phobos too, I'd much rather have this approach: it is just so much more flexible and hardly adds any inconvenience. Combined with the upcoming allocators it would be incredibly powerful. You could have an output range that uses an allocator which stores on the stack unless it grows too big (and the stack size could be completely customizable by the user, who knows best). Or you could pass in an output range that reference counts its memory. Or an output range that must remain unique and frees its contents when it goes out of scope.

I think three things would work together really well for addressing users that want to avoid the GC while making use of Phobos: 1) increasing the support for output ranges, 2) Andrei's slick allocator design, and 3) nogc.

With those three I really think managing memory and avoiding the GC will be rather pleasant. nogc would enable people trying to avoid all the tough-to-spot implicit GC allocations to identify them easily. Once uncovered, they just switch to the output range version of a function in Phobos, and they then use std.allocator with the output range they feed in to create an ideal allocation strategy for their use case (whether it be stack, GC, scope-freed heap, reference counted, a memory pool, or some hybrid of those).
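A sketch of that output-range style (toLowerInto is an invented name, not a Phobos function); the caller decides where the characters go, and therefore how they are allocated:

import std.ascii : toLower;
import std.range : put;

void toLowerInto(Output)(ref Output sink, const(char)[] s)
{
    foreach (c; s)
        put(sink, toLower(c));  // writes through the range; no
                                // allocation policy is decided here
}

unittest
{
    import std.array : appender;
    auto buf = appender!(char[])();
    toLowerInto(buf, "FOO");
    assert(buf.data == "foo");
}

An appender, a fixed stack buffer wrapped in an output range, or an allocator-backed sink would all work with the same function, unchanged.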
 The tricky part might be making it work with buffers, growable 
 buffers, sink functions, etc., but we've solved similar 
 problems with input ranges.


 I was thinking RCSlice would be a better alternative.
I very rarely care about when little slices are freed. Large blocks of memory might be another story (I've used malloc+free for a big internal buffer in my png.d after getting memory leaks from false pointers with the GC), but those can be handled on a case-by-case basis. std.base64 for example might make sense to return one of these animals. I don't have a problem with refcounting on principle, but most of the time it just doesn't matter.
Feb 05 2014
parent "Joseph Cassman" <jc7919 outlook.com> writes:
On Wednesday, 5 February 2014 at 21:03:25 UTC, Brad Anderson 
wrote:
 On Wednesday, 5 February 2014 at 20:18:33 UTC, Adam D. Ruppe 
 wrote:
 [...]
 A major source of little allocations in my code is std.conv 
 and std.string. But these aren't difficult to change to 
 external allocation, in theory at least:

 string s = to!string(50); // GC allocates (I'd keep this for 
 convenience and compatibility)

 char[16] buffer;
 char[] s = toBuffer(buffer[], 50); // same thing, using a 
 buffer

 char[] s = toLowerBuffer(buffer[], "FOO");
 assert(buffer.ptr is s);
 assert(s == "foo");


 That's not hard to use (though remembering that s is a 
 borrowed reference to a stack buffer might be - escape 
 analysis is something we should really have).

 And it gives full control over both allocation and 
 deallocation. It'd take some changes in phobos, but so does 
 the RCSlice sooo yeah, and this actually decouples it from the 
 GC.
[...]
My thinking as well. That combination of functionality looks very advantageous to me. It is more flexible than just providing two choices to the programmer: GC and RC. To me both GC and RC are useful, depending on the type of program being written. However, why limit to just the two? There are other styles of memory allocation/management I might need to make use of, perhaps even in the same program.

I really like the new allocator module. I had been thinking that a goal for its use was to allow replacing the compiler-supported allocation style with a custom one, either at the module level or on a function-by-function basis, as shown in the code above. In my opinion, this would give the necessary flexibility over memory allocation by giving final control to the programmer (i.e. control over external and internal allocation style). Doing so seems good to me as the programmer knows a priori the type of allocation pattern to support based on the type of program being produced (e.g. real-time, long-running process, batch system).

Of course minimizing memory allocation in Phobos is an excellent goal and that work will proceed orthogonal to this effort. However, in the end, some memory will have to be allocated. Letting the programmer choose how that memory is to be allocated by giving full access to std.allocator seems the way to go.

Joseph
Feb 05 2014
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-04 23:51:35 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 Consider we add a library slice type called RCSlice!T. It would have 
 the same primitives as T[] but would use reference counting through and 
 through. When the last reference count is gone, the buffer underlying 
 the slice is freed. The underlying allocator will be the GC allocator.
 
 Now, what if someone doesn't care about the whole RC thing and aims at 
 convenience? There would be a method .toGC that just detaches the slice 
 and disables the reference counter (e.g. by setting it to uint.max/2 or 
 whatever).
 
 Then people who want reference counting say
 
 auto x = fun();
 
 and those who don't care say:
 
 auto x = fun().toGC();
 
 
 Destroy.
I don't think it makes much sense. ARC when used for D constructs should be treated as an alternate GC algorithm, not a different kind of pointer.

There's another possible use for ARC, which is to manage external objects not allocated by the D GC that use reference counting (such as COM objects, or Objective-C objects). This could justify a different kind of pointer. But that's a separate issue from the GC algorithm used for D constructs.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 7:23 AM, Michel Fortin wrote:
 On 2014-02-04 23:51:35 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 Consider we add a library slice type called RCSlice!T. It would have
 the same primitives as T[] but would use reference counting through
 and through. When the last reference count is gone, the buffer
 underlying the slice is freed. The underlying allocator will be the GC
 allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the
 slice and disables the reference counter (e.g. by setting it to
 uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.
I don't think it makes much sense. ARC when used for D constructs should be treated as an alternate GC algorithm, not a different kind of pointer.
Why? The RC object has a different layout, so it may as well have a different type. Andrei
Feb 05 2014
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 5 February 2014 at 18:26:38 UTC, Andrei 
Alexandrescu wrote:
 Why? The RC object has a different layout, so it may as well 
 have a different type.
It also has different usage requirements, so it should have a different type. BTW so should GC vs borrowed pointers.
Feb 05 2014
prev sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-05 18:26:38 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 On 2/5/14, 7:23 AM, Michel Fortin wrote:
 I don't think it makes much sense. ARC when used for D constructs should
 be treated as an alternate GC algorithm, not a different kind of pointer.
Why? The RC object has a different layout, so it may as well have a different type.
Well, it depends on your goal.

If your goal is to avoid the garbage collector, you need all language constructs to use ARC. Having a single type in the language that relies on the GC defeats the purpose. What you want is simply to replace the current GC with another implementation, one that uses ARC. It shouldn't affect user code in any way; it's mostly an implementation detail (managed by the compiler and the runtime).

If your goal is to have a deterministic lifetime for slices in some situations, then RCSlice as you propose it is fine. That said, with a library type you'll have a hard time making the optimizer elide redundant increment/decrement pairs, so it'll never be optimal. I'm also not sure there are a lot of use cases for a deterministic slice lifetime working side by side with memory managed by the current GC.

To me it seems you're trying to address a third problem here: that people have complained that Phobos relies on the GC too much. This comes from people who either don't want the GC to pause anything, or people who want to reduce memory allocations altogether. For the former group, replacing the current GC with an ARC+GC scheme at the language level, with the possibility to disable the GC, will fix most of Phobos (and most other libraries) with no code change required. For the latter group, you need to make the API so that allocations are either not necessary, or when necessary provide a way to use a custom allocator of some sort.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
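To illustrate where those increment/decrement pairs come from, a minimal sketch of what such a library RCSlice might look like (invented code, not Andrei's actual design; GC-backed storage as in the proposal):

struct RCSlice(T)
{
    private T[] payload;
    private uint* count;

    this(size_t n)
    {
        payload = new T[n];   // allocated from the GC heap
        count = new uint;
        *count = 1;
    }

    this(this)   // postblit: every copy costs an increment...
    {
        if (count) ++*count;
    }

    ~this()      // ...and every destruction a decrement
    {
        if (count && --*count == 0)
        {
            import core.memory : GC;
            GC.free(payload.ptr);  // the eager free the GC alone cannot give
            GC.free(count);
        }
    }

    ref T opIndex(size_t i) { return payload[i]; }
    size_t length() const { return payload.length; }
}

unittest
{
    auto a = RCSlice!int(4);
    auto b = a;   // an increment/decrement pair the optimizer can only
                  // elide if it can see through the library abstraction
    assert(a.length == 4 && b.length == 4);
}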
Feb 05 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 12:23 PM, Michel Fortin wrote:
 On 2014-02-05 18:26:38 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 On 2/5/14, 7:23 AM, Michel Fortin wrote:
 I don't think it makes much sense. ARC when used for D constructs should
 be treated as an alternate GC algorithm, not a different kind of pointer.
Why? The RC object has a different layout, so it may as well have a different type.
[...]
I want to make one positive step toward improving memory allocation in the D language. Andrei
Feb 05 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-05 22:19:27 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 I want to make one positive step toward improving memory allocation in 
 the D language.
I know. But I find your proposal confusing. Perhaps this is just one piece in your master plan where everything will make sense once we have all the pieces. But this piece by itself makes no sense to me; I have no idea where you're going with it.

Is this the continuation of the old thread where you wanted ideas about how to eliminate hidden allocations in buildPath? Doesn't sound like it.

Or is this about implementing ARC in the language for those who can't use the GC? The changes for that need to be done at a lower level (compiler, runtime), and no change would be required in Phobos.

Or maybe this is to please the nogc crowd by making things reference-counted by default? While I'm not a fan of nogc, this will not work for them either, as your proposal allocates from GC memory and will sometimes trigger a collection cycle.

Or maybe you're trying to address the following issue: if we change D's GC to use the ARC+GC scheme, what if I don't want to increment/decrement at pointer assignment and instead rely purely on mark and sweep for certain pointers? I'm not sure anyone has asked for that yet, but I guess it could be a valid concern.

So, what problem are we trying to solve again?

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Feb 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 4:53 PM, Michel Fortin wrote:
 On 2014-02-05 22:19:27 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 I want to make one positive step toward improving memory allocation in
 the D language.
I know. But I find your proposal confusing. Perhaps this is just one piece in your master plan where everything will make sense once we have all the pieces. But this piece by itself makes no sense to me; I have no idea where you're going with it. Is this the continuation of the old thread where you wanted ideas about how to eliminate hidden allocations in buildPath? Doesn't sound like it.
Actually buildPath is a good example because it concatenates strings. It should work transparently with RC and GC strings. Andrei
Feb 05 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-02-06 04:56:28 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 On 2/5/14, 4:53 PM, Michel Fortin wrote:
 On 2014-02-05 22:19:27 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:
 
 I want to make one positive step toward improving memory allocation in
 the D language.
I know. But I find your proposal confusing. Perhaps this is just one piece in your master plan where everything will make sense once we have all the pieces. But this piece by itself makes no sense to me; I have no idea where you're going with it. Is this the continuation of the old thread where you wanted ideas about how to eliminate hidden allocations in buildPath? Doesn't sound like it.
Actually buildPath is a good example because it concatenates strings. It should work transparently with RC and GC strings.
That thread about buildPath started like this: "Walter and I were talking about eliminating the surreptitious allocations in buildPath". But reference counting will do nothing to eliminate surreptitious allocations. It can't be that problem you're trying to address. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca
Feb 06 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/6/14, 3:44 AM, Michel Fortin wrote:
 On 2014-02-06 04:56:28 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 On 2/5/14, 4:53 PM, Michel Fortin wrote:
 On 2014-02-05 22:19:27 +0000, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 I want to make one positive step toward improving memory allocation in
 the D language.
I know. But I find your proposal confusing. Perhaps this is just one piece in your master plan where everything will make sense once we have all the pieces. But this piece by itself makes no sense to me; I have no idea where you're going with it. Is this the continuation of the old thread where you wanted ideas about how to eliminate hidden allocations in buildPath? Doesn't sound like it.
Actually buildPath is a good example because it concatenates strings. It should work transparently with RC and GC strings.
That thread about buildPath started like this: "Walter and I were talking about eliminating the surreptitious allocations in buildPath". But reference counting will do nothing to eliminate surreptitious allocations.
That's exactly right. Currently buildPath uses ~= several times, so it will produce allocations that the user is unable to free. If buildPath used reference counting through and through, temporary allocations would be freed eagerly inside buildPath, and the user would have a shot at freeing the end result.
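Schematically (an illustration of the allocation pattern, not the actual Phobos code):

string joinTwo(string a, string b)
{
    string result = a;
    result ~= "/";  // may allocate GC buffer #1
    result ~= b;    // may allocate GC buffer #2; buffer #1 is now
                    // garbage that only a collection can reclaim
    return result;  // the caller can free this -- but not the temporaries
}

unittest
{
    assert(joinTwo("usr", "local") == "usr/local");
}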
 It can't be that problem you're trying to address.
Are you sure you want to debate with me what's in my mind?

Andrei
Feb 06 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
I know I'm coming into this a bit late, but in general I only 
feel like there's a problem with built-in dynamic types.  It 
generally isn't hard to manage the lifetime of classes manually, 
and I don't tend to churn through them.  Also, there are 
basically no routines in Phobos that operate on classes, so 
that's really entirely my problem as a user anyway.  What 
concerns me most is string processing, and in particular the 
routines in Phobos that do string processing.  And while I've 
always liked the idea of supplying a destination buffer to these 
routines, it doesn't help the case where the buffer is too small 
and allocation still needs to occur.

Instead of sorting out some crazy return type or supplying a raw 
destination buffer, what if we instead supply an appender?  Then 
the appender could grow the buffer in place or throw or whatever 
we want the behavior to be when out of space.  I think this would 
solve 90% of my concerns about unexpected GC pauses in my code.  
For the rest, any temporary allocations inside Phobos routines 
should either be eliminated, reused on each call per a static 
reference, or cleaned up.  I'm really okay with the occasional 
"new" inside a routine so long as repeated calls to that routine 
don't generate tons of garbage and thus trigger a collection.
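
Roughly what I have in mind (a sketch; toUpperInto is a made-up name,
not an actual Phobos routine, while std.array.Appender and
std.ascii.toUpper are real):

import std.array : appender;
import std.ascii : toUpper;

// Write the transformed text into a caller-supplied sink instead of
// allocating a fresh GC array on every call.
void toUpperInto(Sink)(in char[] input, ref Sink sink)
{
    foreach (c; input)
        sink.put(toUpper(c)); // the appender grows its buffer in place
}

void main()
{
    auto buf = appender!string();
    buf.reserve(64); // one buffer reused across calls, no per-call garbage
    toUpperInto("hello", buf);
    assert(buf.data == "HELLO");
}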
Feb 06 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 11:44:34 UTC, Michel Fortin wrote:
 That thread about buildPath started like this: "Walter and I 
 were talking about eliminating the surreptitious allocations in 
 buildPath". But reference counting will do nothing to eliminate 
 surreptitious allocations. It can't be that problem you're 
 trying to address.
I think stuff like buildPath just shows how the language should be
geared more towards compiler optimizations of allocation if performance
is a real goal. The efficient thing to do is to optimize a string
concat into a stack allocation (alloca) if it is a throw-away, and just
change the stack frame upon return, with some heuristics and
alternative execution paths in order to avoid running out of stack
space.

E.g.

a(){ return buildPath(...) ~ "!"; }

Compiles to:

a:
    call buildPath(...)
    alloca(size_to_endof_path_returned_by_buildPath+1)
    *someaddr = '!'
    return (stackbufferstart, length)

buildPath:
    stackbuffer = alloca(sum_of_lengths)
    copy...
    return tuple(stackbufferstart, length)
Feb 06 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 7 February 2014 at 00:30:00 UTC, Ola Fosheim Grøstad 
wrote:
    alloca(size_to_endof_path_returned_by_buildPath+1)
Of course, since the stack usually grows downwards, that should be
size_to_start_of_path, and you need to preallocate the necessary
padding. A pity that stacks usually don't grow upwards; that would make
appending much more flexible.
Feb 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/6/14, 4:29 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Thursday, 6 February 2014 at 11:44:34 UTC, Michel Fortin wrote:
 That thread about buildPath started like this: "Walter and I were
 talking about eliminating the surreptitious allocations in buildPath".
 But reference counting will do nothing to eliminate surreptitious
 allocations. It can't be that problem you're trying to address.
 I think stuff like buildPath just shows how the language should be
 geared more towards compiler optimizations of allocation if
 performance is a real goal. The efficient thing to do is to optimize a
 string concat into a stack allocation (alloca) if it is a throw-away,
 and just change the stack frame upon return, with some heuristics and
 alternative execution paths in order to avoid running out of stack
 space.
This is a very incomplete, naive sketch. It could be made to work only
in a language that has no backward compatibility to worry about.

Andrei
Feb 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 7 February 2014 at 01:21:25 UTC, Andrei Alexandrescu 
wrote:
 This is a very incomplete, naive sketch. It could be made to 
 work only in a language that has no backward compatibility to 
 worry about.
Whaddya mean? On the object code level? The compiler should be
conservative and generate the alternatives if needed.

In this day and age a language should aim for whole program
optimization and static analysis. Not necessarily the proof-of-concept
compiler, but the language spec. A system-level language should go out
of its way to make stack and pool allocations likely/possible/probable,
as well as register-based optimizations on calls etc.
Feb 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/6/14, 6:14 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 7 February 2014 at 01:21:25 UTC, Andrei Alexandrescu wrote:
 This is a very incomplete, naive sketch. It could be made to work only
 in a language that has no backward compatibility to worry about.
 Whaddya mean? On the object code level? The compiler should be
 conservative and generate the alternatives if needed.
 
 In this day and age a language should aim for whole program
 optimization and static analysis. Not necessarily the proof-of-concept
 compiler, but the language spec. A system-level language should go out
 of its way to make stack and pool allocations
 likely/possible/probable, as well as register-based optimizations on
 calls etc.
I'm jaded - for years Walter and I bounced around ideas that vaguely
prescribe a feature/optimization and gloss over many details and all
difficulties... they're a dime a dozen.

Andrei
Feb 07 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 7 February 2014 at 16:25:01 UTC, Andrei Alexandrescu 
wrote:
 I'm jaded - Walter and I bounced around for years ideas that 
 vaguely prescribe a feature/optimization and gloss over many 
 details and all difficulties... they're a dime a dozen.
I see, in this particular case it will work if dynamic stack allocation
works and if you have guards on the stack size (page traps at the end
of the stack).

The optimization is very limited, since it only works for returning one
variable-length structure under certain circumstances, but then again
it is a common case for types where you want to do temporary allocation
and prepending/appending something before making a system call or
similar.
Feb 07 2014
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 6 February 2014 06:23, Michel Fortin <michel.fortin michelf.ca> wrote:

 On 2014-02-05 18:26:38 +0000, Andrei Alexandrescu <
 SeeWebsiteForEmail erdani.org> said:

  On 2/5/14, 7:23 AM, Michel Fortin wrote:
 I don't think it makes much sense. ARC when used for D constructs should
 be treated as an alternate GC algorithm, not a different kind of pointer.
Why? The RC object has a different layout, so it may as well have a different type.
 Well, it depends on your goal.
 
 If your goal is to avoid the garbage collector, you need all language
 constructs to use ARC. Having a single type in the language that relies
 on the GC defeats the purpose. What you want is simply to replace the
 current GC with another implementation, one that uses ARC. It shouldn't
 affect user code in any way, it's mostly an implementation detail
 (managed by the compiler and the runtime).
 
 If your goal is to have a deterministic lifetime for slices in some
 situations, then RCSlice as you propose it is fine. That said, with a
 library type you'll have a hard time making the optimizer elide
 redundant increment/decrement pairs, so it'll never be optimal. I'm
 also not sure there's a lot of use cases for a deterministic slice
 lifetime working side by side with memory managed by the current GC.
 
 To me it seems you're trying to address a third problem here: that
 people have complained that Phobos relies on the GC too much. This
 comes from people who either don't want the GC to pause anything, or
 people who want to reduce memory allocations altogether. For the former
 group, replacing the current GC with an ARC+GC scheme at the language
 level, with the possibility to disable the GC, will fix most of Phobos
 (and most other libraries) with no code change required. For the latter
 group, you need to make the API so that allocations are either not
 necessary, or, when necessary, provide a way to use a custom allocator
 of some sort.
This.
Feb 05 2014
prev sibling parent "Mike" <none none.com> writes:
On Wednesday, 5 February 2014 at 20:23:13 UTC, Michel Fortin 
wrote:

 What you want is simply to replace the current GC with another 
 implementation, one that uses ARC. It shouldn't affect user code 
 in any way, it's mostly an implementation detail (managed by 
 the compiler and the runtime).
Yes.
 To me it seems you're trying to address a third problem here: 
 that people have complained that Phobos relies on the GC too 
 much.
Yes.
 This comes from people who either don't want the GC to pause 
 anything, or people who want to reduce memory allocations 
 altogether. For the former group, replacing the current GC with 
 an ARC+GC scheme at the language level, with the possibility to 
 disable the GC, will fix most of Phobos (and most other 
 libraries) with no code change required.
Yes.
 For the latter group, you need to make the API so that 
 allocations are either not necessary, or when necessary provide 
 a way to use a custom allocator of some sort.
... and Yes
Feb 05 2014
prev sibling next sibling parent "Graham Fawcett" <fawcett uwindsor.ca> writes:
On Tuesday, 4 February 2014 at 23:51:35 UTC, Andrei Alexandrescu
wrote:
 Consider we add a library slice type called RCSlice!T. It would 
 have the same primitives as T[] but would use reference 
 counting through and through. When the last reference count is 
 gone, the buffer underlying the slice is freed. The underlying 
 allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and 
 aims at convenience? There would be a method .toGC that just 
 detaches the slice and disables the reference counter (e.g. by 
 setting it to uint.max/2 or whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
I think I'd rather have a higher-level solution than this. I would
worry about having to reason about code that's littered with toGC() or
toARC() calls: my peanut butter (application logic) is now mixed in
with my chocolate (allocation logic), making both harder to understand.

I thought there was some discussion about arenas and custom allocators
a while back, and that seemed like a sensible way to address these
issues. I had imagined (naively?) that I would have been able to write
something like:

auto someFunction(T)(T params)
{
    ComplicatedResult result;

    {
        // set up allocator at start of scope
        auto arena = GrowableArena();
        push_allocator(arena); // thread-local allocator
        scope(exit) pop_allocator();

        // call as many expensive functions as you like...
        auto tmp = allocation_intensive_computation(params);

        // transitive-move result from arena, at the scope's tail
        result = tmp.toGC();
    }
    // arena goes out of scope and is deallocated

    return result;
}

So, allocation and lifetime issues are handled at the boundaries of
scopes that have custom allocators associated with them.

Um... destroy?

Graham
Feb 05 2014
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
05-Feb-2014 03:51, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have the
 same primitives as T[] but would use reference counting through and
 through. When the last reference count is gone, the buffer underlying
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the slice
 and disables the reference counter (e.g. by setting it to uint.max/2 or
 whatever).

 Then people who want reference counting say

 auto x = fun();
How about just adding a template argument that indicates which
container type to use for internal allocation?

Array!T a = fun!(Array)(); // Ref-counted
T[] a = fun();             // default args - GC

IMHO solves the Phobos side of the equation.

-- 
Dmitry Olshansky
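
To spell that idea out a little (a sketch; joinWith and its signature
are hypothetical, while Array is std.container.Array):

import std.container : Array;

// The container type parameter decides the allocation policy: the
// default C = char[] keeps today's GC behavior, while C = Array!char
// yields a reference-counted result.
C joinWith(C = char[])(in char[][] parts, char sep)
{
    C result;
    foreach (i, p; parts)
    {
        if (i) result ~= sep;
        result ~= p;
    }
    return result;
}

// usage:
// auto gc = joinWith(["usr", "local"], '/');               // T[] - GC
// auto rc = joinWith!(Array!char)(["usr", "local"], '/');  // ref-counted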
Feb 05 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/5/14, 12:35 PM, Dmitry Olshansky wrote:
05-Feb-2014 03:51, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have the
 same primitives as T[] but would use reference counting through and
 through. When the last reference count is gone, the buffer underlying
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the slice
 and disables the reference counter (e.g. by setting it to uint.max/2 or
 whatever).

 Then people who want reference counting say

 auto x = fun();
 How about just adding a template argument that indicates which
 container type to use for internal allocation?
 
 Array!T a = fun!(Array)(); // Ref-counted
 T[] a = fun();             // default args - GC
 
 IMHO solves the Phobos side of the equation.
Good idea.

Andrei
Feb 05 2014
prev sibling parent reply luka8088 <luka8088 owave.net> writes:
On 5.2.2014. 0:51, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have the
 same primitives as T[] but would use reference counting through and
 through. When the last reference count is gone, the buffer underlying
 the slice is freed. The underlying allocator will be the GC allocator.
 
 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the slice
 and disables the reference counter (e.g. by setting it to uint.max/2 or
 whatever).
 
 Then people who want reference counting say
 
 auto x = fun();
 
 and those who don't care say:
 
 auto x = fun().toGC();
 
 
 Destroy.
 
 Andrei
Here is a thought:

Let's say we have class A and class B, and class A accepts references
to B as children:

class A {
  B child1;
  B child2;
  B child3;
}

I think that the ultimate goal is to allow the user to choose between
kinds of memory management, especially between automatic and manual.
The problem here is that class A needs to be aware whether memory
management is manual or automatic. And it seems to me that a new type
qualifier is a way to go:

class A {
  garbageCollected(B) child1;
  referenceCounted(B) child2;
  manualMemory(B) child3;
}

Now suppose we want to have only one child, but we want to support
compatibility with other kinds of memory management:

class A {
  manualMemory(B) child;

  this (B newChild) {
    child = newChild.toManualMemory();
  }

  this (referenceCounted(B) newChild) {
    child = newChild.toManualMemory();
  }

  this (manualMemory(B) newChild) {
    child = newChild;
  }

  ~this () {
    delete child;
  }
}

This way we could write code that supports multiple models, and let the
user choose which one to use. The thing that I would like to point out
is that this suggestion would work with existing code, as the
garbageCollected memory management model would be the default:

auto b = new B();
auto a = new A(b);

Another thing to note is that in this way a garbage collector would
know that we now have two references to one object (instance of class
B). One is variable b and another is child in object a. And because of
the notation the garbage collector is aware that it could free this
object when variable b goes out of scope, but it should not do it
because there is still a manually managed reference to that object.

I am sure that there are many more possible loopholes, but maybe it
will give someone a better idea :)
Feb 06 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/6/14, 12:28 AM, luka8088 wrote:
 On 5.2.2014. 0:51, Andrei Alexandrescu wrote:
 Consider we add a library slice type called RCSlice!T. It would have the
 same primitives as T[] but would use reference counting through and
 through. When the last reference count is gone, the buffer underlying
 the slice is freed. The underlying allocator will be the GC allocator.

 Now, what if someone doesn't care about the whole RC thing and aims at
 convenience? There would be a method .toGC that just detaches the slice
 and disables the reference counter (e.g. by setting it to uint.max/2 or
 whatever).

 Then people who want reference counting say

 auto x = fun();

 and those who don't care say:

 auto x = fun().toGC();


 Destroy.

 Andrei
 Here is a thought:
 
 Let's say we have class A and class B, and class A accepts references
 to B as children:
 
 class A {
   B child1;
   B child2;
   B child3;
 }
 
 I think that the ultimate goal is to allow the user to choose between
 kinds of memory management, especially between automatic and manual.
 The problem here is that class A needs to be aware whether memory
 management is manual or automatic. And it seems to me that a new type
 qualifier is a way to go:
 
 class A {
   garbageCollected(B) child1;
   referenceCounted(B) child2;
   manualMemory(B) child3;
 }
The common theme here is that the original post introduces two distinct
types of slices, depending on how they are to be freed (by refcounting
or tracing).

Andrei
Feb 06 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 February 2014 at 08:28:34 UTC, luka8088 wrote:
 is manual or automatic. And it seems to me that a new type 
 qualifier is
 a way to go:

 class A {
   garbageCollected(B) child1;
   referenceCounted(B) child2;
   manualMemory(B) child3;
 }
class A {
  shared delayedrelease nodestructor cycles B child1;
  shared immediaterelease nocycles B child2;
  owned nocycles B child3;
}

Based on the required qualities, static analysis, and profiling, the
compiler chooses the most efficient storage that meets the constraints
and matches it up to the available runtime.
Feb 06 2014