
digitalmars.D - DIP60: nogc attribute

reply Walter Bright <newshound2 digitalmars.com> writes:
http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455
Apr 15 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 http://wiki.dlang.org/DIP60
Given the current strong constraints on the direction of D's design, I like it. But in DIPs I suggest also writing a list of the advantages of a proposal, and a list of all the disadvantages.

In this ER I suggested a @noheap that is also meant to forbid the C memory allocation functions: https://issues.dlang.org/show_bug.cgi?id=5219 But I guess there is no reliable way to catch those function calls too? If the discussion is against @noheap then I'll close that ER as obsolete.

With @nogc D has most of the basic function effects covered. One effect that D does not yet put under control is nontermination: annotating functions that cannot loop forever and never return (like a function with an infinite loop inside, etc.). But for the kind of programs written in D, I presume this effect is not yet important enough to deserve an annotation like @terminates.

There is a language studied by Microsoft that faces the problem of effect algebra in a more disciplined and high-level way. I don't know how much this can help D at its current development stage: http://research.microsoft.com/en-us/projects/koka/ But in general reading research papers is useful to get in a good frame of mind. In the Koka language, functions that may not terminate are annotated with "div", which means "divergence". But Koka has good effect inference.

Bye,
bearophile
Apr 15 2014
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.

What about syscalls?

Nothrow functions allow one to call non-nothrow functions, but catch all exceptions. What if you want to use the GC for allocation temporarily, and then delete all the usage before returning?

Must you use C malloc/free? What about such code that is @safe, which cannot call free?

Off the top of my head...

-Steve
Apr 15 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 11:41 AM, Steven Schveighoffer wrote:
 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.
Ok, but I don't see how.
 What about syscalls?
Not sure what you mean by that, but obviously Windows API functions would be @nogc.
 Nothrow functions allow one to call non-nothrow functions,
 but catch all exceptions. What if you want to use the GC for allocation
 temporarily, and then delete all the usage before returning?
@nogc doesn't allow an escape from it. That's the point of it.
 Must you use C malloc/free?
If you can use GC/GC.free, then you can use malloc/free.
 What about such code that is @safe, which cannot call free?
There's no point to @nogc if you can still call the GC in it.
Apr 15 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 14:58:25 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/15/2014 11:41 AM, Steven Schveighoffer wrote:
 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.
Ok, but I don't see how.
i.e. the rest of my post.
 What about syscalls?
Not sure what you mean by that, but obviously Windows API functions would be @nogc.
Linux syscalls are not Windows API, they are extern(C) calls. Basically, we will have to mark most as @nogc, right? Or are extern(C) calls automatically considered @nogc?
 Nothrow functions allow one to call non-nothrow functions,
 but catch all exceptions. What if you want to use the GC for allocation
 temporarily, and then delete all the usage before returning?
@nogc doesn't allow an escape from it. That's the point of it.
My point:

void foo1() { throw new Exception("hi"); }

void foo2() nothrow
{
    try { foo1(); }
    catch(Exception e) {}
}

This is valid.

int isthisok(int x, int y) @nogc
{
   // need scratch space
   int[] buf = new int[x * y];
   scope(exit) GC.free(buf.ptr);
   // perform some algorithm using buf
   ...
   //
   return buf[$-1];
}

Valid?
 Must you use C malloc/free?
If you can use GC/GC.free, then you can use malloc/free.
What if you can't use GC/GC.free? i.e. @nogc (the point of this thread)
 What about such code that is @safe, which cannot call free?
There's no point to @nogc if you can still call the GC in it.
This is a follow-on to the previous question. Let's say you cannot use the GC, but clearly, I can use C malloc/free. @safe code that needs arbitrary buffers must use C malloc/free to manage them, but @safe code cannot legally call free. I think we would need some sort of scoped allocator to make life bearable.

-Steve
Apr 15 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 15 April 2014 at 19:12:30 UTC, Steven Schveighoffer 
wrote:
 int isthisok(int x, int y) @nogc
 {
    // need scratch space
    int[] buf = new int[x * y];
    scope(exit) GC.free(buf.ptr);
    // perform some algorithm using buf
    ...
    //
    return buf[$-1];
 }

 Valid?
No way. This can trigger a GC collection. @nogc is not about observable pre- and post-state but about prohibiting the specific operation completely.
Apr 15 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/15/14, 12:18 PM, Dicebot wrote:
 On Tuesday, 15 April 2014 at 19:12:30 UTC, Steven Schveighoffer wrote:
 int isthisok(int x, int y) @nogc
 {
    // need scratch space
    int[] buf = new int[x * y];
    scope(exit) GC.free(buf.ptr);
    // perform some algorithm using buf
    ...
    //
    return buf[$-1];
 }

 Valid?
No way. This can trigger a GC collection. @nogc is not about observable pre- and post-state but about prohibiting the specific operation completely.
Very nice. This is a preemptive close of an entire class of arguments. -- Andrei
Apr 15 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:12 PM, Steven Schveighoffer wrote:
 What about syscalls?
Not sure what you mean by that, but obviously Windows API functions would be @nogc.
Linux syscalls are not Windows API, they are extern(C) calls. Basically, we will have to mark most as @nogc, right?
Right, just like nothrow.
 Or are extern(C) calls automatically considered @nogc?
No, just like for nothrow.
 int isthisok(int x, int y) @nogc
 {
     // need scratch space
     int[] buf = new int[x * y];
     scope(exit) GC.free(buf.ptr);
     // perform some algorithm using buf
     ...
     //
     return buf[$-1];
 }

 Valid?
No.
 Must you use C malloc/free?
If you can use GC/GC.free, then you can use malloc/free.
What if you can't use GC/GC.free? i.e. @nogc (the point of this thread)
Then use malloc/free.
 What about such code that is @safe, which cannot call free?
There's no point to @nogc if you can still call the GC in it.
This is a follow-on to the previous question. Let's say you cannot use the GC, but clearly, I can use C malloc/free. @safe code that needs arbitrary buffers must use C malloc/free to manage them, but @safe code cannot legally call free.
That's why @trusted exists.
 I think we would need some sort of scoped allocator to make life bearable.
p = malloc(...);
scope(exit) free(p);

Let's be clear about the motivation for @nogc - there are a lot of people who will not use D because of fear of the GC. They want a guarantee that the GC isn't being called. They don't want code that hides calls to the GC.
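For illustration, a minimal sketch combining the two points above - a @trusted wrapper owns the malloc/free pair so that @safe callers never touch free directly (the wrapper name and signature are invented for this example):

import core.stdc.stdlib : malloc, free;

// @trusted: the unsafe allocation/free is confined to one audited function.
void withScratch(size_t n, scope void delegate(int[]) @safe dg) @trusted
{
    auto p = cast(int*) malloc(n * int.sizeof);
    if (p is null) assert(0, "out of memory");
    scope(exit) free(p);     // freed on every exit path
    dg(p[0 .. n]);           // @safe code only ever sees a bounded slice
}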
Apr 15 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 15:20:41 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Let's be clear about the motivation for @nogc - there are a lot of  
 people who will not use D because of fear of GC. They want a guarantee  
 that the GC isn't being called. They don't want code that hides calls to  
 GC.
The DIP should be clear on this. Right now it says "guarantee code will not allocate using the GC". I now understand the rationale and intended result. All that is left to specify is the implementation (who does what).

-Steve
Apr 15 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 14:41:36 -0400, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 http://wiki.dlang.org/DIP60'
Wow, that's quite underspecified.
What specifically is prevented? GC.malloc, GC.free, clearly. What about GC.getAttr? GC.query? These will not invoke collection cycles (the point of @nogc).

Actually, maybe just fullcollect is the root @nogc method...

In any case, more concrete details should be specified in the DIP.

-Steve
Apr 15 2014
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 15:14:47 -0400, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 Actually, maybe just fullcollect is the root @nogc method...
Said that wrong: fullcollect is the base method you cannot mark as @nogc.

-Steve
Apr 15 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:14 PM, Steven Schveighoffer wrote:
 On Tue, 15 Apr 2014 14:41:36 -0400, Steven Schveighoffer <schveiguy yahoo.com>
 wrote:

 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.
What specifically is prevented?
All calls to the GC.
Apr 15 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 15 April 2014 at 19:40:50 UTC, Walter Bright wrote:
 On 4/15/2014 12:14 PM, Steven Schveighoffer wrote:
 On Tue, 15 Apr 2014 14:41:36 -0400, Steven Schveighoffer 
 <schveiguy yahoo.com>
 wrote:

 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright 
 <newshound2 digitalmars.com>
 wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.
What specifically is prevented?
All calls to the GC.
What about GC calls that cannot cause a collection? Do they even exist?
Apr 15 2014
next sibling parent reply "Tove" <tove fransson.se> writes:
On Tuesday, 15 April 2014 at 19:44:39 UTC, John Colvin wrote:
 On Tuesday, 15 April 2014 at 19:40:50 UTC, Walter Bright wrote:
 On 4/15/2014 12:14 PM, Steven Schveighoffer wrote:
 On Tue, 15 Apr 2014 14:41:36 -0400, Steven Schveighoffer 
 <schveiguy yahoo.com>
 wrote:

 On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright 
 <newshound2 digitalmars.com>
 wrote:

 http://wiki.dlang.org/DIP60
Wow, that's quite underspecified.
What specifically is prevented?
All calls to the GC.
What about GC calls that cannot cause a collection? Do they even exist?
Yes, please all, even "harmless" calls. This way you are guaranteed that you can include @nogc modules in projects which don't even link with a GC.
Apr 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:49 PM, Tove wrote:
 Yes, please all, even "harmless" calls. This way you are guaranteed that you can
 include @nogc modules in projects which don't even link with a GC.
Yup.
Apr 15 2014
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 15 April 2014 at 19:56:46 UTC, Walter Bright wrote:
 On 4/15/2014 12:49 PM, Tove wrote:
 Yes, please all, even "harmless" calls. This way you are 
 guaranteed that you can
 include @nogc modules in projects which don't even link with 
 a GC.
Yup.
One issue we might encounter is that when a function requires a local temporary buffer, "@nogc" and "pure" will be mutually exclusive. The implementation can either use the GC, and be pure but not @nogc; or it can use malloc, and be @nogc but impure.

I recently worked on a more generic version of your "ScopeBuffer", "ScopeAppender", which makes some concessions to be usable in a more generic fashion. It uses the GC, even though it retains complete ownership of the allocated data, if only just to be usable in pure code.

So, yeah. That might be a problem for those that want to do without the GC, but retain purity.
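To make the tension concrete, a minimal sketch (function names invented; @nogc here is the attribute this DIP proposes):

import core.stdc.stdlib : malloc, free;

// GC scratch space: can be pure, but allocates via the GC, so not @nogc.
int lastOfSquares(int n) pure
{
    auto buf = new int[n];                  // GC allocation
    foreach (i, ref e; buf) e = cast(int)(i * i);
    return buf[n - 1];
}

// malloc scratch space: no GC use, but malloc is impure, so not pure.
int lastOfSquaresC(int n) /* @nogc */
{
    auto buf = cast(int*) malloc(n * int.sizeof);
    scope(exit) free(buf);
    foreach (i; 0 .. n) buf[i] = cast(int)(i * i);
    return buf[n - 1];
}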
Apr 15 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:25 PM, monarch_dodra wrote:
 So, yeah. That might be a problem for those that want to do without the GC, but
 retain purity.
There are ways to cheat if you must.
Apr 15 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:44 PM, John Colvin wrote:
 What about GC calls that cannot cause a collection? Do they even exist?
All functions not marked as @nogc cannot be called from @nogc functions. Of course, the person adding @nogc attributes to code inside the GC had better know what they're doing.
Apr 15 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:14 PM, Steven Schveighoffer wrote:
 What specifically is prevented? GC.malloc, GC.free, clearly. What about
 GC.getAttr? GC.query? These will not invoke collection cycles (the point of
 @nogc).
All functions not marked as @nogc cannot be called from a @nogc function. GC.malloc(), etc., are functions. It works very much analogously to nothrow.
Apr 15 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 15:47:58 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/15/2014 12:14 PM, Steven Schveighoffer wrote:
 What specifically is prevented? GC.malloc, GC.free, clearly. What about
 GC.getAttr? GC.query? These will not invoke collection cycles (the  
 point of @nogc).
All functions not marked as @nogc cannot be called from a @nogc function. GC.malloc(), etc., are functions. It works very much analogously to nothrow.
Right, but at the end of the day, nothrow doesn't allow throw statements. @nogc, so far, just doesn't allow you to call other functions that aren't marked @nogc. There is no primitive that is not allowed (I assume new isn't allowed, but you can access the GC in other ways). These requirements, or assumptions, need to be documented.

-Steve
Apr 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:41 PM, Steven Schveighoffer wrote:
 Right, but at the end of the day, nothrow doesn't allow throw statements.
 @nogc,
 so far, just doesn't allow you to call other functions that aren't marked
 @nogc.
"GC allocations in a @nogc function will be disallowed, and that means calls to operator new, closures that allocate on the GC, array concatenation, array appends, and some array literals."
Apr 15 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 16:48:42 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/15/2014 1:41 PM, Steven Schveighoffer wrote:
 Right, but at the end of the day, nothrow doesn't allow throw  
 statements. @nogc,
 so far, just doesn't allow you to call other functions that aren't  
 marked @nogc.
"GC allocations in a @nogc function will be disallowed, and that means calls to operator new, closures that allocate on the GC, array concatenation, array appends, and some array literals."
Oops! Needed to refresh my browser, thanks. -Steve
Apr 15 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
I have an issue related to adding an extra attribute: Attributes of non-template functions. Currently, you have to mark most functions as already pure, nothrow and @safe. If we are adding another attribute, code will start looking like this:

int someTrivialFunction(int i) @safe pure nothrow @nogc;

*If* we introduce "yet" another attribute, then I think it should be worth considering adding the simple "@inferred" or "@everything" wildcard or something. Having a function whose attributes take more room than the signature proper is pretty bad :/

Kind of not *directly* related to @nogc, but this DIP adds relevance to the "issue". I mean, if we want this to actually be useful in Phobos (or elsewhere), it will require *massive* tagging all over the place.
Apr 15 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 12:57 PM, monarch_dodra wrote:
 I mean, if we want this to actually be useful in Phobos (or other), it will
 require *massive* tagging all over the place.
I don't see that we have much choice about it. Note that an entire file can be marked with just:

@nogc:
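For instance, a hypothetical module using that form (module name and functions invented for illustration):

module fastpath;

@nogc:   // applies to every declaration that follows

void frobnicate() { /* no GC allocation permitted here */ }
void twiddle()    { /* ...nor here */ }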
Apr 15 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 16:06:26 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/15/2014 12:57 PM, monarch_dodra wrote:
 I mean, if we want this to actually be useful in Phobos (or other), it  
 will
 require *massive* tagging all over the place.
I don't see that we have much choice about it. Note that an entire file can be marked with just:

@nogc:
Then let's please have @gc.

-Steve
Apr 15 2014
parent reply "Brad Anderson" <eco gnuk.net> writes:
On Tuesday, 15 April 2014 at 20:56:24 UTC, Steven Schveighoffer 
wrote:
 On Tue, 15 Apr 2014 16:06:26 -0400, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 On 4/15/2014 12:57 PM, monarch_dodra wrote:
 I mean, if we want this to actually be useful in Phobos (or 
 other), it will
 require *massive* tagging all over the place.
I don't see that we have much choice about it. Note that an entire file can be marked with just:

@nogc:
Then let's please have @gc.

-Steve
Yes, please. Too few of the attributes have inverse attributes.

Being able to stick your defaults up at the top of your module and then overriding them only when needed would be very nice and make the code a lot more tidy.

Also, it'd be nice if you could optionally stick a @ before some of the attributes to make everything look consistent.

@pure @nothrow @nogc: /* Mmm */
Apr 15 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 2:41 PM, Brad Anderson wrote:
 Yes, please. Too few of the attributes have inverse attributes.
That's a subject for another DIP.
Apr 15 2014
parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 15 April 2014 at 21:42:51 UTC, Walter Bright wrote:
 On 4/15/2014 2:41 PM, Brad Anderson wrote:
 Yes, please. Too few of the attributes have inverse attributes.
That's a subject for another DIP.
This would go fairly well with Andrei's idea of passing true or false to an attribute to enable or disable it.

@gc(false) void fun() {}

Also, as was mentioned earlier in the thread, if @gc was actually implemented as a UDA just like any other, @gc could simply be a struct that the compiler looks for, something along those lines.

struct gc
{
    bool enable;
    //...
}

Which naturally implements Andrei's proposed syntax. The same could be done for @property, @safe, etc. If we had DMD as a library, @gc probably wouldn't even need to be "special", i.e., compiler magic.

Finally, as monarch_dodra mentioned, the number of optional attributes to mark a function with can get to be a problem after a while. I have two ideas on this. One is extending the concept of aliases to also alias attributes, like so:

//TypeTuple or just bare list?
alias everything = TypeTuple!(@safe, nothrow, pure, @gc(false));

or

alias everything(Attrs...) = Attrs;

I think that the Microsoft language with effect algebra (bearophile has mentioned it before) does this. E.g., pure is actually:

alias pure: noeffects nothrow //... (I don't remember the actual syntax)

Secondly, this could just be a "higher level attribute". I don't know if this would require a language change or not...

struct Everything
{
    bool _pure;
    bool _nothrow;
    //...
}
Apr 15 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Meta:

 //TypeTuple or just bare list?
 alias everything = TypeTuple!(@safe, nothrow, pure, @gc(false));

 or

 alias everything(Attrs...) = Attrs;

 I think that the Microsoft language with effect algebra 
 (Bearophile has mentioned it before) does this. E.g., pure is 
 actually:
 alias pure: noeffects nothrow //... (I don't remember the 
 actual syntax)
In Koka those are not lists but sets of effects. Bye, bearophile
Apr 15 2014
parent "Meta" <jared771 gmail.com> writes:
On Wednesday, 16 April 2014 at 03:58:08 UTC, bearophile wrote:
 Meta:

 //TypeTuple or just bare list?
 alias everything = TypeTuple!(@safe, nothrow, pure, 
 @gc(false));

 or

 alias everything(Attrs...) = Attrs;

 I think that the Microsoft language with effect algebra 
 (Bearophile has mentioned it before) does this. E.g., pure is 
 actually:
 alias pure: noeffects nothrow //... (I don't remember the 
 actual syntax)
In Koka those are not lists but sets of effects. Bye, bearophile
Yes, but it's similar to the Koka concept, adapted for D. Perhaps D could lift the concept from Koka without any changes, I don't know.
Apr 15 2014
prev sibling parent "w0rp" <devw0rp gmail.com> writes:
On Wednesday, 16 April 2014 at 03:26:24 UTC, Meta wrote:
 This would go fairly well with Andrei's idea of passing true or 
 false to an attribute to enable or disable it.

 @gc(false) void fun() {}
I don't like this because it's hard to read. It's a bad idea. Never use booleans in interfaces like that. @gc and @nogc are better.
Apr 16 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 15 April 2014 at 21:41:37 UTC, Brad Anderson wrote:
 Yes, please. Too few of the attributes have inverse attributes.

 Being able to stick your defaults up at the top of your module 
 and then overriding them only when needed would be very nice 
 and make the code a lot more tidy.
Then you need a compiler option that will prevent @gc, otherwise you risk libraries pulling in @gc code as quick fixes without library users that want @nogc noticing the deficiency.

Ola.
Apr 15 2014
parent reply Matej Nanut via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 16 April 2014 01:45, via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 Then you need a compiler option that will prevent @gc, otherwise you risk
 libraries pulling in @gc code as quick fixes without library users that want
 @nogc noticing the deficiency.
This shouldn't be a problem if you plonk @nogc: at the top of your own file, as it won't compile anymore if you try to call @gc functions.
Apr 15 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 15 April 2014 at 23:54:24 UTC, Matej Nanut via 
Digitalmars-d wrote:
 This shouldn't be a problem if you plonk @nogc: at the top of 
 your own file, as it won't compile anymore if you try to call 
 @gc functions.
It is a problem if you are allowed to override @nogc with @gc, which is what the post I responded to suggested.
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 16 April 2014 at 08:46:56 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 15 April 2014 at 23:54:24 UTC, Matej Nanut via 
 Digitalmars-d wrote:
 This shouldn't be a problem if you plonk @nogc: at the top of 
 your own file, as it won't compile anymore if you try to call 
 @gc functions.
It is a problem if you are allowed to override @nogc with @gc, which is what the post I responded to suggested.
Btw, I think you should add @noalloc also, which prevents both new and malloc. It would be useful for real-time callbacks, interrupt handlers etc.
Apr 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 1:49 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 Btw, I think you should add @noalloc also, which prevents both new and malloc. It
 would be useful for real-time callbacks, interrupt handlers etc.
Not practical. malloc() is only one way of allocating memory - user defined custom allocators are commonplace.
Apr 16 2014
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 16 April 2014 at 17:39:32 UTC, Walter Bright wrote:
 Not practical. malloc() is only one way of allocating memory - 
 user defined custom allocators are commonplace.
What I want is a __trait that scans for all call expressions in a particular function and returns all those functions.

Then, we can check them for UDAs using the regular way and start to implement library defined things like @safe, @nogc, etc. (safe and gc are a bit different because they also are affected by built-in language features, not just functions, but the same idea of recursively scanning for an annotation in the function body).

Of course, this wouldn't always be perfect, separate compilation could be used to lie about or hide annotations in a function prototype, but meh I don't really care about that, the main use for me would be static asserts right under the function definition anyway.
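A rough sketch of how that check might read; note that __traits(getCalledFunctions, ...) is invented here - no such trait exists - and noGC below is an ordinary user-defined struct used as a UDA, not the proposed built-in attribute:

struct noGC {}

enum callsOnlyNoGC(alias f) = () {
    bool ok = true;
    foreach (callee; __traits(getCalledFunctions, f))   // hypothetical trait
    {
        bool marked = false;
        foreach (uda; __traits(getAttributes, callee))
            static if (is(uda == noGC))
                marked = true;
        ok = ok && marked;
    }
    return ok;
}();

// usage, right under the function definition:
// static assert(callsOnlyNoGC!someFunction);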
Apr 16 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 What I want is a __trait that scans for all call expressions in 
 a particular function and returns all those functions.

 Then, we can check them for UDAs using the regular way and 
 start to implement library defined things like @safe, @nogc, 
 etc.
This is the start of a nice idea to extend the D type system a little in user defined code. But I think it still needs some refinement. I also think there can be a more automatic way to test them than "the regular way" of putting a static assert outside the function. Bye, bearophile
Apr 17 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-04-16 19:52, Adam D. Ruppe wrote:

 What I want is a __trait that scans for all call expressions in a
 particular function and returns all those functions.

 Then, we can check them for UDAs using the regular way and start to
 implement library defined things like @safe, @nogc, etc. (safe and gc
 are a bit different because they also are affected by built-in language
 features, not just functions, but the same idea of recursively scanning
 for an annotation in the function body).

 Of course, this wouldn't always be perfect, separate compilation could
 be used to lie about or hide annotations in a function prototype, but
 meh I don't really care about that, the main use for me would be static
 asserts right under the function definition anyway.
Sounds like a job for AST macros. -- /Jacob Carlborg
Apr 18 2014
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Not practical. malloc() is only one way of allocating memory - 
 user defined custom allocators are commonplace.
OK, then I'll have to close my ER about @noheap. Bye, bearophile
Apr 16 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 16 April 2014 at 17:39:32 UTC, Walter Bright wrote:
 Not practical. malloc() is only one way of allocating memory - 
 user defined custom allocators are commonplace.
Not sure why this is not practical. If the custom allocators are in D then you should be able to track all the way down to malloc. In sensitive code like NMIs you DO want to use custom allocators (allocating from a pool, ring buffer etc) or none at all. However, I think it falls into the same group as tracking syscalls in a call chain. And I guess you would have to think about library/syscall tracers such as ltrace, dtrace/truss, strace, ktrace, SystemTap etc too…
Apr 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 2:14 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 If the custom allocators are in D then you
 should be able to track all the way down to malloc.
malloc is hardly the only storage allocator.
Apr 16 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 16 April 2014 at 22:34:35 UTC, Walter Bright wrote:
 malloc is hardly the only storage allocator.
Except for syscalls such as brk/sbrk, which ones are you thinking of?
Apr 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 3:45 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 16 April 2014 at 22:34:35 UTC, Walter Bright wrote:
 malloc is hardly the only storage allocator.
Except for syscalls such as brk/sbrk, which ones are you thinking of?
I've written several myself that do not use malloc. Even the Linux kernel does not use malloc. Windows offers many ways to allocate memory without malloc. Trying to have a core language detect attempts to write a storage allocator is way, way beyond the scope of what is reasonable for it to do. And, frankly, I don't see a point for such a capability. malloc is hardly the only problem people will encounter with realtime callbacks. You'll want to avoid disk I/O, network access, etc., too.
Apr 16 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 malloc is hardly the only problem people will encounter with
 realtime callbacks. You'll want to avoid disk I/O, network
 access, etc., too.
It seems a good idea to offer a way to extend the type system with new semantically meaningful annotations in user code. (Koka language does this too, with its effects management). I have seen an almost nice idea in this same thread. Bye, bearophile
Apr 16 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 16 April 2014 at 23:14:27 UTC, Walter Bright wrote:
 On 4/16/2014 3:45 PM, "Ola Fosheim Grøstad" I've written 
 several myself that do not use malloc.
If it is shared or can call brk() it should be annotated.
 Even the Linux kernel does not use malloc. Windows offers many 
 ways to allocate memory without malloc. Trying to have a core 
 language detect attempts to write a storage allocator is way, 
 way beyond the scope of what is reasonable for it to do.
Library and syscalls can be marked, besides you can have dynamic tracing in debug mode.
 And, frankly, I don't see a point for such a capability.
Safe and contention free use of libraries in critical code paths. The alternative is to guess if it is safe to use.
 malloc is hardly the only problem people will encounter with 
 realtime callbacks. You'll want to avoid disk I/O, network 
 access, etc., too.
Yes, all syscalls. But malloc is easier to overlook and it might call brk() seldom, so detecting it without support might be difficult.
Apr 16 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
On Thu, 17 Apr 2014 06:19:56 +0000, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Wednesday, 16 April 2014 at 23:14:27 UTC, Walter Bright wrote:
 On 4/16/2014 3:45 PM, "Ola Fosheim Grøstad" I've written 
 several myself that do not use malloc.
If it is shared or can call brk() it should be annotated.
 Even the Linux kernel does not use malloc. Windows offers many 
 ways to allocate memory without malloc. Trying to have a core 
 language detect attempts to write a storage allocator is way, 
 way beyond the scope of what is reasonable for it to do.
Library calls and syscalls can be marked; besides, you can have dynamic tracing in debug mode.
It's a bit of a grey zone. There are probably real-time malloc() implementations out there. And syscalls like mmap() can be used to allocate virtual memory or just to map a file into virtual memory. If you mark all syscalls, that doesn't matter of course. At the end of the day you cannot really trace what a library uses that you happen to call into. Or at least not without significant overhead at runtime.

-- 
Marco
Apr 18 2014
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 16 Apr 2014 13:39:36 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/16/2014 1:49 AM, "Ola Fosheim Grøstad"  
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Btw, I think you should add @noalloc also, which prevents both new and 
 malloc. It
 would be useful for real-time callbacks, interrupt handlers etc.
 Not practical. malloc() is only one way of allocating memory - user 
 defined custom allocators are commonplace.
More practical:

A mechanism for the compiler to apply arbitrary "transitive" attributes to functions.

In other words, some mechanism that lets you tell the compiler "all the functions this someattribute function calls must have someattribute attached to them," and that also applies the attribute automatically for templates.

Then, you can come up with whatever restrictive schemes you want.

Essentially, this is the same as @nogc, except the compiler has special hooks to the GC (e.g. new) that need to be handled. The compiler has no such hooks for C malloc, or whatever allocation scheme you use, so it's all entirely up to the library and user code.

-Steve
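In invented syntax, that mechanism might read roughly like this (nothing below exists in D; @transitive and @noalloc are placeholders for the idea):

@transitive struct noalloc {}        // declare a transitive attribute

@noalloc void leaf() { }             // fine: calls nothing

@noalloc void caller() { leaf(); }   // fine: every callee carries @noalloc

@noalloc void bad()
{
    someUnmarkedFunction();          // error under this scheme:
}                                    // the callee lacks @noalloc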
Apr 17 2014
prev sibling parent reply "Gary Willoughby" <dev nomad.so> writes:
On Tuesday, 15 April 2014 at 21:41:37 UTC, Brad Anderson wrote:
 Yes, please. Too few of the attributes have inverse attributes.

 Being able to stick your defaults up at the top of your module 
 and then overriding them only when needed would be very nice 
 and make the code a lot more tidy.
I actually think this will make code harder to read. e.g.:

@nogc:

void foo() { ... }

void bar() @gc { ... }

@gc
{
    void baz() @nogc { ... }
}

@gc:

void quxx() @nogc { ... }

Ewww... nasty stuff.
Apr 16 2014
parent "Gary Willoughby" <dev nomad.so> writes:
On Wednesday, 16 April 2014 at 17:22:02 UTC, Gary Willoughby
wrote:
 On Tuesday, 15 April 2014 at 21:41:37 UTC, Brad Anderson wrote:
 Yes, please. Too few of the attributes have inverse attributes.

 Being able to stick your defaults up at the top of your module 
 and then overriding them only when needed would be very nice 
 and make the code a lot more tidy.
I actually think this will make code harder to read. e.g.:

@nogc:

void foo() { ... }

void bar() @gc { ... }

@gc
{
    void baz() @nogc { ... }
}

@gc:

void quxx() @nogc { ... }

Ewww... nasty stuff.
My point was opposite attributes complicate the code hugely.
Apr 16 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Apr 15, 2014 at 07:57:58PM +0000, monarch_dodra via Digitalmars-d wrote:
[...]
 I have an issue related to adding an extra attribute: Attributes of
 non-template functions. Currently, you have to mark most functions as
 already pure, nothrow and @safe. If we are adding another attribute,
 code will start looking like this:
 
 int someTrivialFunction(int i) @safe pure nothrow @nogc;
 
 *If* we introduce "yet" another attribute, then I think it should be
 worth considering adding the simple "@inferred" or "@everything"
 wildcard or something.
 
 Having a function whose attributes takes more room than the signature
 proper is pretty bad :/
 
 Kind of not *directly* related to @nogc, but this DIP adds relevance
 to the "issue".
 
 I mean, if we want this to actually be useful in Phobos (or other), it
 will require *massive* tagging all over the place.
Is automatic inference of @nogc anywhere within our reach right now? That would save us a lot of grunt work marking up template functions in Phobos (and user code), not to mention the case of template functions with alias parameters, for which manual marking of @nogc may not even be possible.

But yeah, for non-template functions, some way of triggering attribute inference would be very helpful. What about @auto?

T

-- 
Why are you blatanly misspelling "blatant"? -- Branden Robinson
Apr 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Is automatic inference of @nogc anywhere within our reach right now?
Yes, it's no harder than pure or nothrow inference is.
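For context, template functions already get pure/nothrow/@safe inferred from their bodies, so @nogc could ride the same machinery. A sketch of the expected behavior once that lands (function names invented):

T twice(T)(T x) { return x + x; }   // attributes inferred, incl. @nogc

@nogc int user(int y)
{
    return twice(y);   // OK: twice!int is inferred @nogc
}

T[] dupN(T)(size_t n) { return new T[n]; }
// dupN would infer as *not* @nogc, so calling it from user() would fail.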
Apr 15 2014
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 15 April 2014 at 20:13:19 UTC, Walter Bright wrote:
 On 4/15/2014 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Is automatic inference of @nogc anywhere within our reach 
 right now?
Yes, it's no harder than pure or nothrow inference is.
I meant mostly for non-template code, where there is no inference.
Apr 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:19 PM, monarch_dodra wrote:
 On Tuesday, 15 April 2014 at 20:13:19 UTC, Walter Bright wrote:
 On 4/15/2014 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Is automatic inference of @nogc anywhere within our reach right now?
Yes, it's no harder than pure or nothrow inference is.
I meant mostly for non-template code, where there is no inference.
I had a PR for that, but nobody liked it.
Apr 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:37 PM, Walter Bright wrote:
 On 4/15/2014 1:19 PM, monarch_dodra wrote:
 On Tuesday, 15 April 2014 at 20:13:19 UTC, Walter Bright wrote:
 On 4/15/2014 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Is automatic inference of @nogc anywhere within our reach right now?
Yes, it's no harder than pure or nothrow inference is.
I meant mostly for non-template code, where there is no inference.
I had a PR for that, but nobody liked it.
https://github.com/D-Programming-Language/dmd/pull/1877
Apr 15 2014
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Apr 15, 2014 at 01:40:07PM -0700, Walter Bright via Digitalmars-d wrote:
 On 4/15/2014 1:37 PM, Walter Bright wrote:
On 4/15/2014 1:19 PM, monarch_dodra wrote:
On Tuesday, 15 April 2014 at 20:13:19 UTC, Walter Bright wrote:
On 4/15/2014 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
Is automatic inference of @nogc anywhere within our reach right
now?
Yes, it's no harder than pure or nothrow inference is.
I meant mostly for non-template code, where there is no inference.
I had a PR for that, but nobody liked it.
https://github.com/D-Programming-Language/dmd/pull/1877
I'm not sure I like the idea of overloading "auto" to mean *both* "automatically infer return type" and "automatically infer attributes". What about @auto or @default for attribute inference, and leave return type inference as-is?

T

-- 
Too many people have open minds but closed eyes.
Apr 15 2014
prev sibling parent "Tove" <tove fransson.se> writes:
On Tuesday, 15 April 2014 at 20:40:05 UTC, Walter Bright wrote:
 I had a PR for that, but nobody liked it.
https://github.com/D-Programming-Language/dmd/pull/1877
If I correctly understand the reservations raised against this PR, the people objecting might have agreed to attribute inference for *private* functions. Would this be worth pursuing?
Apr 15 2014
prev sibling parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
On Tuesday, 15 April 2014 at 19:57:59 UTC, monarch_dodra wrote:
 I have an issue related to adding an extra attribute: 
 Attributes of non-template functions. Currently, you have to 
 mark most functions as already pure, nothrow and @safe. If we 
 are adding another attribute, code will start looking like 
 this:

 int someTrivialFunction(int i) @safe pure nothrow @nogc;
don't forget final ;)
Apr 17 2014
prev sibling next sibling parent reply "Frustrated" <Who where.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
"GC allocations in a nogc function will be disallowed, and that means calls to operator new" I do not think new is exclusive to GC? It can be overridden. If new is overridden by classes and marked nogc then it too should be allowed. e.g., new, slices, array behavior, even all the GC methods themselves, are all not necessarily dependent on the GC if their behavior can be overridden not to use the GC. Essentially all default GC dependent behavior should be marked gc and all default non-GC dependent behavior should be marked nogc. User defined behavior could be marked either way except nogc behavior can't use gc behavior. After all, nogc is just an attribute and means nothing except what is prescribed to it(you could mark all the GC's functions as nogc and then use them in nogc code if you want, which, of course, defeats the whole purpose). I just hope that if implemented, it is not blindly done so that prevents one from accurately removing GC dependence(which is the whole point of having the attribute).
Apr 15 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 1:42 PM, Frustrated wrote:
 I do not think new is exclusive to GC? It can be overridden.
Global operator new is not overridable.
 If new is overridden by classes and marked @nogc then it too should be allowed.
Then they are treated as regular functions - they'd have to be marked as @nogc to be used in @nogc functions.
 e.g., new, slices, array behavior, even all the GC methods themselves, are all
 not necessarily dependent on the GC if their behavior can be overridden not to
 use the GC.
Overrides use functions, and then the behavior depends on how those functions are annotated.
 I just hope that if implemented, it is not blindly done so that prevents one
 from accurately removing GC dependence(which is the whole point of having the
 attribute).
I don't know what you mean by this.
Apr 15 2014
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 15 Apr 2014 13:01:40 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Additional proposal:

In non-release mode, at the start of a @nogc function, a thread-local variable __nogc is incremented. At the end, __nogc is decremented. Then if any GC calls occur while __nogc is nonzero, an error is thrown.

This would be for debugging only, and I'm thinking specifically to prove absolutely that no hidden compiler-generated GC calls, or calls via extern(C) functions, occur.

-Steve
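A rough sketch of the proposal (all names here - __nogc, enterNoGC, gcCheck - are invented; note module-level variables in D are thread-local by default):

debug
{
    int __nogc;   // thread-local nesting counter

    // the compiler would emit these at @nogc function entry/exit:
    void enterNoGC() { ++__nogc; }
    void exitNoGC()  { --__nogc; }

    // and this check at the top of every GC entry point:
    void gcCheck()
    {
        if (__nogc != 0)
            assert(0, "GC used while inside a @nogc function");
    }
}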
Apr 15 2014
prev sibling next sibling parent reply "Mike" <none none.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
First of all, I have the following suggestions for the DIP:
* Articulate a more thorough motivation for this feature other than to give users what they want.
* Summarize previous discussions and provide links to them.
* Provide some justification for the final decision, even if it's subjective.

I don't believe users hesitant to use D will suddenly come to D now that there is a @nogc attribute. I also don't believe they want to avoid the GC, even if they say they do. I believe what they really want is to have an alternative to the GC. I suggest the following instead:
1. Provide the necessary druntime hooks to enable developers to create a variety of memory management implementations.
2. Given these hooks and ideas that have been discussed in the community and presented at DConf, improve D's default GC.
3. Implement a variety of alternative memory managers (official and unofficial) that can be compiled and/or linked into druntime and/or the program.

Alternative memory managers could include the following:
* Default D2 mark-and-sweep GC for backward compatibility
* Concurrent, precise or otherwise improved D2 mark-and-sweep GC
* Reference counting with mark-and-sweep fallback
* Automatic malloc, manual free
* Manual malloc, manual free
* {Insert your favorite here}

Users can use 'version(x)' if they wish to support more than one memory management scheme, though their choice may force them to change their idioms.

I suspect some of the motivation for this is to give customers "faster horses". I would be surprised if a @nogc attribute increased D's appeal, and I think efforts would be better allocated to some form of the above.

Bottom line: IMO, memory management is an implementation detail of the problem domain, not the language, and its implementation should be delegated to a library or to the programmer.

Mike
Apr 15 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2014 6:57 PM, Mike wrote:
 I suspect some of the motivation for this is to give customers "faster horses".
 I would be surprised if a @nogc attribute increased D's appeal, and I think
 efforts would be better allocated to some form of the above.
Asking for @nogc comes up *constantly*.
Apr 15 2014
next sibling parent reply "Mike" <none none.com> writes:
On Wednesday, 16 April 2014 at 02:14:18 UTC, Walter Bright wrote:
 On 4/15/2014 6:57 PM, Mike wrote:
 I suspect some of the motivation for this is to give customers 
 "faster horses".
 I would be surprised if a @nogc attribute increased D's appeal, and I think
 efforts would be better allocated to some form of the above.
Asking for @nogc comes up *constantly*.
I know it does, but users employing @nogc still have to manage memory somehow. Let's add hooks to the runtime and implement some GC alternatives, and then see what demand is like ;-)
Apr 15 2014
parent reply "froglegs" <nono yahoo.com> writes:
 I know it does, but users employing @nogc still have to manage 
 memory somehow.  Let's add hooks to the runtime and implement 
 some GC alternatives, and then see what demand is like ;-)
They use noGC and smart pointers/manual management a la C++.

You seem to be suggesting that people who don't want GC actually secretly want GC. Just to let you know, that isn't the case. A global one-size-fits-all approach to memory management is far from optimal.
Apr 15 2014
parent "Mike" <none none.com> writes:
On Wednesday, 16 April 2014 at 03:16:34 UTC, froglegs wrote:
 I know it does, but users employing @nogc still have to manage 
 memory somehow.  Let's add hooks to the runtime and implement 
 some GC alternatives, and then see what demand is like ;-)
They use noGC and smart pointers/manual ala C++.
Yes, and they would have to forego many of D's features that do implicit allocations (dynamic arrays, exceptions, etc...) and implement alternatives. If the right hooks and corresponding logic were added to the runtime, users could potentially implement memory management alternatives, including some which more closely mimic that of C++, and still be able to employ those features.
  You seem to be suggesting that people who don't want GC, 
 actually secretly want GC. Just to let you know, that isn't the 
 case.
No, I'm not. I'm suggesting people who say they don't want GC want an alternative way to manage memory. So I suggest adding runtime hooks that would enable us to build alternatives.
Apr 15 2014
prev sibling parent reply "Frustrated" <Who where.com> writes:
On Wednesday, 16 April 2014 at 02:14:18 UTC, Walter Bright wrote:
 On 4/15/2014 6:57 PM, Mike wrote:
 I suspect some of the motivation for this is to give customers 
 "faster horses".
 I would be surprised if a @nogc attribute increased D's appeal, and I think
 efforts would be better allocated to some form of the above.
Asking for @nogc comes up *constantly*.
How bout this!

Why not allow one to define their own attributes from a generalized subset, and then define a few standard ones like @nogc.

i.e., instead of having to define specific attributes every few years to satisfy some new thing, why not just abstract the process.

Attributes, I believe, are essentially relationships between parts of code? If so, then one simply has to implement some generic way to specify the attributes and the properties of the relationship to the compiler. Then anyone would have the tools to define and use these attributes as they wish. (In fact, I think it would just involve enhancing the current attribute support; probably just rewrite it all so that the same code is used for built-in attributes (@safe, pure, etc...) and user-defined attributes.)

So, we just need to define the attribute name and the properties it has. Assume Y uses X in some way (function call) and X has an attribute A defined on it:

Inheritance - Y inherits attribute A.

Exclusion - If Y has attribute B and B is mutually excluded from A, then error.

Composition - If Y also uses Z and Z has attribute B, then Y has the compound attribute (A:B). Compound attributes can be rewritten to other attributes using a grammar/reduction scheme. Some compositions can be invalid. E.g., @nogc and @gc, pure and notpure, etc...

Duality - If an attribute A is not specified for a block of code, then its inverse attribute is implicitly specified always. E.g., @gc and !@gc = @nogc are duals, and one or the other is always specified, even if implicit.

etc... [Note, I'm not saying all attributes have these properties, just that these are the possible properties they can have.]

By coming up with a general system (I'm sure there is some mathematical structure that describes attributes) it would be very easy to add attributes in the future, and there would be a consistent code backing for them. It would also be easier for CT reflection on attributes.

Anyways, just a thought, sounds easy in theory...
Apr 20 2014
next sibling parent reply "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Sunday, 20 April 2014 at 14:38:47 UTC, Frustrated wrote:
 On Wednesday, 16 April 2014 at 02:14:18 UTC, Walter Bright 
 wrote:
 On 4/15/2014 6:57 PM, Mike wrote:
 I suspect some of the motivation for this is to give 
 customers "faster horses".
 I would be surprised if a @nogc attribute increased D's 
 appeal, and I think
 efforts would be better allocated to some form of the above.
Asking for @nogc comes up *constantly*.
 How bout this!

 Why not allow one to define their own attributes from a generalized subset, and then define a few standard ones like @nogc.

 [...]
Sounds like a neat idea, now for some code examples? Because it sounds like we would need an entirely different notation mechanism or something crazy. Like:

struct MyPureFunction(alias MYFUNC) {
    shared static this() {
        registerFunc!(MYFUNC);
    }

    __annotation() {
        static if (!is(ReturnType!MYFUNC == void)) {
            return Tuple!(__annotation(pure), __annotation(property));
        } else {
            return Tuple!(__annotation(pure));
        }
    }
}

@MyPureFunction
string mypurefunc() {
    return "hi";
}

pragma(msg, mypurefunc);

I added the constructor in there because being able to run code dependent on it would enable registering of certain types (useful for e.g. Cmsed so users don't have to). This would add a new keyword (__annotation) in the same style as __traits. The __annotation function would be called post-constructor, meaning you could negate what you would normally return. Perhaps another function !__annotation to remove current ones.

Not quite sure how this would relate to @nogc but.. maybe it means we can fine-tune it per attribute/compiler or something. But hey, just my take.
Apr 20 2014
parent "Frustrated" <Who where.com> writes:
On Sunday, 20 April 2014 at 15:04:28 UTC, Rikki Cattermole wrote:
 On Sunday, 20 April 2014 at 14:38:47 UTC, Frustrated wrote:
 On Wednesday, 16 April 2014 at 02:14:18 UTC, Walter Bright 
 wrote:
 On 4/15/2014 6:57 PM, Mike wrote:
 I suspect some of the motivation for this is to give 
 customers "faster horses".
 I would be surprised if a @nogc attribute increased D's 
 appeal, and I think
 efforts would be better allocated to some form of the above.
Asking for @nogc comes up *constantly*.
 How bout this!

 Why not allow one to define their own attributes from a generalized subset, and then define a few standard ones like @nogc.

 [...]
 Sounds like a neat idea, now for some code examples? [...] But hey, just my take.
The way I see it is that attributes, in general, are simply meta tags applied to things. Just like tags used in audio files: you apply tags, do things with them, sort by tag, only play certain songs with certain tags, etc... Hence the compiler does not need to be coded for specific attributes, because attributes in and of themselves do not need code at the compiler level to handle them, except possibly for optimizations.

In any case, there should then be a common base for attributes (built-in or user-defined) that generalizes what they are now, which would also make them more powerful with a little work.

As far as @nogc goes, I think it's probably one of the simpler tags in that it only uses inheritance and logical-and composition. i.e., if all sub-attribute uses are @nogc then the attribute use becomes @nogc. E.g., Q uses X, Y, Z, which each have attributes x, y, z. Then Q's attribute q = (x == y == z == nogc) ? nogc : gc.

[To start off we could say Q's attribute q = x:y:z, a compound attribute... and use a "grammar" to simplify q if possible. If q's simplified compound attribute exists in the defined attributes, then processing continues until an atomic attribute is found... of course one has to be careful with ambiguity, but that probably won't ever be much of a problem.]

Simple expressions and statements can also have implicitly defined attributes. e.g., all of D is, by default, @nogc except slicing, new, etc. The GC module then introduces the @gc attribute on some of its functions (not that it really matters, because usually you don't use the GC directly but through the core features of D: slicing, new, etc...).

If @nogc is defined as above, then everything should just *work*. The compiler will form compound attributes and simplify them, and @nogc will propagate. You would rarely ever have to explicitly use @nogc or @gc. The same thing would happen with pure, @safe, etc...

Using the rules, and possibly some deduction, the compiler could cut out most of the work. e.g., pure could be automatically deduced in almost all cases (99% of use cases) by marking everything that is not an assignment in D as pure. Assignments may or may not be pure depending on whether they modify the parent scope. The compiler determines if they do, and if they do, then it marks that assignment impure... then it uses the same rule for combining all the attributes as @nogc.

If the compound attribute can't be reduced - e.g., some assignments in D can't be determined as pure or not by D - then the pureness is unknown, which results in a compound attribute that can't be reduced... and hence the user must explicitly mark it. (e.g., a pointer assignment can't guarantee pureness unless, maybe, static analysis is used to try to figure it out.) So, in D, some assignments are pure, some are impure, some are not provably pure and some are not provably impure. In the case of not provably pure/impure, one might allow the user to force the attribute to be one or the other so the compound attribute can be resolved. If that is not enough, the user could tell the compiler to assume the compound attribute resolves to something.

Anyways, it really feels like there is some simple mathematical structure going on here, but I can't quite put my finger on it. We have a set of attributes, a set of operators on the set of attributes (probably just need symmetric binary operators, but it would be nice to be able to handle the more general case), and some "code" to do things with them so user-defined attributes are more useful.
For example, suppose we have a complex program with two functions. Depending on something, one function may or may not be pure. But let's suppose there is an external reason why the other function must always have the same pureness as the first function. This seems like a difficult problem because it depends on context. If we have CT reflection of attributes and the ability to define attributes in cool ways, then:

void foo() { } // pure

void bar() @forceSame(foo, #pureness) { }

where forceSame is an attribute meta-function that takes a symbol and an attribute group (the group of related attributes, in this case #pureness = { pure, impure }), and forces bar to have the same attribute as foo within that group.

D being as cool as it is, we could even execute user defined code:

void foo() { } // pureness unknown

void bar() @execute({ if (foo:attributes.is(pure)) set(bar, @disable); }) { }

(Just pseudo code, so don't get your panties in a wad, but it should be obvious: if foo is pure then bar is disabled, else it is enabled.)

Of course one can take it as far as one wants. The point is that we have attributes, but no underlying common system to easily build on. It seems like every attribute added to D requires re-implementing the wheel, more or less. If such a system were already in D, there would not be any discussion about nogc, except possibly whether it should become a built-in attribute which the compiler could use for optimization purposes.
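Incidentally, a crude cousin of forceSame can already be expressed with std.traits introspection. A sketch (functionAttributes and FunctionAttribute are real std.traits names; using this as a stand-in for a proper attribute system is the invented part):

import std.traits : functionAttributes, FunctionAttribute;

void foo() pure { }

// bar's purity tracks foo's, decided at compile time:
static if (functionAttributes!foo & FunctionAttribute.pure_)
    void bar() pure { }
else
    void bar() { }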
Apr 20 2014
prev sibling next sibling parent "Ola Fosheim Grøstad" writes:
On Sunday, 20 April 2014 at 14:38:47 UTC, Frustrated wrote:
 Why not allow one to define their own attributes from a 
 generalized subset and then define a few standard ones like 
  nogc.

 i.e., instead of having to define specific attributes every few 
 years to satisfy some new thing, why not just abstract the 
 process.

 Attributes, I believe, are essentially relationships between 
 parts of code?

 If so, then one simply has to implement some generic way to 
 specify the attributes and properties of the relationship to 
 the compiler. Then anyone would have the tools to define and 
 use these attributes as they wish.
This could be very powerful, but I think you would need to put attributes in namespaces if you expand their usage.

You could also get better concurrency warnings/optimization if you could distinguish between reads/writes to specific types, type members, globals, and function calls, and tie it to mutexes.

Unfortunately, it isn't sufficient, since you often want to tag sets of instances, not sets of definitions (which is what attributes do: they create sets of defs and give each set a name).
Apr 20 2014
prev sibling parent reply "Jacob Carlborg" <doob me.com> writes:
On Sunday, 20 April 2014 at 14:38:47 UTC, Frustrated wrote:

 How bout this!

 Why not allow one to define their own attributes from a 
 generalized subset and then define a few standard ones like 
  nogc.
Sounds like you want AST macros. Have a look at this:
http://wiki.dlang.org/DIP50#Declaration_macros

-- 
/Jacob Carlborg
Apr 21 2014
parent reply "Frustrated" <Who where.com> writes:
On Monday, 21 April 2014 at 16:45:15 UTC, Jacob Carlborg wrote:
 On Sunday, 20 April 2014 at 14:38:47 UTC, Frustrated wrote:

 How bout this!

 Why not allow one to define their own attributes from a 
 generalized subset and then define a few standard ones like 
  nogc.
Sounds like you want AST macros. Have a look at this http://wiki.dlang.org/DIP50#Declaration_macros -- /Jacob Carlborg
Not quite. AST macros simply transform code. Attributes attach meta data to code. While I'm sure there is some overlap, they are not the same.

Unless AST macros have the ability to arbitrarily add additional contextual information to meta code, they can't emulate attributes.

E.g., suppose you have D with AST macros but not attributes; how can you add them? In the DIP, you have

macro attr (Context context, Declaration decl)
{
    auto attrName = decl.name;
    auto type = decl.type;

    return <[
        private $decl.type _$decl.name;

        $decl.type $decl.name ()
        {
            return _$decl.name;
        }

        $decl.type $decl.name ($decl.type value)
        {
            return _$decl.name = value;
        }
    ]>;
}

class Foo
{
    @attr int bar;
}

but attr is not an attribute. It is a macro. @attr converts the "int bar" field into a private setter and getter. This has nothing to do with attributes. (Just because you use the attr word and the @ symbol doesn't make it an attribute.)

I don't see how you could ever add attributes to D using the AST macros above unless the definition of an AST macro is modified. [Again, assuming D didn't have attributes in the first place.] This does not mean that AST macros could not be used to help define the generalized attributes, though.

What I am talking about is: instead of hard coding attributes in the compiler, one abstracts and generalizes the code so that any attribute could be added in the future with minimal work. It would simply require one to add to the built-in attributes list, add the attribute grammar (which is used to reduce compound attributes), and add any actions that happen when the attribute is used in code. E.g.,

builtin_attributes = {
    {pureness, pure, !pure/impure,
        attr = any(attr, impure) => impure
        attr = all(attr, pure) => pure
    }
    {gc, gc, !gc/nogc,
        attr = any(attr, gc) => gc
        attr = all(attr, nogc) => nogc
    }
    etc...
}

Notice that pureness and gc have the same grammatical rules. Code would be added to handle the pureness and gc attributes when they are encountered, for optimization purposes. The above syntax is just made up, and pretty bad, but hopefully it's not too difficult to get the bigger picture.

Every new built-in attribute would just have to be added to the list above (easy), and code that uses it for whatever purpose would be added where it belongs. User-defined attributes essentially make the attributes list above dynamic, allowing the user to add to it. The compiler would only be told how to simplify the attributes using the grammar, and would do so, but would not have any code inserted, because there is no way for the user to hook into the compiler properly. (I suppose it could be done if the compiler was written in an OOP-like way.)

In any case, because the compiler knows how to simplify UDA's using the provided grammar, it makes UDA's much more powerful. CT reflection and AST macros would make them way more powerful. The ability to add hooks into the compiler would nearly give the user the same power as the compiler writer. Of course, this would probably lead to all kinds of compatibility problems in user code.

Basically, as of now, all we can do is define UDA's; we can't define how they relate to each other (the grammar) nor what happens when they exist (the behavior). There should not be any real difference between UDA's and built-in attributes (except hooking, only because of complexity issues), and having a generalized attribute system in the compiler would go a long way toward bridging the gap.
The main thing to take away from this is that *if* D had such an attribute system, the gc/nogc issue would be virtually non-existent. It would only take a few minutes to add a UDA gc/nogc to D in user code. At least a few people would have done it, and its merits could be seen. Then it would be only a matter of "copy and pasting" the definition of the attribute into the compiler code, and it would become a built-in, available for everyone. At some point optimizations in the compiler could potentially be added, just for fun. Not really the point of the attribute, but having it does provide such possibilities.

Also, with such a system, attributes can propagate from the bottom up, and inference makes it a lot easier. E.g., atomic statements could be "marked" with attributes. Blocks of statements can be marked, but also inherit from the statements and blocks they use. With such a system, one could mark individual assembly instructions as, say, dangerous. If dangerous were a "bubbling" attribute, then you could check which functions in your code are dangerous, or whether your whole program is dangerous.

One could do the same with allocate. Any primitive thing that allocates in D would get the attribute. Then, if allocate is a "bubbling" attribute, you could find out exactly which parts of your code allocate. You could even find out where this happens. E.g.,

void foo()
{
    auto m = core.stdc.stdlib.malloc(size);
    bar(); // assume bar does not have the allocate attribute (does not allocate memory in any way)
}

foo would be marked allocate, since malloc is marked with allocate and allocate is an inheriting attribute. A utility could be written that shows the hierarchy of the program, allowing you to explore things; you could find foo and see that the reason foo is marked allocate is that it calls malloc, which is marked allocate. (malloc would be explicitly marked, while foo would be deduced by the compiler for us.)

Hopefully it is easy to see that, by starting from the bottom and with a well defined attribute system, the compiler can become more powerful by automatically simplifying attributes for us, no matter how complex the program.
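To make the deduction concrete, here's a toy sketch of bubbling an allocate mark up a call graph (allocates, calls and bubbles are invented names; cycle handling is omitted):

// Illustrative only: bubbling an "allocates" mark up a call graph.
bool[string] allocates;            // explicitly marked leaves, e.g. "malloc"
string[][string] calls;            // function -> list of callees

bool bubbles(string fn)
{
    if (auto p = fn in allocates)
        return *p;                 // explicitly marked or already deduced
    bool result = false;
    foreach (callee; calls.get(fn, null))
        result = result || bubbles(callee);
    return allocates[fn] = result; // cache the deduced mark
}

unittest
{
    allocates["malloc"] = true;
    allocates["bar"] = false;
    calls["foo"] = ["malloc", "bar"];
    assert(bubbles("foo")); // foo allocates because malloc does
}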
Apr 21 2014
parent Jacob Carlborg <doob me.com> writes:
On 21/04/14 19:49, Frustrated wrote:

 Not quite. AST macros simply transform code. Attributes attach meta data
 to code. While I'm sure there is some overlap they are not the same.

 Unless AST macros have the ability to arbitrary add additional
 contextual information to meta code then they can't emulate attributes.
I'm not saying we should emulate attributes, we already have those. BTW, I'm pretty sure they could be implemented with macros. Just return the exact same AST that was passed in, but replace the top AST node with a node that is a subclass that adds the data for the UDA.
 E.g., Suppose you have D with AST macros but not attributes, how can you
 add them?

 In the dip, you have

 macro attr (Context context, Declaration decl)
 {
      auto attrName = decl.name;
      auto type = decl.type;

      return <[
          private $decl.type _$decl.name;

          $decl.type $decl.name ()
          {
              return _$decl.name;
          }

          $decl.type $decl.name ($decl.type value)
          {
              return _$decl.name = value;
          }
      ]>;
 }

 class Foo
 {
       @attr int bar;
 }

 but attr is not an attribute. It is a macro. @attr converts the "int
 bar" field into a private setter and getter. This has nothing to do with
 attributes.
Sure it does. nogc could be implemented with AST macros:

@nogc void foo ()
{
    new Object;
}

macro nogc (Context context, Declaration decl)
{
    if (containsGCAllocation(decl))
        context.compiler.error(decl.name ~ " marked with @nogc performs GC allocations");

    return decl;
}
 (just cause you use the attr word and the @ symbol doesn't make it an
 attribute)


 I don't see how you could ever add attributes to D using AST macros
 above unless the definition of an AST macro is modified. [Again,
 assuming D didn't have attributes in the first place]

 This does not mean that AST macros could not be used to help define the
 generalized attributes though.


 What I am talking about is instead of hard coding attributes in the
 compiler, one abstracts and generalizes the code so that any attribute
 could be added in the future with minimal work.

 It would simply require one to add the built in attributes list, add the
 attribute grammar(which is used to reduce compound attributes), add any
 actions that happen when the attribute is used in code.

 e.g.,

 builtin_attributes = {

      {pureness, pure, !pure/impure,
          attr = any(attr, impure) => impure
          attr = all(attr, pure) => pure
      }

      {gc, gc, !gc/nogc,
          attr = any(attr, gc) => gc
          attr = all(attr, nogc) => nogc
      }
      etc... }

 notices that pureness and gc have the same grammatical rules. Code would
 be added to handle the pureness and gc attributes when they are come
 across for optimization purposes.

 The above syntax is just made up and pretty bad but hopefully not too
 difficult to get the bigger picture.

 Every new built in attribute would just have to be added to the list
 above(easy) and code that uses it for whatever purpose would be added in
 the code where it belongs.

 User define attributes essentially would make the attributes list above
 dynamic allowing the user to add to it. The compiler would only be told
 how to simplify the attributes using the grammar and would do so but
 would not have any code inserted because there is no way for the user to
 hook into the compiler properly(I suppose it could be done if the
 compiler was written in an oop like way).
The AST macros would provide a way to hook into the compiler. We already have a way to define attributes, that is, UDA's. What is missing is a way to add semantic meaning to the UDA's; that is where macros come in.

-- 
/Jacob Carlborg
Apr 22 2014
prev sibling parent reply "JN" <666total wp.pl> writes:
On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:
 I don't believe users hesitant to use D will suddenly come to D 
 now that there is a  nogc attribute.  I also don't believe they 
 want to avoid the GC, even if they say they do.  I believe what 
 they really want is to have an alternative to the GC.
I'd have to agree. I doubt nogc will change anything, people will just start complaining about limitations of nogc (no array concat, having to use own libraries which may be incompatible with phobos). The complaints mostly come from the fact that D wants to offer a choice; in other languages people just accept what they have. I don't see C# developers complaining much about having to use GC, or C++ programmers all over the world asking for GC. Well, most of the new games (Unity3D) are done in C# nowadays and people live with it, even though game development is one of the biggest C++ loving and GC hating crowds there is. Another issue is the quality of the D garbage collector, but adding alternative memory management ways doesn't help; it fragments the codebase.
Apr 16 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 16 April 2014 at 09:03:22 UTC, JN wrote:
 I'd have to agree. I doubt  nogc will change anything, people 
 will just start complaining about limitations of  nogc (no 
 array concat, having to use own libraries which may be 
 incompatible with phobos). The complaints mostly come from the 
 fact that D wants to offer a choice, in other languages people 
 just accept what they have.
The complaints mostly come from the fact that D claims to be a system programming language capable of competing with C/C++. Stuff like nogc, noalloc, nosyscalls etc. will make system level programming with reuse of code more manageable. You cannot compare with C#/Java, because those languages are not system level programming languages.
Apr 16 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 16 April 2014 at 09:17:48 UTC, Ola Fosheim Grøstad
wrote:
 On Wednesday, 16 April 2014 at 09:03:22 UTC, JN wrote:
 I'd have to agree. I doubt  nogc will change anything, people 
 will just start complaining about limitations of  nogc (no 
 array concat, having to use own libraries which may be 
 incompatible with phobos). The complaints mostly come from the 
 fact that D wants to offer a choice, in other languages people 
 just accept what they have.
The complaints mostly come from the fact that D claims to be a system programming language capable of competing with C/C++. Stuff like nogc, noalloc, nosyscalls etc. will make system level programming with reuse of code more manageable. You cannot compare with C#/Java, because those languages are not system level programming languages.
A system level programming language is a language that can be used to write a full stack OS, excluding the required Assembly parts. There are a few examples of research OSes written in said languages.

-- 
Paulo
Apr 16 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
JN:

 I doubt  nogc will change anything, people will just start 
 complaining about limitations of  nogc
Having a way to say "this piece of program doesn't cause heap activity" is quite useful for certain pieces of code. It makes a difference in both performance and safety. But not being able to call core.stdc.stdlib.alloca in a "@nogc pure" function sub-tree is not good.

Bye,
bearophile
Apr 16 2014
parent reply "sclytrack" <sclytrack fake.com> writes:
On Wednesday, 16 April 2014 at 10:13:06 UTC, bearophile wrote:
 JN:

 I doubt  nogc will change anything, people will just start 
 complaining about limitations of  nogc
Having a way to say "this piece of program doesn't cause heap activity" is quite useful for certain piece of code. It makes a difference in both performance and safety. But not being able to call core.stdc.stdlib.alloca in a " nogc pure" function sub-three is not good. Bye, bearophile
What about adding custom annotations that don't do any checking by themselves? Like when @nogc doesn't actually verify that the ~ is not used for strings.

void hello() require(@nogc)
{

}

Just a verification by the compiler that you use only routines that are marked with certain annotations:

void boe()
{
}

@(nasaverified)
void test()
{
}

//

void hello() require(@(nasaverified))
{
    test(); // ok
    boe();  // not ok.
}
Apr 16 2014
parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Wednesday, 16 April 2014 at 15:32:05 UTC, sclytrack wrote:
 What about adding custom annotations that don't do any checking by
 themselves? Like when @nogc doesn't actually verify that the
 ~ is not used for strings.

 void hello() require(@nogc)
 {

 }

 Just a verification by the compiler that you use only routines
 that are marked with certain annotations.

 void boe()
 {
 }

 @(nasaverified)
 void test()
 {
 }

 //

 void hello() require(@(nasaverified))
 {
   test(); // ok
   boe();  // not ok.
 }
I really REALLY like this. I can see it being rather useful, assuming it's expanded to support UDA's. Not quite sure what a use case is for it though.
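For what it's worth, something close is already expressible with today's UDA's and compile-time reflection. A sketch (hasUDA is real std.traits; nasaverified and callVerified are made up):

import std.traits : hasUDA;

struct nasaverified { }

@nasaverified void test() { }
void boe() { }

// Only call functions that carry the tag; fail at compile time otherwise.
void callVerified(fns...)()
{
    foreach (fn; fns)
    {
        static assert(hasUDA!(fn, nasaverified),
            fn.stringof ~ " is not @nasaverified");
        fn();
    }
}

void hello()
{
    callVerified!test();    // ok
    // callVerified!boe();  // would not compile: boe is not tagged
}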
Apr 16 2014
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 16 April 2014 19:03, JN via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:

 I don't believe users hesitant to use D will suddenly come to D now that
 there is a  nogc attribute.  I also don't believe they want to avoid the
 GC, even if they say they do.  I believe what they really want is to have
 an alternative to the GC.
I'd have to agree. I doubt nogc will change anything, people will just start complaining about limitations of nogc (no array concat, having to use own libraries which may be incompatible with phobos). The complaints mostly come from the fact that D wants to offer a choice; in other languages people just accept what they have. I don't see C# developers complaining much about having to use GC, or C++ programmers all over the world asking for GC. Well, most of the new games (Unity3D) are done in C# nowadays and people live with it, even though game development is one of the biggest C++ loving and GC hating crowds there is. Another issue is the quality of the D garbage collector, but adding alternative memory management ways doesn't help; it fragments the codebase.
I don't really have an opinion on nogc, but I feel like I'm one of the people that definitely should. I agree with these comments somewhat, though. I have as big a GC-phobia as anyone, but I have never said the proper strategy is to get rid of it, and I'm not sure how helpful nogc is.

I don't *mind* the idea of a nogc attribute; I do like the idea that I may have confidence some call tree doesn't invoke the GC, but I can't say I'm wildly excited about this change. I'm not sure about the larger implications for the language, or what this will do to code at large. I'm not yet sure how annoying I'll find typing it everywhere, and whether that's a worthwhile tradeoff. I have a short list of things I've been dying for in D for years, and this is not on it. Nowhere near. (rvalue temp -> ref args pleeeease! Linear algebra in D really sucks!!)

The thing is, this doesn't address the problem. I *want* to like the GC... I want a GC that is acceptable. I am convinced that ARC would be acceptable, and I've never heard anyone suggest any proposal/fantasy/imaginary GC implementation that would be acceptable... In complete absence of a path towards an acceptable GC implementation, I'd prefer to see people that know what they're talking about explore how refcounting could be used instead.

GC backed ARC sounds like it would acceptably automate the circular reference catching that people fuss about, while still providing a workable solution for embedded/realtime users: disable (or don't link) the backing GC, and make sure you mark weak references properly. That would make this whole effort redundant, because there would be no fear of call trees causing a surprise collect under that environment. Most importantly, it maintains compatibility with phobos and all other libs. It doesn't force realtime/embedded users into their own little underground world where they have nogc everywhere and totally different allocation API's than the rest of the D universe, producing endless problems interacting with libraries. These are antiquated problems we've suffered in C++ for decades that I _really_ don't want to see transfer into D.

I'd like to suggest the experts either imagine/invent/design a GC that is acceptable to the realtime/embedded crowd (seriously, can anyone even _imagine_ a feasible solution in D? I can't, but I'm not an expert by any measure), or take ARC seriously and work out how it can be implemented: what are the hurdles, and are they surmountable? Is there room for an experimental fork?
Apr 16 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 16 April 2014 at 11:51:07 UTC, Manu via 
Digitalmars-d wrote:
 On 16 April 2014 19:03, JN via Digitalmars-d 
 <digitalmars-d puremagic.com>wrote:

 On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:

 I don't believe users hesitant to use D will suddenly come to 
 D now that
 there is a  nogc attribute.  I also don't believe they want 
 to avoid the
 GC, even if they say they do.  I believe what they really 
 want is to have
 an alternative to the GC.
I'd have to agree. I doubt nogc will change anything, people will just start complaining about limitations of nogc (no array concat, having to use own libraries which may be incompatible with phobos). The complaints mostly come from the fact that D wants to offer a choice; in other languages people just accept what they have. I don't see C# developers complaining much about having to use GC, or C++ programmers all over the world asking for GC. Well, most of the new games (Unity3D) are done in C# nowadays and people live with it, even though game development is one of the biggest C++ loving and GC hating crowds there is. Another issue is the quality of the D garbage collector, but adding alternative memory management ways doesn't help; it fragments the codebase.
I don't really have an opinion on nogc, but I feel like I'm one of the people that definitely should. I agree with these comments somewhat though. I have as big a GC-phobia as anyone, but I have never said the proper strategy is to get rid of it, and I'm not sure how helpful nogc is. I don't *mind* the idea of a nogc attribute; I do like the idea that I may have confidence some call tree doesn't invoke the GC, but I can't say I'm wildly excited about this change. I'm not sure about the larger implications for the language, or what the result of this will do to code at large. I'm not yet sure how annoying I'll find typing it everywhere, and whether that's a worthwhile tradeoff. I have a short list of things I'm dying for in D for years, and this is not on it. Nowhere near. (rvalue temp -> ref args pleeeease! Linear algebra in D really sucks!!) The thing is, this doesn't address the problem. I *want* to like the GC... I want a GC that is acceptable. I am convinced that ARC would be acceptable, and I've never heard anyone suggest any proposal/fantasy/imaginary GC implementation that would be acceptable... In complete absence of a path towards an acceptable GC implementation, I'd prefer to see people that know what they're talking about explore how refcounting could be used instead. GC backed ARC sounds like it would acceptably automate the circular reference catching that people fuss about, while still providing a workable solution for embedded/realtime users; disable(/don't link) the backing GC, make sure you mark weak references properly. That would make this whole effort redundant because there would be no fear of call trees causing a surprise collect under that environment. Most importantly, it maintains compatibility with phobos and all other libs. It doesn't force realtime/embedded users into their own little underground world where they have nogc everywhere and totally different allocation API's than the rest of the D universe, producing endless problems interacting with libraries. These are antiquated problems we've suffered in C++ for decades that I _really_ don't want to see transfer into D. I'd like to suggest experts either, imagine/invent/design a GC that is acceptable to the realtime/embedded crowd (seriously, can anyone even _imagine_ a feasible solution in D? I can't, but I'm not an expert by any measure), or take ARC seriously and work out how it can be implemented; what are the hurdles, are they surmountable? Is there room for an experimental fork?
AAA games of course. http://tirania.org/blog/archive/2014/Apr-14.html -- Paulo
Apr 16 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 4:50 AM, Manu via Digitalmars-d wrote:
 I am convinced that ARC would be acceptable,
ARC has very serious problems with bloat and performance. Every time a copy is made of a pointer, the ref count must be dealt with, engendering bloat and slowdown. C++ deals with this by providing all kinds of ways to bypass doing this, but the trouble is that such bypassing is totally unsafe. Further problems with ARC are the inability to mix ARC references with non-ARC references, seriously hampering generic code.
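(To see where the incs and decs come from, here is a minimal counted-reference sketch; illustrative only, not any real implementation:)

// Every copy runs the postblit (inc); every scope exit runs the dtor (dec).
struct Ref(T)
{
    private T* ptr;
    private size_t* count;

    this(this)                       // runs on *every* copy of the reference
    {
        if (count) ++*count;         // the inc
    }

    ~this()
    {
        if (count && --*count == 0)  // the dec, plus a branch
        {
            destroy(*ptr);
            // a real implementation would also free ptr and count here
        }
    }
}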
 and I've never heard anyone suggest
 any proposal/fantasy/imaginary GC implementation that would be acceptable...
Exactly.
 In complete absence of a path towards an acceptable GC implementation, I'd
 prefer to see people that know what they're talking about explore how
 refcounting could be used instead.
 GC backed ARC sounds like it would acceptably automate the circular reference
 catching that people fuss about, while still providing a workable solution for
 embedded/realtime users; disable(/don't link) the backing GC, make sure you
mark
 weak references properly.
I have, and I've worked with a couple others here on it, and have completely failed at coming up with a workable, safe, non-bloated, performant way of doing pervasive ARC.
Apr 16 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 03:37, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 4/16/2014 4:50 AM, Manu via Digitalmars-d wrote:

 I am convinced that ARC would be acceptable,
ARC has very serious problems with bloat and performance.
This is the first I've heard of it, and I've been going on about it for ages.

 Every time a copy is made of a pointer, the ref count must be dealt with,
 engendering bloat and slowdown. C++ deals with this by providing all kinds
 of ways to bypass doing this, but the trouble is that such bypassing is totally unsafe.
Obviously, a critical part of ARC is the compiler's ability to reduce redundant inc/dec sequences, at which point your 'every time' assertion is false. C++ can't do ARC, so it's not comparable. With proper elimination, transferring ownership results in no cost, only duplication/destruction, and those are moments where I've deliberately committed to creation/destruction of an instance of something, at which point I'm happy to pay for an inc/dec; creation/destruction are rarely high-frequency operations.

Have you measured the impact? I can say that in realtime code and embedded code in general, I'd be much happier to pay a regular inc/dec cost (a known, constant quantity) than commit to unknown costs at unknown times. I've never heard of Obj-C users complaining about the inc/dec costs. If an inc/dec becomes a limiting factor in hot loops, there are lots of things you can do to eliminate them from your loops. I just don't buy that this is a significant performance penalty, but I can't say that experimentally... can you?

How often does ref fiddling occur in reality? My guess is that with redundancy elimination it would be surprisingly rare and insignificant. I can imagine that I would be happy with this known, controlled, and controllable cost. It comes with a whole bunch of benefits for realtime/embedded use (immediate destruction, works in little-to-no-free-memory environments, predictable costs, etc).

 Further problems with ARC are the inability to mix ARC references with non-ARC
 references, seriously hampering generic code.
That's why the only workable solution is that all references are ARC references. The obvious complication is reconciling malloc pointers, but I'm sure this can be addressed with some creativity.

I imagine it would look something like this (see the fuller sketch at the end of this post). By default, pointers are fat:

struct ref { void* ptr; ref_t* rc; }

malloc pointers could conceivably just have a null entry for 'rc' and therefore interact comfortably with rc pointers. I imagine that a 'raw-pointer' type would be required to refer to a thin pointer. Raw pointers would implicitly cast to fat pointers, and a fat->thin cast may throw if the fat pointer's rc is non-null, or be a compile error if it can be known at compile time.

Perhaps a solution is possible where an explicit rc record is not required (such that all pointers remain 'thin' pointers)... A clever hash of the pointer itself can look up the rc? Perhaps the rc can be found at ptr[-1]? But then how do you know if the pointer is rc allocated or not? An unlikely sentinel value at ptr[-1]? Perhaps the virtual memory page can imply whether pointers allocated in that region are ref counted or not? Some clever method of assigning the virtual address space so that recognition of rc memory can amount to testing a couple of bits in pointers?

I'm just making things up, but my point is, there are lots of creative possibilities, and I have never seen any work to properly explore the options.

 and I've never heard anyone suggest
 any proposal/fantasy/imaginary GC implementation that would be
 acceptable...
Exactly.

So then consider ARC seriously. If it can't work, articulate why. I still don't know; nobody has told me. It works well in other languages, and as far as I can tell, it has the potential to produce acceptable results for _all_ D users. iOS is a competent realtime platform; Apple are well known for their commitment to silky-smooth, jitter-free UI and general feel. Android, on the other hand, is a perfect example of why GC is not acceptable.
So then consider ARC seriously. If it can't work, articulate why. I still don't know, nobody has told me. It works well in other languages, and as far as I can tell, it has the potential to produce acceptable results for _all_ D users. iOS is a competent realtime platform, Apple are well known for their commitment to silky-smooth, jitter-free UI and general feel. Android on the other hand is a perfect example of why GC is not acceptable. In complete absence of a path towards an acceptable GC implementation, I'd
 prefer to see people that know what they're talking about explore how
 refcounting could be used instead.
 GC backed ARC sounds like it would acceptably automate the circular
 reference
 catching that people fuss about, while still providing a workable
 solution for
 embedded/realtime users; disable(/don't link) the backing GC, make sure
 you mark
 weak references properly.
I have, and I've worked with a couple others here on it, and have completely failed at coming up with a workable, safe, non-bloated, performant way of doing pervasive ARC.
Okay. Where can I read about that? It doesn't seem to have surfaced; at least, it was never presented in response to my many instances of raising the topic. What are the impasses?

I'm very worried about this. ARC is the only imaginary solution I have left. In lieu of that, we make a long-term commitment to a total fracturing of memory allocation techniques, just like C++ today, where interaction between libraries is always a massive pain in the arse. It's one of the most painful things about C/C++, and perhaps one of the primary causes of incompatibility between libraries and frameworks. This will transfer into D, but it's much worse in D because of the relatively high number of implicit allocations ('~', closures, etc). Frameworks and libraries become incompatible with each other, which is a problem that GC-based languages don't suffer.

My feeling is that if D doesn't transcend these fundamental troubles we wrestle with in C++, then D is a stepping stone rather than a salvation. nogc, while seemingly simple and non-destructive, feels kinda like a commitment, or at least an acceptance, of fracturing allocation paradigms between codebases. Like I said before, I kinda like the idea of nogc, but I'm seriously concerned about what it implies...
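To flesh out the fat-pointer idea from above, a minimal sketch (FatPtr and everything in it is invented):

// Sketch of the fat-pointer idea; a null rc means "not ref counted".
struct FatPtr(T)
{
    T* ptr;
    size_t* rc;               // null for malloc'd / non-counted memory

    // thin -> fat: wrap a raw pointer with no ref count
    this(T* raw) { ptr = raw; rc = null; }

    // fat -> thin: only legal when the memory isn't ref counted
    T* toThin()
    {
        if (rc !is null)
            throw new Exception("raw pointer would escape ref counting");
        return ptr;
    }
}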
Apr 16 2014
next sibling parent "Ola Fosheim Grøstad" writes:
On Thursday, 17 April 2014 at 03:14:21 UTC, Manu via 
Digitalmars-d wrote:
 Obviously, a critical part of ARC is the compilers ability to 
 reduce
 redundant inc/dec sequences.
You need whole-program optimization to do this well, which I am strongly in favour of, btw.
 I've never heard of Obj-C users complaining about the inc/dec 
 costs.
Obj-C has lots of overhead.
 Further problems with ARC are the inability to mix ARC references
 with non-ARC
 references, seriously hampering generic code.
That's why the only workable solution is that all references are ARC references.
I never understood why you cannot mix. If your module owns a shared object you should be able to use regular pointers from that module.
 So then consider ARC seriously. If it can't work, articulate 
 why.
It can work if the language is designed for it, and code is written to enable optimizations. IMHO you need a separate layer to enable compile-time proofs if you want safe and efficient system level programming. A bit more than safe, pure, etc.
 iOS is a competent realtime platform, Apple are well known for 
 their
 commitment to silky-smooth, jitter-free UI and general feel.
Foundational libraries do not use ARC? Only higher level stuff?

Ola
Apr 16 2014
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-17 03:13:48 +0000, Manu via Digitalmars-d 
<digitalmars-d puremagic.com> said:

 Obviously, a critical part of ARC is the compilers ability to reduce
 redundant inc/dec sequences. At which point your 'every time' assertion is
 false. C++ can't do ARC, so it's not comparable.
 With proper elimination, transferring ownership results in no cost, only
 duplication/destruction, and those are moments where I've deliberately
 committed to creation/destruction of an instance of something, at which
 point I'm happy to pay for an inc/dec; creation/destruction are rarely
 high-frequency operations.
You're right that transferring ownership does not cost with ARC. What costs you is return values and temporary local variables.

While it's nice to have a compiler that'll elide redundant retain/release pairs, function boundaries can often make this difficult. Take this first example:

Object globalObject;

Object getObject()
{
    return globalObject; // implicit: retain(globalObject)
}

void main()
{
    auto object = getObject();
    writeln(object);
    // implicit: release(object)
}

It might not be obvious, but here the getObject function *has to* increment the reference count by one before returning. There's no other convention that'll work, because another implementation of getObject might return a temporary object. Then, at the end of main, globalObject's reference counter is decremented. Only if getObject gets inlined can the compiler detect that the increment/decrement cycle is unnecessary.

But wait! If writeln isn't pure (and surely it isn't), then it might change the value of globalObject (you never know what's in Object.toString, right?), which will in turn release object. So main *has to* increment the reference counter if it wants to make sure its local variable object is valid until the end of the writeln call. Can't elide here.

Let's take this other example:

Object globalObject;
Object otherGlobalObject;

void main()
{
    auto object = globalObject; // implicit: retain(globalObject)
    foo(object);
    // implicit: release(object)
}

Here you can elide the increment/decrement cycle *only if* foo is pure. If foo is not pure, then it might set another value to globalObject (you never know, right?), which will decrement the reference count and leave the "object" variable in main the sole owner of the object. Alternatively, if foo is not pure but instead gets inlined, it might be provable that it does not touch globalObject, and elision might become a possibility.

I think ARC needs to be practical without elision of redundant calls. It's a good optimization, but a difficult one unless everything is inlined. Many such elisions that would appear to be safe at first glance aren't provably safe for the compiler because of function calls.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 17 2014
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 22:28, Michel Fortin via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 2014-04-17 03:13:48 +0000, Manu via Digitalmars-d <
 digitalmars-d puremagic.com> said:

  Obviously, a critical part of ARC is the compilers ability to reduce
 redundant inc/dec sequences. At which point your 'every time' assertion is
 false. C++ can't do ARC, so it's not comparable.
 With proper elimination, transferring ownership results in no cost, only
 duplication/destruction, and those are moments where I've deliberately
 committed to creation/destruction of an instance of something, at which
 point I'm happy to pay for an inc/dec; creation/destruction are rarely
 high-frequency operations.
You're right that transferring ownership does not cost with ARC. What costs you is return values and temporary local variables.
Why would they cost? If a function receives a reference, it will equally release it on return; I don't see why a ref should be bumped to pass it to a function. Return values I can see, because return values are effectively copying assignments. But if the assignment is to a local, then the close of scope implies a dec, which would again cancel out.

 While it's nice to have a compiler that'll elide redundant retain/release
 pairs, function boundaries can often makes this difficult. Take this first
 example:

         Object globalObject;

         Object getObject()
         {
                 return globalObject; // implicit: retain(globalObject)
         }

         void main()
         {
                 auto object = getObject();
                 writeln(object);
                 // implicit: release(object)
         }

 It might not be obvious, but here the getObject function *has to*
 increment the reference count by one before returning. There's no other
 convention that'll work because another implementation of getObject might
 return a temporary object. Then, at the end of main, globalObject's
 reference counter is decremented. Only if getObject gets inlined can the
 compiler detect the increment/decrement cycle is unnecessary.
Well in most cases of accessors like this, it would inline properly. It's a fairly reliable rule that, if a function is not an inline candidate, it is probably also highly unlikely to appear in a hot loop. I don't follow why it needs to retain before returning though. It would seem that it should retain upon assignment after returning (making it similar to the situation below). Nothing can interfere with the refcount before and after the function returns. But wait! If writeln isn't pure (and surely it isn't), then it might change
 the value of globalObject (you never know what's in Object.toString,
 right?), which will in turn release object. So main *has to* increment the
 reference counter if it wants to make sure its local variable object is
 valid until the end of the writeln call. Can't elide here.

 Let's take this other example:

         Object globalObject;
         Object otherGlobalObject;

         void main()
         {
                 auto object = globalObject; // implicit:
 retain(globalObject)
                 foo(object);
                 // implicit: release(object)
         }

 Here you can elide the increment/decrement cycle *only if* foo is pure. If
 foo is not pure, then it might set another value to globalObject (you never
 know, right?), which will decrement the reference count and leave the
 "object" variable in main the sole owner of the object. Alternatively, if
 foo is not pure but instead gets inlined it might be provable that it does
 not touch globalObject, and elision might become a possibility.
Sure, there is potential that certain bits of code between the retain/release can break the ability to eliminate the pair, but that's why I think D has an advantage here over other languages, like Obj-C for instance. D has so much more richness in the type system which can assist here. I'm pretty confident that D would offer much better results than existing implementations. I think ARC needs to be practical without eliding of redundant calls. It's
 a good optimization, but a difficult one unless everything is inlined. Many
 such elisions that would appear to be safe at first glance aren't provably
 safe for the compiler because of function calls.
I'm very familiar with this class of problem. I have spent much of my career dealing with precisely this class of problem. __restrict addresses the exact same problem with raw pointers in C, and programmers understand the issue, and know how to work around it when it appears in hot loops. D has some significant advantages that other ARC languages don't have though. D's module system makes inlining much more reliable than C/C++ for instance, pure is an important part of D, and people do use it liberally.
Apr 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 8:13 PM, Manu via Digitalmars-d wrote:
 On 17 April 2014 03:37, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:
     ARC has very serious problems with bloat and performance.
 This is the first I've heard of it, and I've been going on about it for ages.
Consider two points:

1. I can't think of any performant ARC systems.

2. Java would be a relatively easy language to implement ARC in. There's probably a billion dollars invested in Java's GC. Why not ARC?
 Obviously, a critical part of ARC is the compilers ability to reduce redundant
 inc/dec sequences. At which point your 'every time' assertion is false. C++
 can't do ARC, so it's not comparable.
C++ has shared_ptr, with all kinds of escapes.
 With proper elimination, transferring ownership results in no cost, only
 duplication/destruction, and those are moments where I've deliberately
committed
 to creation/destruction of an instance of something, at which point I'm happy
to
 pay for an inc/dec; creation/destruction are rarely high-frequency operations.
inc/dec isn't as cheap as you imply. The dec usually requires the creation of an exception handling unwinder to do it.
 Have you measured the impact?
No. I don't really know how I could, as I haven't seen an ARC system.
 I've never heard of Obj-C users complaining about the inc/dec costs.
Obj-C only uses ARC for a minority of the objects.
 How often does ref fiddling occur in reality? My guess is that with redundancy
 elimination, it would be surprisingly rare, and insignificant.
Yes, I would be surprised.
     Further problems with ARC are the inability to mix ARC references with non-ARC
     references, seriously hampering generic code.
 That's why the only workable solution is that all references are ARC
references.
 The obvious complication is reconciling malloc pointers, but I'm sure this can
 be addressed with some creativity.

 I imagine it would look something like:
 By default, pointers are fat: struct ref { void* ptr, ref_t* rc; }
First off, now pointers are 24 bytes in size. Secondly, every pointer dereference becomes two dereferences (not so good for cache performance).
 malloc pointers could conceivably just have a null entry for 'rc' and therefore
 interact comfortably with rc pointers.
 I imagine that a 'raw-pointer' type would be required to refer to a thin
 pointer. Raw pointers would implicitly cast to fat pointers, and a fat->thin
 casts may throw if the fat pointer's rc is non-null, or compile error if it can
 be known at compile time.
Now we throw in a null check and branch for pointer operations.
 Perhaps a solution is possible where an explicit rc record is not required
(such
 that all pointers remain 'thin' pointers)...
 A clever hash of the pointer itself can look up the rc?
 Perhaps the rc can be found at ptr[-1]? But then how do you know if the pointer
 is rc allocated or not? An unlikely sentinel value at ptr[-1]? Perhaps the
 virtual memory page can imply whether pointers allocated in that region are ref
 counted or not? Some clever method of assigning the virtual address space so
 that recognition of rc memory can amount to testing a couple of bits in
pointers?

 I'm just making things up,
Yes.
 but my point is, there are lots of creative
 possibilities, and I have never seen any work to properly explore the options.
ARC has been known about for many decades. If you haven't seen it "properly explored", perhaps it isn't as simple and cost-effective as it may appear at first blush.
 So then consider ARC seriously. If it can't work, articulate why. I still don't
 know, nobody has told me.
 It works well in other languages, and as far as I can tell, it has the
potential
 to produce acceptable results for _all_ D users.
What other languages?
 iOS is a competent realtime platform, Apple are well known for their commitment
 to silky-smooth, jitter-free UI and general feel.
A UI is a good use case for ARC. A UI doesn't require high performance.
 Okay. Where can I read about that? It doesn't seem to have surfaced, at least,
 it was never presented in response to my many instances of raising the topic.
 What are the impasses?
I'd have to go look to find the thread. The impasses were as I pointed out here.
 I'm very worried about this. ARC is the only imaginary solution I have left. In
 lieu of that, we make a long-term commitment to a total fracturing of memory
 allocation techniques, just like C++ today where interaction between libraries
 is always a massive pain in the arse. It's one of the most painful things about
 C/C++, and perhaps one of the primary causes of incompatibility between
 libraries and frameworks. This will transfer into D, but it's much worse in D
 because of the relatively high number of implicit allocations ('~', closures,
etc).
There are only about 3 cases of implicit allocation in D, all easily avoided, and with nogc they'll be trivial to avoid. It is not "much worse".
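(For reference, a quick sketch of those cases; each commented line would be rejected under nogc:)

// The usual implicit GC allocations; each is a compile error under @nogc.
@nogc void f(int[] a, int[] b)
{
    // auto c = a ~ b;                     // array concatenation allocates
    // int[] d = [1, 2, 3];                // array literal allocates
    // auto g = () => a.length + b.length; // closure over locals allocates
}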
 Frameworks and libraries become incompatible with each other, which is a
 problem that GC-based languages don't suffer.

A GC makes libraries compatible with each other, which is one reason why GCs are very popular.
Apr 17 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 04:35:34 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/16/2014 8:13 PM, Manu via Digitalmars-d wrote:
 I've never heard of Obj-C users complaining about the inc/dec costs.
Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.
 iOS is a competent realtime platform, Apple are well known for their  
 commitment
 to silky-smooth, jitter-free UI and general feel.
A UI is a good use case for ARC. A UI doesn't require high performance.
I've written video processing/players on iOS; they all use blocks and reference counting, including doing date/time processing per frame, all while using RC network buffers. And it works quite smoothly.

-Steve
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:
 Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.
And what about all allocated items?
 A UI is a good use case for ARC. A UI doesn't require high performance.
I've written video processing/players on iOS, they all use blocks and reference counting, including to do date/time processing per frame. All while using RC network buffers. And it works quite smoothly.
And did you use ref counting for all allocations and all pointers? There's no doubt that ref counting can be used successfully here and there, with a competent programmer knowing when he can just convert it to a raw pointer and use that. It's another thing entirely to use ref counting for ALL pointers. And remember that if you have exceptions, then all the dec code needs to be in exception unwind handlers.
Apr 17 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:
 Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.
And what about all allocated items?
What do you mean?
 A UI is a good use case for ARC. A UI doesn't require high performance.
I've written video processing/players on iOS, they all use blocks and reference counting, including to do date/time processing per frame. All while using RC network buffers. And it works quite smoothly.
And did you use ref counting for all allocations and all pointers?
Yes.
 There's no doubt that ref counting can be used successfully here and  
 there, with a competent programmer knowing when he can just convert it  
 to a raw pointer and use that.
The compiler treats pointers to NSObject-derived objects differently than pointers to structs and raw bytes. There is no need to know; you just use them like normal pointers, and the compiler inserts the retain/release calls for you. But I hardly used structs; I only used them for network packet overlays, and even then I created an object that contained the struct to enjoy the benefits of the memory management system.
 And remember that if you have exceptions, then all the dec code needs to  
 be in exception unwind handlers.
I haven't really used exceptions, but they automatically handle the reference counting. -Steve
Apr 17 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 12:41 PM, Steven Schveighoffer wrote:
 On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:

 On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:
 Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.
And what about all allocated items?
What do you mean?
Can you call malloc() ?
 A UI is a good use case for ARC. A UI doesn't require high performance.
I've written video processing/players on iOS, they all use blocks and reference counting, including to do date/time processing per frame. All while using RC network buffers. And it works quite smoothly.
And did you use ref counting for all allocations and all pointers?
Yes.
You never used malloc? For anything? Or stack allocated anything? Or had any pointers to anything that weren't ref counted? How did that work for printf?
 There's no doubt that ref counting can be used successfully here and there,
 with a competent programmer knowing when he can just convert it to a raw
 pointer and use that.
The compiler treats pointers to NSObject-derived differently than pointers to structs and raw bytes.
So there *are* regular pointers.
 There is no need to know, you just use them like normal
 pointers, and the compiler inserts the retain/release calls for you.
I know that with ARC the compiler inserts the code for you. That doesn't make it costless.
 But I did not use structs. I only used structs for network packet overlays. I
 still created an object that contained the struct to enjoy the benefits of the
 memory management system.

 And remember that if you have exceptions, then all the dec code needs to be in
 exception unwind handlers.
I haven't really used exceptions, but they automatically handle the reference counting.
I know it's done automatically. But you might be horrified at what the generated code looks like.
Apr 17 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 15:55:10 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 12:41 PM, Steven Schveighoffer wrote:
 On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:

 On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:
 Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.
And what about all allocated items?
What do you mean?
Can you call malloc() ?
Of course. And then I can wrap it in NSData or NSMutableData.
 A UI is a good use case for ARC. A UI doesn't require high  
 performance.
I've written video processing/players on iOS, they all use blocks and reference counting, including to do date/time processing per frame. All while using RC network buffers. And it works quite smoothly.
And did you use ref counting for all allocations and all pointers?
Yes.
You never used malloc? for anything? or stack allocated anything? or had any pointers to anything that weren't ref counted? How did that work for printf?
I didn't exactly use printf; iOS has no console. NSLog logs to the Xcode console, and that works great. But we used FILE * plenty, and I've had no problems.
 There's no doubt that ref counting can be used successfully here and  
 there,
 with a competent programmer knowing when he can just convert it to a  
 raw
 pointer and use that.
The compiler treats pointers to NSObject-derived differently than pointers to structs and raw bytes.
So there *are* regular pointers.
Of course, all C code is valid Objective-C code.
 There is no need to know, you just use them like normal
 pointers, and the compiler inserts the retain/release calls for you.
I know that with ARC the compiler inserts the code for you. That doesn't make it costless.
I'm not saying it's costless. I'm saying the cost is something I didn't notice performance-wise. But my point is, pointers are pointers. I use them the same whether they are ARC pointers or normal pointers (they are even declared the same way), but the compiler treats them differently.
 And remember that if you have exceptions, then all the dec code needs  
 to be in
 exception unwind handlers.
I haven't really used exceptions, but they automatically handle the reference counting.
I know it's done automatically. But you might be horrified at what the generated code looks like.
Perhaps a reason to avoid exceptions :) I generally do anyway, even in D. -Steve
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 1:30 PM, Steven Schveighoffer wrote:
 I'm not saying it's costless. I'm saying the cost is something I didn't notice
 performance-wise.
You won't with FILE*, as it is overwhelmed by file I/O times. Same with UI objects.
Apr 17 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 16:47:04 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 1:30 PM, Steven Schveighoffer wrote:
 I'm not saying it's costless. I'm saying the cost is something I didn't  
 notice
 performance-wise.
You won't with FILE*, as it is overwhelmed by file I/O times. Same with UI objects.
OK, you beat it out of me. I admit, when I said "Video processing/players with network capability" I meant all FILE * I/O, and really nothing to do with video processing or networking. -Steve
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:
 OK, you beat it out of me. I admit, when I said "Video processing/players with
 network capability" I meant all FILE * I/O, and really nothing to do with video
 processing or networking.
I would expect that with a video processor, you aren't dealing with ARC references inside the routine actually doing the work.
Apr 17 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 18:08:43 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:
 OK, you beat it out of me. I admit, when I said "Video  
 processing/players with
 network capability" I meant all FILE * I/O, and really nothing to do  
 with video
 processing or networking.
I would expect that with a video processor, you aren't dealing with ARC references inside the routine actually doing the work.
Obviously, if you are dealing with raw data, you are not using ARC while accessing the data. But you are using ARC to get a reference to that data. For instance, you might see:

    -(void)processVideoData:(NSData *)data
    {
        unsigned char *vdata = data.data;
        // process vdata
        ...
    }

During the entire processing, you never increment/decrement a reference count, because the caller will have passed data to you with an incremented count.

Just because ARC protects the data, doesn't mean you need to constantly and needlessly increment/decrement references. If you know the data won't go away while you are using it, you can just ignore the reference counting aspect.

-Steve
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:
 During the entire processing, you never increment/decrement a reference count,
 because the caller will have passed data to you with an incremented count.

 Just because ARC protects the data, doesn't mean you need to constantly and
 needlessly increment/decrement references. If you know the data won't go away
 while you are using it, you can just ignore the reference counting aspect.
The salient point there is "if you know". If you are doing it, it is not guaranteed memory safe by the compiler. If the compiler is doing it, how does it know? You really are doing *manual*, not automatic, ARC here, because you are making decisions about when ARC can be skipped, and you must make those decisions in order to have it run at a reasonable speed.
Apr 17 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Apr 17, 2014 at 03:52:10PM -0700, Walter Bright via Digitalmars-d wrote:
 On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:
During the entire processing, you never increment/decrement a
reference count, because the caller will have passed data to you with
an incremented count.

Just because ARC protects the data, doesn't mean you need to
constantly and needlessly increment/decrement references. If you know
the data won't go away while you are using it, you can just ignore
the reference counting aspect.
The salient point there is "if you know". If you are doing it, it is not guaranteed memory safe by the compiler. If the compiler is doing it, how does it know? You really are doing *manual*, not automatic, ARC here, because you are making decisions about when ARC can be skipped, and you must make those decisions in order to have it run at a reasonable speed.
I thought that whole point of *A*RC is for the compiler to know when ref count updates can be skipped? Or are you saying this is algorithmically undecidable in the compiler? T -- "You are a very disagreeable person." "NO."
Apr 17 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 5:09 PM, H. S. Teoh via Digitalmars-d wrote:
 I thought that whole point of *A*RC is for the compiler to know when ref
 count updates can be skipped? Or are you saying this is algorithmically
 undecidable in the compiler?
I don't think anyone has produced a "sufficiently smart compiler" in that regard.
Apr 17 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Apr 17, 2014 at 05:34:34PM -0700, Walter Bright via Digitalmars-d wrote:
 On 4/17/2014 5:09 PM, H. S. Teoh via Digitalmars-d wrote:
I thought that whole point of *A*RC is for the compiler to know when
ref count updates can be skipped? Or are you saying this is
algorithmically undecidable in the compiler?
I don't think anyone has produced a "sufficiently smart compiler" in that regard.
So what are some optimizations that compilers *are* currently able to do, and what currently isn't done? T -- Spaghetti code may be tangly, but lasagna code is just cheesy.
Apr 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 7:24 PM, H. S. Teoh via Digitalmars-d wrote:
 So what are some optimizations that compilers *are* currently able to
 do, and what currently isn't done?
Just look at all the problems with "escape analysis" being done with compilers.
Apr 17 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 00:11:28 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I thought that whole point of *A*RC is for the compiler to know 
 when ref
 count updates can be skipped? Or are you saying this is 
 algorithmically
 undecidable in the compiler?
Multithreading causes major problems. A function owns the array passed as a parameter, no ref counting needed, but if another thread is deleting objects in the array then you cannot assume that ownership is transitive and will have to inc/dec every object you look at. If it is thread local then ownership is transitive and no inc/decs are needed in the function...? But how can you let the compiler know that you have protected the array so only one thread will take processing-ownership during the life span of the function call?
Apr 17 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 06:16:29 UTC, Ola Fosheim Grøstad 
wrote:
 But how can you let the compiler know that you have protected 
 the array so only one thread will take processing-ownership 
 during the life span of the function call?
Btw, Rust apparently uses ARC for immutable data only.
Apr 18 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 April 2014 16:16, via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Friday, 18 April 2014 at 00:11:28 UTC, H. S. Teoh via Digitalmars-d
 wrote:

 I thought that whole point of *A*RC is for the compiler to know when ref
 count updates can be skipped? Or are you saying this is algorithmically
 undecidable in the compiler?
Multithreading causes major problems. A function owns the array passed as a parameter, no ref counting needed, but if another thread is deleting objects in the array then you cannot assume that ownership is transitive and will have to inc/dec every object you look at. If it is thread local then ownership is transitive and no inc/decs are needed in the function...? But how can you let the compiler know that you have protected the array so only one thread will take processing-ownership during the life span of the function call?
D pointers are thread-local by default, you need to mark things 'shared' explicitly if they are to be passed between threads. This is one of the great advantages D has over C/C++/Obj-C.
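A minimal D illustration of that default (the marking lives in the type system; worker and counter are just for show):

    import std.concurrency;

    shared int counter; // explicitly marked: may be touched by any thread
    int tlsCounter;     // unmarked module variable: one copy per thread

    void worker(shared(int)* p) { }

    void main()
    {
        spawn(&worker, &counter);       // ok: shared(int)* may cross threads
        // spawn(&worker, &tlsCounter); // error: int* points at thread-local
        //                              // data, does not convert to shared(int)*
    }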
Apr 18 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 10:06:35 UTC, Manu via Digitalmars-d 
wrote:
 D pointers are thread-local by default, you need to mark things 
 'shared'
 explicitly if they are to be passed between threads. This is 
 one of the
 great advantages D has over C/C++/Obj-C.
TLS sucks, as you get extra indirections if it isn't implemented using an MMU (typically not, because the extra page tables would cost too much?). Besides, when I want globals it is for database-like structures, so I usually want them shared anyway. There is a reason why Rust only uses ARC for immutable data. But that could be so useful that it might be worth it, e.g. for caches.

For efficient ARC I think you would need the equivalent of isolated threads and forbid pointers to internal data, so that you can maintain ref-counting at the start of the object.

Weak references are still an issue though. It takes extra effort. You either need a separate object, or you need a list of weak references to clear, or you will have to "realloc" the object (freeing the data segment, but keeping the refcounting head, causing memory fragmentation), or some other "ugly" scheme. So you want whole program analysis to figure out for which objects you don't have to deal with weak references…

Ola.
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 4:58 AM, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 TLS sucks, as you get extra indirections if it isn't implemented using a MMU
 (typically not because the extra page tables would cost too much?).
You shouldn't be using global data anyway. Most TLS references will be to the heap or stack, which has no extra cost.
Apr 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 23:54:52 UTC, Walter Bright wrote:
 You shouldn't be using global data anyway.
Why not? LUTs and indexes that are global save you a register.
Apr 18 2014
prev sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 18 April 2014 at 10:06:35 UTC, Manu via Digitalmars-d 
wrote:
 D pointers are thread-local by default, you need to mark things 
 'shared'
 explicitly if they are to be passed between threads. This is 
 one of the
 great advantages D has over C/C++/Obj-C.
There's nothing special about pointers in D. You can pass them between threads however you want. The type system has some constraints that you can *choose* to use/abuse/obey/disobey, but they definitely aren't in thread local storage unless they are global or static variables.
Apr 18 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 17 Apr 2014 18:52:10 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:
 During the entire processing, you never increment/decrement a reference  
 count,
 because the caller will have passed data to you with an incremented  
 count.

 Just because ARC protects the data, doesn't mean you need to constantly  
 and
 needlessly increment/decrement references. If you know the data won't  
 go away
 while you are using it, you can just ignore the reference counting  
 aspect.
The salient point there is "if you know". If you are doing it, it is not guaranteed memory safe by the compiler. If the compiler is doing it, how does it know?
When I said you, I misspoke. I meant the compiler. If it isn't sure, it increments the count. But any objects passed into a function are already incremented. Basically it's like a mutex lock. You only need to increment at the most outer level of where you are using it. This idea that every time you pass around a variable you need to adjust counts is just not true. This is not shared_ptr.
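To make that concrete, here is a small D sketch of the borrowing rule being described. The retain/release functions are stand-ins for the calls an ARC compiler would insert automatically, not something you would write yourself:

    class Data { int refs = 1; }

    // Stand-ins for compiler-inserted ARC operations.
    void retain(Data d)  { d.refs++; }
    void release(Data d) { if (--d.refs == 0) { /* reclaim d */ } }

    void process(Data d)
    {
        // Nothing inserted here: the caller's counted reference
        // already keeps d alive for the duration of the call.
    }

    void caller()
    {
        auto d = new Data; // refs == 1 on creation
        process(d);        // borrowed: no inc/dec at the call site
        release(d);        // a single release when d leaves scope
    }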
 You really are doing *manual*, not automatic, ARC here, because you are  
 making decisions about when ARC can be skipped, and you must make those  
 decisions in order to have it run at a reasonable speed.
Absolutely not, the compiler knows whether the count needs to be incremented, I don't need to know. In fact, in ARC, you are NOT ALLOWED to increment or decrement the count manually. -Steve
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 5:30 AM, Steven Schveighoffer wrote:
 Absolutely not, the compiler knows whether the count needs to be incremented, I
 don't need to know.
But there are manual escapes from it, meaning you need to know to use them:

    unsigned char *vdata = data.data;
    // process vdata

I am skeptical that the O-C compiler is sufficiently smart to make ARC on all pointers practical. That snippet pretty much proves it.

Total replacement of GC with ARC in D will:

1. Be a massive change that will break most every D program
2. Require people to use unsafe escapes to recover lost performance
3. Require multiple pointer types
4. Will not be memory safe (see (2))
5. Require the invention of optimization technology that doesn't exist
6. Become more or less ABI incompatible with C without a long list of caveats and translations

and, to top it off, as the paper Andrei referenced pointed out, it may not even be faster than the GC. It has the very real likelihood of destroying D.

A much more tractable idea is to implement something like C++'s shared_ptr<> as a library type, with usage strategies paralleling C++'s (and yes, use of shared_ptr<> would be unsafe).
Apr 18 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 18 Apr 2014 20:17:59 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/18/2014 5:30 AM, Steven Schveighoffer wrote:
 Absolutely not, the compiler knows whether the count needs to be  
 incremented, I
 don't need to know.
 But there are manual escapes from it, meaning you need to know to use them:

     unsigned char *vdata = data.data;
     // process vdata

 I am skeptical that the O-C compiler is sufficiently smart to make ARC on all pointers practical. That snippet pretty much proves it.
I finally understand what you are saying. Yes, when my code devolves into C-land, I have non-ARC pointers. This means I cannot store these pointers somewhere if I don't keep a pointer to the object that owns them. This is indeed manual memory management. But, and this is a big but, I DON'T have to care about memory management as long as I only store pointers to objects, aside from the stack. Objects themselves can store non-ARC pointers and manage them privately, but not allow arbitrary setting/getting of pointers. This is not very common.

A great example of where I use arbitrary non-GC pointers is an object that stores a network packet. unix sockets work just fine in networking and iOS, so I do all of my networking with socket, send, recv, etc. These functions take unmanaged char * pointers. So an object that stores a single packet does this:

1. read the packet header into a structure overlay. Figure out the length (this is a TCP packet).
2. malloc a char array with the proper length, read in the packet.
3. verify the packet is correct, if not, free the buffer.
4. If it's correct, digest the packet (storing fields from the packet data itself, byteswap, etc.). Then store the data into an NSData object, giving it ownership of the buffer.

All this happens within a factory method. I never directly store the char * array inside the object or elsewhere, it gets wrapped into an NSData object, which will then free the data once the object is destroyed.

You must be conscious about memory management, because after all, Objective-C *is* C, and you are allowed to do stupid things just like in C. But you CAN have a certain set of rules, and as long as you follow those rules, you will not have to deal with memory management.
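Roughly the same shape in D, as a hedged sketch: Packet, readExact and verify are hypothetical stand-ins, and a GC finalizer is a less deterministic owner than NSData, so take it as an illustration of the rule rather than a recipe:

    import core.stdc.stdlib : malloc, free;

    struct PacketHeader { uint length; }

    // Owner standing in for NSData: frees the malloc'd buffer when
    // the packet object itself is destroyed.
    class Packet
    {
        ubyte[] data;
        ~this() { if (data.ptr !is null) free(data.ptr); }
    }

    // Hypothetical plumbing that would wrap recv().
    bool readExact(int sock, void* buf, size_t len) { return true; }
    bool verify(const(ubyte)* buf, size_t len) { return true; }

    Packet makePacket(int sock)
    {
        PacketHeader hdr;
        if (!readExact(sock, &hdr, hdr.sizeof)) return null;   // step 1
        auto buf = cast(ubyte*) malloc(hdr.length);            // step 2
        if (!readExact(sock, buf, hdr.length)
            || !verify(buf, hdr.length))
        {
            free(buf);                                         // step 3
            return null;
        }
        auto p = new Packet;                                   // step 4
        p.data = buf[0 .. hdr.length];
        return p;
    }

The raw pointer never escapes the factory; only the owning object does.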
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
 1. Be a massive change that will break most every D program
 2. Require people to use unsafe escapes to recover lost performance
 3. Require multiple pointer types
This isn't what I had in mind anyway. I envisioned an ARC system like Objective-C, where the type pointed at defines whether it's ARC or GC. For instance, GC would be *required* for array appending in D, because arrays cannot be ARC-managed. You would need a wrapper type for ARC, with its own semantics.
 4. Will not be memory safe (see (2))
 5. Require the invention of optimization technology that doesn't exist
Besides being a tautology, what does this mean?
 6. Become more or less ABI incompatible with C without a long list of  
 caveats and translations
Again, this is based on your straw-man which I don't advocate for. The change I had in mind was much less invasive.
 A much more tractable idea is to implement something like C++'s  
 shared_ptr<> as a library type, with usage strategies paralleling C++'s  
 (and yes, use of shared_ptr<> would be unsafe).
shared_ptr would work (and I've used that extensively too, it's awesome for C++), but I feel it's a much less effective solution than a compiler-augmented system that can optimize away needless increment/decrements. Note that shared_ptr would never be able to handle D's slice appending either. -Steve
Apr 21 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/21/2014 5:00 AM, Steven Schveighoffer wrote:
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
Many are when they advocate ARC for D.
 4. Will not be memory safe (see (2))
 5. Require the invention of optimization technology that doesn't exist
Besides being a tautology, what does this mean?
4. There is no language protection against misusing ARC, hence it cannot be mechanically verified to be memory safe. 5. Numerous posters here have posited that the overhead of ARC can be eliminated with a sufficiently smart compiler (which does not exist).
 Note that shared_ptr would never be able to handle D's slice appending either.
I know. shared_ptr would, of course, be used at the specific discretion of the programmer. It would not be under the hood, and it would not be memory safe.
Apr 21 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 21 Apr 2014 13:28:24 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/21/2014 5:00 AM, Steven Schveighoffer wrote:
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
Many are when they advocate ARC for D.
Does that preclude you from accepting any kind of ARC for D?
 5. Numerous posters here have posited that the overhead of ARC can be  
 eliminated with a sufficiently smart compiler (which does not exist).
You continue to speak in extremes. People are saying that the compiler can eliminate most of the needless ARC increments and decrements, not all of them. Compilers that do this do exist.
 Note that shared_ptr would never be able to handle D's slice appending  
 either.
I know. shared_ptr would, of course, be used at the specific discretion of the programmer. It would not be under the hood, and it would not be memory safe.
Doesn't RefCounted do this already? -Steve
Apr 21 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/21/2014 10:57 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 13:28:24 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:

 On 4/21/2014 5:00 AM, Steven Schveighoffer wrote:
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
Many are when they advocate ARC for D.
Does that preclude you from accepting any kind of ARC for D?
No. My objection is to pervasive ARC, i.e. all gc is replaced with ARC, and it all magically works.
 5. Numerous posters here have posited that the overhead of ARC can be
 eliminated with a sufficiently smart compiler (which does not exist).
You continue to speak in extremes. People are saying that the compiler can eliminate most of the needless ARC increments and decrements, not all of them.
Manu, for example, suggests it is good enough to make the overhead insignificant. I'm skeptical.
 Compilers that do this do exist.
I can't reconcile agreeing that ARC isn't good enough to be pervasive with the claim that compiler technology eliminates unnecessary ARC overhead.
 I know. shared_ptr would, of course, be used at the specific discretion of the
 programmer. It would not be under the hood, and it would not be memory safe.
Doesn't RefCounted do this already?
Yes, but I haven't really looked into RefCounted.
Apr 21 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 21 Apr 2014 15:03:18 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/21/2014 10:57 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 13:28:24 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:

 On 4/21/2014 5:00 AM, Steven Schveighoffer wrote:
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
Many are when they advocate ARC for D.
Does that preclude you from accepting any kind of ARC for D?
No. My objection is to pervasive ARC, i.e. all gc is replaced with ARC, and it all magically works.
With slicing, I don't think it's possible. Looking up the owner of a block for an INTERNAL pointer costs O(lg(n)), where n is all of your memory, because you must find out what block the pointer is accessing. Doing this during mark and sweep is bad enough, doing it for every ref count add or remove would be ghastly.

ARC, and ref counting in general, is MUCH better implemented as having a reference count inside the object referenced. This means the type MUST BE AWARE of ref counting, or be wrapped in an aware type. Slices couldn't possibly do this correctly, since they do not point at the general block where all the info is maintained, but just at "some memory." An ARC-aware slice would be fatter, or less performant.
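To put that in concrete terms, a sketch of what an ARC-aware slice would have to carry (hypothetical types; nothing like this exists in D):

    // The counted allocation: the count lives at the head of the
    // block, so only a pointer to the head can find it.
    struct RcBlock(T)
    {
        int refs;
        size_t capacity;
        // payload of T's follows in memory
    }

    // A D slice is two words: { ptr, length }. An ARC-aware slice
    // needs a third word pointing back at the head of the block.
    struct RcSlice(T)
    {
        RcBlock!T* owner; // where the ref count is maintained
        T* ptr;           // the window into the payload
        size_t length;
    }

That third word, plus the trip through owner to touch the count, is exactly the extra weight being described.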
 5. Numerous posters here have posited that the overhead of ARC can be
 eliminated with a sufficiently smart compiler (which does not exist).
You continue to speak in extremes. People are saying that the compiler can eliminate most of the needless ARC increments and decrements, not all of them.
Manu, for example, suggests it is good enough to make the overhead insignificant. I'm skeptical.
I think you are misunderstanding something. This is not for a pervasive ARC-only, statically guaranteed system. The best example he gives (and I agree with him) is iOS. Just look at the success of iOS, where the entire OS API is based on ARC (actually RC, with an option for both ARC and manual, but the latter is going away). If ARC was "so bad", the iOS experience would show it. You may have doubts, but I can assure you I can build very robust and performant code with ARC in iOS.

Let us consider that the single most difficult thing with memory management is giving away a piece of memory you created, never to see again, either via pushing into something, or returning it. The code that is responsible for creating that memory is not the code that is responsible for reclaiming it. This means you need an agreement between your code and the code that's accepting it, as to what the recipient needs to do once he's done.

The giving away piece is easy, it's the "when am I done with this?" that makes it difficult. If it's going to one place, that's pretty simple, but if it's going to multiple places, that's where it becomes a headache. Basically, as long as someone is looking at this, it needs to exist, and you don't know who else is looking! In that respect, both ARC and GC handle the job admirably. But it so happens that some of the load that GC happens to require (more memory; infrequent but lagging pauses) is not conducive to the environments Manu is targeting (embedded games).

The tradeoffs are not a complete win for ARC, and I don't think D should switch to ARC (I think it's impossible anyway), but providing a builtin compiler-assisted mechanism to use ARC would allow a style of coding that would complement D's existing tools. Consider that with D's version of pure, I can easily do functional and imperative programming, even mixing the two, without even thinking about it! I'd like to see something with ARC that makes it easy to intermix with GC, and replace some of the Object management in D. It can never be a fully generic solution, except by object wrapping, because the type needs to be aware of the RC.

Other memory management tasks are simple. For example, owned arrays encapsulated inside an object, or temporary buffer space that you free before exiting a function. Those don't need complex memory management tools to make safe or effective. These have *defined lifetimes*.
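Those "defined lifetime" cases really are simple in today's D; for example, a temporary buffer released on every exit path:

    import core.stdc.stdlib : malloc, free;

    void transform(ubyte[] input)
    {
        auto tmp = cast(ubyte*) malloc(input.length);
        scope(exit) free(tmp); // runs on normal return or on throw
        // ... use tmp[0 .. input.length] as scratch space ...
    }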
 Compilers that do this do exist.
I can't reconcile agreeing that ARC isn't good enough to be pervasive with the claim that compiler technology eliminates unnecessary ARC overhead.
It's pretty pervasive on iOS. ARC has been around since iOS 4.3 (circa 2011). It's pretty difficult to use manual RC and beat ARC. In fact in some cases, ARC can beat manual, because the compiler has more insight and knowledge of the rules being followed. -Steve
Apr 21 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 21 April 2014 at 20:29:46 UTC, Steven Schveighoffer 
wrote:
 example he gives (and I agree with him) is iOS. Just look at 
 the success of iOS, where the entire OS API is based on ARC 
 (actually RC, with an option for both ARC and manual, but the 
 latter is going away). If ARC was "so bad", the iOS experience 
 would show it. You may have doubts, but I can assure you I can 
 build very robust and performant code with ARC in iOS.
I am sure you are right about the last part, but the entire iOS API is not based on ARC. You only use Objective-C to obtain contexts and configure them. After that you do everything performance sensitive in pure C. That goes for everything from Quartz (which is C, not Objective-C), OpenGL (C, not Objective-C) to AudioUnits (C, not Objective-C). What is true is that you have bridges between manual ref counting where it exists in Core Foundation and Foundation, but just because you have a counter does not mean that you use it. ;-)
 It's pretty pervasive on iOS. ARC has been around since iOS 4.3 
 (circa 2011).
Not if you are doing systems level programming. You are talking about application level programming and on that level it is pervasive, but iOS apps have advanced rendering-engines to rely on for audio/video.
Apr 21 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Monday, 21 April 2014 at 20:29:46 UTC, Steven Schveighoffer 
wrote:
 It's pretty difficult to use manual RC and beat ARC. In fact in 
 some cases, ARC can beat manual, because the compiler has more 
 insight and knowledge of the rules being followed.
Are you sure? Have you tried to do it first with CFRelease/CFRetain, then with ARC? I believe this is the real reason (but I could be wrong): «You can’t implement custom retain or release methods.» from https://developer.apple.com/library/mac/releasenotes/ObjectiveC/RN-TransitioningToARC/Introduction/Introduction.html
Apr 21 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Here are two very good reasons to avoid extensive ref-counting:

1. transactional memory ( you don't want a lock on two reads )

2. cache coherency ( you don't want barriers everywhere )

Betting everything on ref counting is the same as saying no to 
upcoming CPUs.

IMO that means ARC is DOA. It might be useful for some high level 
objects… but I don't understand why one would think that it is a 
good low level solution. It is a single-threaded solution.
Apr 21 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 21 Apr 2014 17:52:30 -0400, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 Here are two very good reasons to avoid extensive ref-counting:

 1. transactional memory ( you don't want a lock on two reads )

 2. cache coherency ( you don't want barriers everywhere )

 Betting everything on ref counting is the same as saying no to upcoming CPUs.

 IMO that means ARC is DOA. It might be useful for some high level objects… but I don't understand why one would think that it is a good low level solution. It is a single-threaded solution.
Single threaded ARC can go a long way in D. We statically know whether data is shared or not.

-Steve
Apr 22 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 13:11:55 UTC, Steven Schveighoffer 
wrote:
 Single threaded ARC can go a long way in D.
Not without changing language semantics/mechanisms?
We statically know  whether data is shared or not.
I don't understand how you can know this when you allow foreign function invocation. Even thread local globals can leak into another thread by mistake, unnoticed? Meaning random crashes that are hard to debug?
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 09:29:28 -0400, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 22 April 2014 at 13:11:55 UTC, Steven Schveighoffer wrote:
 Single threaded ARC can go a long way in D.
Not without changing language semantics/mechanisms?
 We statically know whether data is shared or not.
I don't understand how you can know this when you allow foreign function invocation.
Can you explain this?
 Even thread local globals can leak into another thread by mistake, unnoticed? Meaning random crashes that are hard to debug?
By mistake? How? -Steve
Apr 22 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 13:39:54 UTC, Steven Schveighoffer 
wrote:
 On Tue, 22 Apr 2014 09:29:28 -0400, Ola Fosheim Grøstad Can you 
 explain this?
When you use a C/C++ framework you don't know what happens to the pointers you hand to it. You also don't know which threads call your D-functions from that framework. (Assuming the framework is multi-threaded.) To know this you are required to know the internals of the framework you are utilizing, or inject runtime guards into your D functions?
 By mistake? How?
By insertion into a global datastructure that happens at a lower layer than the higher level you allocate on.
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 09:48:28 -0400, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 22 April 2014 at 13:39:54 UTC, Steven Schveighoffer wrote:
 On Tue, 22 Apr 2014 09:29:28 -0400, Ola Fosheim Grøstad Can you explain this?
When you use a C/C++ framework you don't know what happens to the pointers you hand to it.
Those are typically well-documented, but yes, you rely on "insider" knowledge. You can mark C functions with the appropriate attributes so the D compiler can enforce this for you. I think we should make reference counting work, as long as you don't call mischievous library code. We take a similar approach to other safety aspects of D.
 You also don't know which threads call your D-functions from that framework. (Assuming the framework is multi-threaded.) To know this you are required to know the internals of the framework you are utilizing, or inject runtime guards into your D functions?
Or just mark those objects sent into the framework as shared. Having multi-threaded RC isn't bad, just not as efficient.

One thing that would be nice is to allow moving a data pointer from one thread to another. In other words, as long as your data is contained, it can pass from one thread to another, and still be considered unshared. I think sooner or later, we are going to have to figure that one out.

 By mistake? How?
By insertion into a global datastructure that happens at a lower layer than the higher level you allocate on.
I think this is what you are talking about above, or is there something else?

-Steve
Apr 22 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 14:07:47 UTC, Steven Schveighoffer 
wrote:
 know this you are required to know the internals of the 
 framework you are utilizing or inject runtime guards into your 
 D functions?
Or just mark those objects sent into the framework as shared. Having multi-threaded RC isn't bad, just not as efficient.
Actually, when I think of it, guards probably would be cheap. All you have to do is store the thread-context-pointer register into a global when the thread starts up. Then just do a simple if-test at the function invocation (assuming the register doesn't change over time). Actually, it could be done as a self-modifying pass at startup… That would make it a register test against an immediate value, with no memory bus implications.
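A sketch of that guard in plain D, paying a TLS lookup instead of the register test sketched above (Guarded, guard and access are made-up names):

    import core.thread : Thread;

    struct Guarded(T)
    {
        Thread owner; // recorded when the wrapper is created
        T payload;
    }

    Guarded!T guard(T)(T value)
    {
        return Guarded!T(Thread.getThis(), value);
    }

    ref T access(T)(ref Guarded!T g)
    {
        // the cheap if-test at function invocation described above
        assert(Thread.getThis() is g.owner,
               "object touched from a thread that does not own it");
        return g.payload;
    }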
 One thing that would be nice is to allow moving a data pointer 
 from one thread to another. In other words, as long as your 
 data is contained, it can pass from one thread to another, and 
 still be considered unshared.
Yes, that sounds plausible.
 I think this is what you are talking about above, or is there 
 something else?
You are right :). Ola.
Apr 22 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/21/2014 1:29 PM, Steven Schveighoffer wrote:
 I think you are misunderstanding something. This is not for a pervasive
 ARC-only, statically guaranteed system. The best example he gives (and I agree
 with him) is iOS. Just look at the success of iOS, where the entire OS API is
 based on ARC (actually RC, with an option for both ARC and manual, but the
 latter is going away). If ARC was "so bad", the iOS experience would show it.
 You may have doubts, but I can assure you I can build very robust and
 performant code with ARC in iOS.
The thing is, with iOS ARC, it cannot be statically guaranteed to be memory safe. This makes it simply not acceptable for D in the general case. It "works" with iOS because iOS allows all kinds of (unsafe) ways to escape it, and it must offer those ways because it is not performant. Kinda sorta memory safe, mostly memory safe, etc., is not a static guarantee.

There is JUST NO WAY that:

    struct RefCount {
        T* data;
        int* count;
    }

is going to be near as performant as:

    T*

1. A dereference requires two indirections. Cache performance, poof!

2. A copy requires two indirections to inc, two indirections to dec, and an exception unwind handler for dec.

3. Those two word structs add to memory consumption.

As you pointed out, performant code is going to have to cache the data* value. That cannot be guaranteed memory safe.
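A sketch of that last point, assuming a RefCount!T along the lines of the struct above (the counting itself is elided, since it is the cached raw pointer that matters):

    struct RefCount(T)
    {
        T* data;
        int* count;
        // copy/assign/destroy would inc/dec *count; omitted here
    }

    void f(RefCount!int rc, RefCount!int other)
    {
        int* raw = rc.data; // cached to dodge the double indirection
        rc = other;         // with real counting, rc's old count could
                            // hit zero here and free the old data
        *raw = 1;           // dangling write the compiler cannot see
    }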
 I can't reconcile agreeing that ARC isn't good enough to be pervasive
 with the claim that compiler technology eliminates unnecessary ARC overhead.
Pervasive means "for all pointers". This is not true of iOS. It's fine for iOS to do a half job of it, because the language makes no pretensions about memory safety. It is not fine for D to replace a guaranteed memory safe system with an unsafe, hope-your-programmers-get-it-right, solution.
Apr 21 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 21 April 2014 at 23:02:54 UTC, Walter Bright wrote:
 There is JUST NO WAY that:

     struct RefCount {
         T* data;
         int* count;
     }
This is actually quite efficient compared to the standard NSObject which uses a hashtable for refcounting:

http://www.opensource.apple.com/source/objc4/objc4-551.1/runtime/NSObject.mm
http://www.opensource.apple.com/source/objc4/objc4-551.1/runtime/llvm-DenseMap.h

This is how Core Foundation does it:

http://www.opensource.apple.com/source/CF/CF-855.11/CFRuntime.c

Pretty longwinded:

    CFTypeRef CFRetain(CFTypeRef cf) {
        if (NULL == cf) {
            CRSetCrashLogMessage("*** CFRetain() called with NULL ***");
            HALT;
        }
        if (cf) __CFGenericAssertIsCF(cf);
        return _CFRetain(cf, false);
    }

    static CFTypeRef _CFRetain(CFTypeRef cf, Boolean tryR) {
        uint32_t cfinfo = *(uint32_t *)&(((CFRuntimeBase *)cf)->_cfinfo);
        if (cfinfo & 0x800000) { // custom ref counting for object
            ...stuff deleted…
            refcount(+1, cf);
            return cf;
        }
        …lots of stuff deleted…
        return cf;
    }
Apr 21 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/21/2014 11:51 PM, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 This is actually quite efficient compared to the standard NSObject which uses a
 hashtable for refcounting:
It's not efficient compared to pointers.
Apr 22 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 09:01:20 UTC, Walter Bright wrote:
 On 4/21/2014 11:51 PM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 This is actually quite efficient compared to the standard 
 NSObject which uses a
 hashtable for refcounting:
It's not efficient compared to pointers.
It isn't efficient compared to pointers if you use a blind ARC implementation. If you use ARC to track ownership of regions then it can be efficient.

The real culprit is multithreading, but it can be resolved if you put the counters on cachelines that are local to the thread, either by offset or TLS. E.g. pseudocode for 8 refcounted pointers on 4 threads with 32-byte cachelines could be something along the lines of:

    struct {
        func* destructor[8]; // cacheline -1
        void* ptr[8];        // cacheline 0
        uint bitmask[8];     // cacheline 1
        int refcount[8*4];   // cacheline 2-6, initialized to -1
    }

    THREADOFFSET = (THREADID+2)*32

    retain(ref){ // ref is a pointer into cacheline 0
        if( increment(ref+THREADOFFSET) == 0 ){
            if( CAS_SET_BIT(ref+32,THREADID)==THREADID ){
                HALT_DESTRUCTED()
            }
        }
    }

    release(ref){
        if( decrement(ref+THREADOFFSET)<0 ){
            if( CAS_CLR_BIT(ref+32,THREADID)==0 ){
                call_destructor(ref-32,*ref);
            }
        }
    }
Apr 22 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 09:40:15 UTC, Ola Fosheim Grøstad 
wrote:
    if( CAS_SET_BIT(ref+32,THREADID)==THREADID ){
Make that: if( CAS_SET_BIT(ref+32,THREADID) == (1<<THREADID) ){
Apr 22 2014
prev sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Tuesday, 22 April 2014 at 06:51:40 UTC, Ola Fosheim Grøstad
wrote:
 On Monday, 21 April 2014 at 23:02:54 UTC, Walter Bright wrote:
 There is JUST NO WAY that:

    struct RefCount {
        T* data;
        int* count;
    }
This is actually quite efficient compared to the standard NSObject which uses a hashtable for refcounting:
iOS now on 64-bit processors doesn't necessarily use a hashtable for refcounting. Basically, only 33 bits of the 64-bit pointer are used to actually refer to an address, then 19 of the remaining bits are used to store an inline reference count. Only if the inline reference count exceeds these 19 bits (very rare) do they switch to using a hashtable. It was one of the large benefits of switching to ARM64 for the iPhone 5.
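As a rough model of such a packing in D (the bit positions here are simplified guesses; the real isa layout, linked downthread, carries several more flag bits):

    // Hypothetical "non-pointer isa": a class pointer and an inline
    // retain count sharing one 64-bit word.
    enum CLASS_BITS = 33;
    enum CLASS_MASK = (1UL << CLASS_BITS) - 1;

    void* classPointer(ulong isa)
    {
        return cast(void*) (isa & CLASS_MASK);
    }

    ulong retainFast(ulong isa)
    {
        // +1 in the count field above the class bits; a real runtime
        // checks that field for overflow and falls back to a side
        // table (the hashtable) when it saturates, omitted here.
        return isa + (1UL << CLASS_BITS);
    }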
Apr 22 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 16:58:23 UTC, Kapps wrote:
 iOS now on 64-bit processors doesn't necessarily use a hashtable
 for refcounting. Basically, only 33 bits of the 64-bit pointer
 are used to actually refer to an address, then 19 of the
 remaining bits are used to store an inline reference count.
I am sure you are right, but how does that work? Do you have a link?
Apr 22 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 13:22:19 -0400, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 22 April 2014 at 16:58:23 UTC, Kapps wrote:
 iOS now on 64-bit processors doesn't necessarily use a hashtable
 for refcounting. Basically, only 33 bits of the 64-bit pointer
 are used to actually refer to an address, then 19 of the
 remaining bits are used to store an inline reference count.
I am sure you are right, but how does that work? Do you have a link?
I think what he's saying is the 19 bits are an offset into the global ref count container. Some sentinel value means "lookup the pointer in a hashtable."

-Steve
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 13:35:20 -0400, Steven Schveighoffer <schveiguy yahoo.com> wrote:

 On Tue, 22 Apr 2014 13:22:19 -0400, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 22 April 2014 at 16:58:23 UTC, Kapps wrote:
 iOS now on 64-bit processors doesn't necessarily use a hashtable
 for refcounting. Basically, only 33 bits of the 64-bit pointer
 are used to actually refer to an address, then 19 of the
 remaining bits are used to store an inline reference count.
I am sure you are right, but how does that work? Do you have a link?
 I think what he's saying is the 19 bits are an offset into the global ref count container. Some sentinel value means "lookup the pointer in a hashtable."
Sorry, I was wrong. And so was Kapps, according to the article he referenced.

The 33 bits of the 64 bit pointer are used to point at a *class*, via the object's isa member. The other 19 bits can be used for ref counting. I mistakenly thought he meant the pointer to the object.

This is kind of weird though. Why do it this way? If there is such an advantage to having the ref count inside the object (and I don't disagree with that), why didn't they do that before? Surely adding another 32-bit field wouldn't have killed the runtime.

Even doing it the way they have seems unnecessarily complex, given that iOS 64-bit was brand new.

-Steve
Apr 22 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 17:52:28 UTC, Steven Schveighoffer 
wrote:
 Even doing it the way they have seems unnecessarily complex, 
 given that iOS 64-bit was brand new.
I dislike this too… The only reason I can think of is that Apple themselves have code for OS-X that is optimized 64 bit code and that will break if they add another field? The lower bit is used for "compatibility mode". Hmm…
Apr 22 2014
prev sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-22 17:52:27 +0000, "Steven Schveighoffer" 
<schveiguy yahoo.com> said:

 Even doing it the way they have seems unnecessarily complex, given that
 iOS 64-bit was brand new.
Perhaps it's faster that way due to some caching effect. Or perhaps it's to be able to have static constant string objects in the readonly segments.

Apple could always change their mind and add another field for the reference count. The Modern runtime has non-fragile classes, so you can change the base class layout without breaking ABI compatibility.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 22 2014
prev sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Tuesday, 22 April 2014 at 17:22:21 UTC, Ola Fosheim Grøstad
wrote:
 On Tuesday, 22 April 2014 at 16:58:23 UTC, Kapps wrote:
 iOS now on 64-bit processors doesn't necessarily use a 
 hashtable
 for refcounting. Basically, only 33 bits of the 64-bit pointer
 are used to actually refer to an address, then 19 of the
 remaining bits are used to store an inline reference count.
I am sure you are right, but how does that work? Do you have a link?
https://www.mikeash.com/pyblog/friday-qa-2013-09-27-arm64-and-you.html Ctrl +F "Repurposed isa Pointer" Details about isa structure: http://www.sealiesoftware.com/blog/archive/2013/09/24/objc_explain_Non-pointer_isa.html
Apr 22 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 17:39:53 UTC, Kapps wrote:
 https://www.mikeash.com/pyblog/friday-qa-2013-09-27-arm64-and-you.html
 Ctrl +F "Repurposed isa Pointer"
Ah, ok, the refcount is embedded in the "class-table" pointer.
Apr 22 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 22/04/14 01:02, Walter Bright wrote:

 The thing is, with iOS ARC, it cannot be statically guaranteed to be
 memory safe. This makes it simply not acceptable for D in the general
 case. It "works" with iOS because iOS allows all kinds of (unsafe) ways
 to escape it, and it must offer those ways because it is not performant.
So does D. That's why there is @safe, @trusted and @system. What is the unsafe part of ARC anyway?

-- 
/Jacob Carlborg
Apr 22 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 12:11 AM, Jacob Carlborg wrote:
 On 22/04/14 01:02, Walter Bright wrote:

 The thing is, with iOS ARC, it cannot be statically guaranteed to be
 memory safe. This makes it simply not acceptable for D in the general
 case. It "works" with iOS because iOS allows all kinds of (unsafe) ways
 to escape it, and it must offer those ways because it is not performant.
So does D. That's why there is @safe, @trusted and @system. What is the unsafe part of ARC anyway?
As I said, it is when it is bypassed for performance reasons.
Apr 22 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 22 April 2014 at 09:02:21 UTC, Walter Bright wrote:
 On 4/22/2014 12:11 AM, Jacob Carlborg wrote:
 On 22/04/14 01:02, Walter Bright wrote:

 The thing is, with iOS ARC, it cannot be statically 
 guaranteed to be
 memory safe. This makes it simply not acceptable for D in the 
 general
 case. It "works" with iOS because iOS allows all kinds of 
 (unsafe) ways
 to escape it, and it must offer those ways because it is not 
 performant.
So does D. That's why there is @safe, @trusted and @system. What is the unsafe part of ARC anyway?
As I said, it is when it is bypassed for performance reasons.
A system that is automatically safe but can be manually managed for extra performance. That sounds very D-ish.
Apr 22 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 2:20 AM, John Colvin wrote:
 A system that is automatically safe but can be manually managed for extra
 performance. That sounds very D-ish.
Needing to write @system code with a GC to get performance is rare in D. It's normal in O-C, as has been pointed out here a couple times.
Apr 22 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/21/2014 1:29 PM, Steven Schveighoffer wrote:
 I think you are misunderstanding something. This is not for a pervasive
 ARC-only, statically guaranteed system. The best example he gives (and  
 I agree
 with him) is iOS. Just look at the success of iOS, where the entire OS  
 API is
 based on ARC (actually RC, with an option for both ARC and manual, but  
 the
 latter is going away). If ARC was "so bad", the iOS experience would  
 show it.
 You may have doubts, but I can assure you I can build very robust and  
 performant
 code with ARC in iOS.
The thing is, with iOS ARC, it cannot be statically guaranteed to be memory safe.
So?
 This makes it simply not acceptable for D in the general case.
Because it can't live beside all the other unsafe code in D? I don't get it...
 It "works" with iOS because iOS allows all kinds of (unsafe) ways to  
 escape it, and it must offer those ways because it is not performant.
I think we're officially going in circles here.
 Kinda sorta memory safe, mostly memory safe, etc., is not a static  
 guarantee.

 There is JUST NO WAY that:

      struct RefCount {
          T* data;
          int* count;
      }

 is going to be near as performant as:

      T*
Again with the straw man!
 1. A dereference requires two indirections. Cache performance, poof!

 2. A copy requires two indirections to inc, two indirections to dec, and  
 an exception unwind handler for dec.

 3. Those two word structs add to memory consumption.
Consider the straw man destroyed :)
 As you pointed out, performant code is going to have to cache the data*  
 value. That cannot be guaranteed memory safe.


 I can't reconcile agreeing that ARC isn't good enough to be pervasive
 with the claim that compiler technology eliminates unnecessary ARC overhead.
It's pretty pervasive on iOS. ARC has been around since iOS 4.3 (circa 2011).
Pervasive means "for all pointers". This is not true of iOS. It's fine for iOS to do a half job of it, because the language makes no pretensions about memory safety. It is not fine for D to replace a guaranteed memory safe system with an unsafe, hope-your-programmers-get-it-right, solution.
Totally agree, which is why nobody is saying that. -Steve
Apr 22 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:
 The thing is, with iOS ARC, it cannot be statically guaranteed to be memory
 safe.
So?
If you see no value in static guarantees of memory safety, then what can I say?
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 14:12:17 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:
 The thing is, with iOS ARC, it cannot be statically guaranteed to be  
 memory safe.
So?
If you see no value in static guarantees of memory safety, then what can I say?
Seriously, the straw man arguments have to stop. There is plenty of valuable D code that is not guaranteed memory safe. For example, druntime.

ARC does not equal guaranteed memory safety. So NO, it cannot replace the GC for D @safe code. That doesn't make it useless.

-Steve
Apr 22 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 11:28 AM, Steven Schveighoffer wrote:
 On Tue, 22 Apr 2014 14:12:17 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:

 On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright <newshound2 digitalmars.com>
 wrote:
 The thing is, with iOS ARC, it cannot be statically guaranteed to be memory
 safe.
So?
If you see no value in static guarantees of memory safety, then what can I say?
Seriously, the straw man arguments have to stop. There is plenty of valuable D code that is not guaranteed memory safe.
Memory safety is not a strawman. It's a critical feature for a modern language, and will become ever more important.
 For example, druntime.
Nobody expects a GC's guts to be guaranteed memory safe. But they do expect the interface to it to be memory safe, and using GC allocated data to be memory safe, and being able to write performant memory safe code. And by memory safe, I don't mean hand verified. I mean machine verified.
Apr 22 2014
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 15:02:05 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/22/2014 11:28 AM, Steven Schveighoffer wrote:
 On Tue, 22 Apr 2014 14:12:17 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:

 On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright  
 <newshound2 digitalmars.com>
 wrote:
 The thing is, with iOS ARC, it cannot be statically guaranteed to be  
 memory
 safe.
So?
If you see no value in static guarantees of memory safety, then what can I say?
Seriously, the straw man arguments have to stop. There is plenty of valuable D code that is not guaranteed memory safe.
Memory safety is not a strawman. It's a critical feature for a modern language, and will become ever more important.
No, a straw man argument is when you imply that I am arguing from a position that is similar to my actual position, but obviously flawed. Then proceed to attack the straw man. Example:

A: Sunny days are good.
B: If all days were sunny, we'd never have rain, and without rain, we'd have famine and death.

At no time did I ever say I see no value in static guarantees of memory safety. But I also see value in ref counting for performance and memory purposes in NON memory-safe code.

-Steve
Apr 22 2014
prev sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-22 19:02:05 +0000, Walter Bright <newshound2 digitalmars.com> said:

 Memory safety is not a strawman. It's a critical feature for a modern 
 language, and will become ever more important.
What you don't seem to get is that ARC, by itself, is memory-safe. Objective-C isn't memory safe because it lets you play with raw pointers too. If you limit yourself to ARC-managed pointers (and avoid undefined behaviours inherited from C) everything is perfectly memory safe.

I'm pretty confident that had I continued my work on D/Objective-C we'd now be able to interact with Objective-C objects using ARC in @safe code. I was planning for that. Objective-C actually isn't very far from memory safety now that it has ARC, it just lacks the @safe attribute to enable compiler verification.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 22 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 12:42 PM, Michel Fortin wrote:
 On 2014-04-22 19:02:05 +0000, Walter Bright <newshound2 digitalmars.com> said:

 Memory safety is not a strawman. It's a critical feature for a modern
 language, and will become ever more important.
What you don't seem to get is that ARC, by itself, is memory-safe.
I repeatedly said that it is not memory safe because you must employ escapes from it to get performance.
 Objective-C isn't memory safe because it lets you play with raw pointers too.
 If you limit yourself to ARC-managed pointers (and avoid undefined behaviours
 inherited from C) everything is perfectly memory safe.
Allow me to make it clear that IF you never convert an ARC reference to a raw pointer in userland, I agree that it is memory safe. But this is not practical for high performance code.
 I'm pretty confident that had I continued my work on D/Objective-C we'd now be
 able to interact with Objective-C objects using ARC in @safe code. I was
 planning for that. Objective-C actually isn't very far from memory safety now
 that it has ARC, it just lacks the @safe attribute to enable compiler
 verification.
I wish you would continue that work!
Apr 22 2014
next sibling parent Jacob Carlborg <doob me.com> writes:
On 23/04/14 06:33, Walter Bright wrote:

 I repeatedly said that it is not memory safe because you must employ
 escapes from it to get performance.
Apparently you need that for the GC as well, that's why this thread was started to begin with. -- /Jacob Carlborg
Apr 22 2014
prev sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-23 04:33:00 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 4/22/2014 12:42 PM, Michel Fortin wrote:
 On 2014-04-22 19:02:05 +0000, Walter Bright <newshound2 digitalmars.com> said:
 
 Memory safety is not a strawman. It's a critical feature for a modern
 language, and will become ever more important.
What you don't seem to get is that ARC, by itself, is memory-safe.
I repeatedly said that it is not memory safe because you must employ escapes from it to get performance.
It wasn't that clear to me you were saying that, but now it makes sense.

In Objective-C, the performance-sensitive parts are going to be implemented in C, that's true. But that's rarely going to be more than 5% of your code, and probably only a few isolated parts where you're using preallocated memory blocks retained by ARC while you're playing with the content. If you're writing something that can't tolerate a GC pause, then it makes perfect sense to make this performance-critical code unsafe so you can write the remaining 95% of your app in a memory-safe environment with no GC pause.

D on the other hand forces you to have those GC pauses or have no memory management at all. It's a different tradeoff and it isn't suitable everywhere, but I acknowledge it makes it easier to make performance-sensitive code safe, something that'd be a shame to lose.
 Objective-C isn't memory safe because it lets you play with raw 
 pointers too. If
 you limit yourself to ARC-managed pointers (and avoid undefined behaviours
 inherited from C) everything is perfectly memory safe.
Allow me to make it clear that IF you never convert an ARC reference to a raw pointer in userland, I agree that it is memory safe. But this is not practical for high performance code.
Framing the problem this way makes it easier to find a solution. I wonder, would it be acceptable if ARC was used everywhere by default but could easily be disabled inside performance-sensitive code by allowing the user to choose between safe GC-based memory management or unsafe manual memory management? I have an idea that'd permit just that. Perhaps I should write a DIP about it.
 I'm pretty confident that had I continued my work on D/Objective-C we'd now be
 able to interact with Objective-C objects using ARC in @safe code. I was
 planning for that. Objective-C actually isn't very far from memory safety now
 that it has ARC, it just lacks the @safe attribute to enable compiler
 verification.
I wish you would continue that work!
I wish I had the time too. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca
Apr 23 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 19:42:20 UTC, Michel Fortin wrote:
 Objective-C isn't memory safe because it lets you play with raw 
 pointers too. If you limit yourself to ARC-managed pointers 
 (and avoid undefined behaviours inherited from C) everything is 
 perfectly memory safe.
I'm not convinced that it is safe in multi-threaded mode. How does ARC deal with parallel reads and writes from two different threads? IIRC the most common implementations deal with read/read and write/write, but read/write is too costly?
Apr 23 2014
parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-23 09:50:57 +0000, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> said:

 On Tuesday, 22 April 2014 at 19:42:20 UTC, Michel Fortin wrote:
 Objective-C isn't memory safe because it lets you play with raw 
 pointers too. If you limit yourself to ARC-managed pointers (and avoid 
 undefined behaviours inherited from C) everything is perfectly memory 
 safe.
I'm not convinced that it is safe in multi-threaded mode. How does ARC deal with parallel reads and writes from two different threads? IIRC the most common implementations deal with read/read and write/write, but read/write is too costly?
The answer is that in the general case you should protect reads and writes to an ARC pointer with locks. Otherwise the counter risks getting out of sync and you'll eventually get corruption somewhere.

There are atomic properties which are safe to read and write from multiple threads. Internally they use the synchronized keyword on the object. But since there's no 'shared' attribute in Objective-C, you can't go very far if you wanted the compiler to check things for memory safety. That said, if you assume a correct implementation of the NSCopying protocol (deep copying), objects following that protocol would be safe to pass through a std.concurrency-like interface.

In all honesty, I'm not that impressed with the multithreading protections in D either. It seems you so often have to bypass the type system to make something useful that it doesn't appear very different from not having them. And don't get me started with synchronized classes...

-- Michel Fortin michel.fortin michelf.ca http://michelf.ca
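(To make the race concrete, here is a minimal D sketch of a hand-rolled shared reference count. This is an illustration only, not Objective-C's actual runtime; atomicOp is from core.atomic.)

import core.atomic : atomicOp;

struct RefCount
{
    private int count = 1;

    void retain() shared
    {
        atomicOp!"+="(count, 1);              // safe from any thread
    }

    bool release() shared
    {
        return atomicOp!"-="(count, 1) == 0;  // true when the last reference dies
    }
}

// A plain ++count/--count from two threads can lose updates, which is the
// "counter getting out of sync" described above.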
Apr 23 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 April 2014 04:28, Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tue, 22 Apr 2014 14:12:17 -0400, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 19:02:53 -0400, Walter Bright
 <newshound2 digitalmars.com>
 wrote:
 The thing is, with iOS ARC, it cannot be statically guaranteed to be
 memory safe.
So?
If you see no value in static guarantees of memory safety, then what can I say?
Seriously, the straw man arguments have to stop. There is plenty of valuable D code that is not guaranteed memory safe. For example, druntime. ARC does not equal guaranteed memory safety. So NO, it cannot replace the GC for D safe code. That doesn't make it useless.
Why not? Assuming that direct access to the refcount is not safe, why would ARC be unsafe? What makes it less safe than the GC?
Apr 23 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 23 Apr 2014 05:14:44 -0400, Manu via Digitalmars-d  
<digitalmars-d puremagic.com> wrote:

 On 23 April 2014 04:28, Steven Schveighoffer via Digitalmars-d
 ARC does not equal guaranteed memory safety. So NO, it cannot replace  
 the GC
 for D  safe code. That doesn't make it useless.
Why not? Assuming that direct access to the refcount is not safe, why would ARC be unsafe? What makes it less safe than the GC?
Arguably, it is safe, as long as you only use ARC pointers. I don't know that I would ever want or use that in D (or even Objective-C). So it's not that it's not safe, it's that it cannot be a drop-in-replacement for the GC in existing D safe code. For example, you could never use slices or ranges, or these would have to be rewritten to keep references to the full object. -Steve
Apr 23 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 Again with the straw man!
If you really believe you can make a performant ARC system, and have it be memory safe, feel free to write a complete proposal on it.
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 14:15:35 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/22/2014 6:18 AM, Steven Schveighoffer wrote:
 Again with the straw man!
If you really believe you can make a performant ARC system, and have it be memory safe, feel free to write a complete proposal on it.
I hope you can understand that from this discussion, I'm not too motivated to devote time to it. Not that I could do it anyway :) Generally, when investing a lot of time and energy into something, you want to make sure the market is there first... -Steve
Apr 22 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 18:38:28 UTC, Steven Schveighoffer 
wrote:
 I hope you can understand that from this discussion, I'm not too
 motivated to devote time to it. Not that I could do it anyway :)
Do it anyway. This is such a fun topic and many would be entertained by the ensuing excruciating inquisitorial process.
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 14:43:04 -0400, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Tuesday, 22 April 2014 at 18:38:28 UTC, Steven Schveighoffer wrote:
 I hope you can understand that from this discussion, I'm not too
 motivated to devote time to it. Not that I could do it anyway :)
Do it anyway.
I mean not like I can't because I don't want to or don't have time, but can't as in I lack the skill set :) It's interesting to debate, and I get the concepts, but I am not a CPU/cache guy, and these things are really important to get right for performance, since ref counting would be used frequently.

-Steve
Apr 22 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 22 April 2014 at 18:48:04 UTC, Steven Schveighoffer 
wrote:
 I mean not like I can't because I don't want to or don't have 
 time, but can't as in I lack the skill set :) It's interesting 
 to debate, and I get the concepts, but I am not a CPU/cache 
 guy, and these things are really important to get right for 
 performance, since ref counting would be used frequently.
I think RC performance unfortunately is very hardware dependent. So, it involves testing, benchmarking etc…

Still, I think these discussions are good because they are opportunities to look at what others do (like Objective-C), which is educational (at least for me).

The trouble I have with ref counting is that the future of HW is uncertain, but that can be held against GC too. Take for instance Phi with 80 cores and a crossbar interconnect (?), how does that affect memory management? Unfortunately I have no personal experience with Phi so my take on it is rather vague…

But if ARC takes a long time to get right I think one should consider the effects of upcoming HW when weighing for/against. Maybe Go's CSP model is better for Phi. Maybe not. I don't know, but I think it is an interesting topic, because 4-8 cores is not going to be enough in the next decade. I think…
Apr 22 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 22/04/14 20:48, Steven Schveighoffer wrote:

 I mean not like I can't because I don't want to or don't have time, but
 can't as in I lack the skill set :) It's interesting to debate, and I
 get the concepts, but I am not a CPU/cache guy, and these things are
 really important to get right for performance, since ref counting would
 be used frequently.
That's the worst kind of excuse :) I don't remember the last time I started working on a project and knew what I was doing/had the right skill set. I mean, that's how you learn. -- /Jacob Carlborg
Apr 22 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 23 Apr 2014 02:11:38 -0400, Jacob Carlborg <doob me.com> wrote:

 On 22/04/14 20:48, Steven Schveighoffer wrote:

 I mean not like I can't because I don't want to or don't have time, but
 can't as in I lack the skill set :) It's interesting to debate, and I
 get the concepts, but I am not a CPU/cache guy, and these things are
 really important to get right for performance, since ref counting would
 be used frequently.
That's the worst kind of excuses :) I don't remember the last time I started working on a project and know what I was doing/had the right skill set. I mean, that's how you learn.
Sure, but there are things I CAN do with my limited time, that I do have the expertise for. I've already been schooled by the likes of you and Michel Fortin on my knowledge of ref counting implementation. BTW, this is how RedBlackTree (dcollections) came into existence, I had no idea what I was doing, just the API that I wanted (and back then, I had more time). The code is actually a slightly-hand-optimized copy of my CLR book's red-black algorithm. -Steve
Apr 23 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-04-23 17:57, Steven Schveighoffer wrote:

 Sure, but there are things I CAN do with my limited time, that I do have
 the expertise for. I've already been schooled by the likes of you and
 Michel Fortin on my knowledge of ref counting implementation.
That's completely different. I've felt the same for a long time. Instead of working on the compiler I built tools and libraries for D. Then I finally couldn't keep my hands off and now I have D/Objective-C working for 64bit :) -- /Jacob Carlborg
Apr 23 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 11:38 AM, Steven Schveighoffer wrote:
 Generally, when investing a lot of time and energy into something, you want to
 make sure the market is there first...
Ironic, considering that nobody but me believed there was a market for D before it existed :-) I do believe there is a market for ARC in D. What I don't believe are the various claims about how insignificant its costs are, and I'm not so willing to give up on memory safety.
Apr 22 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 22 Apr 2014 15:10:31 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/22/2014 11:38 AM, Steven Schveighoffer wrote:
 Generally, when investing a lot of time and energy into something, you  
 want to
 make sure the market is there first...
Ironic, considering that nobody but me believed there was a market for D before it existed :-)
Sure but I'm not prepared to generate a new language over this. If you won't accept it, there's no point in making it. In other words, YOU are the market ;) *disclaimer* I am in no shape to actually make such a proposal/change, and probably it would come out horribly if I tried, implementation-wise. I simply am trying to convince you that it would be a valuable addition to D so others may see an opportunity there.
 I do believe there is a market for ARC in D. What I don't believe are  
 the various claims about how insignificant its costs are, and I'm not so  
 willing to give up on memory safety.
You don't have to. Just make ARC not safe. -Steve
Apr 22 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 April 2014 05:03, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 4/21/2014 10:57 AM, Steven Schveighoffer wrote:
 On Mon, 21 Apr 2014 13:28:24 -0400, Walter Bright
 <newshound2 digitalmars.com>
 wrote:

 On 4/21/2014 5:00 AM, Steven Schveighoffer wrote:
 Total replacement of GC with ARC in D will:
This is the wrong straw-man, I'm not advocating for this at all.
Many are when they advocate ARC for D.
Does that preclude you from accepting any kind of ARC for D?
No. My objection is to pervasive ARC, i.e. all gc is replaced with ARC, and it all magically works.
It's not magic, it's careful engineering, and considering each problem case one by one as they arise until it's good.
 5. Numerous posters here have posited that the overhead of ARC can be
 eliminated with a sufficiently smart compiler (which does not exist).
You continue to speak in extremes. People are saying that the compiler can eliminate most of the needless ARC increments and decrements, not all of them.
Manu, for example, suggests it is good enough to make the overhead insignificant. I'm skeptical.
I didn't quite say that, but let me justify that claim if you want to put it in those words.

RC fiddling in low-frequency code is insignificant. High-frequency code doesn't typically allocate, and is also likely to implement a context specific solution anyway if it is truly performance sensitive. In the event of code where RC fiddling is found to make a significant impact on performance, there are various tools available to address this directly.

There's a middle-ground that might suffer compared to GC; moderate-frequency, where code is sloppily written doing whatever it likes without any real care, and run lots of iterations. But that's not usually an example of performance sensitive code, it's just crappy code run many times, and again, they have the tools to improve it easily if they care enough to do so.

I also believe programmers will learn the performance characteristics of ARC very quickly, and work with it effectively. The core of my argument is that it's _possible_ to work with ARC, it's not possible to work with GC if it is fundamentally incompatible with your application.
 Compilers that do this do exist.
I can't reconcile agreeing that ARC isn't good enough to be pervasive with compiler technology eliminates unnecessary ARC overhead.
The most important elimination is objects being passed down a call-tree via args. Those can certainly be eliminated reliably. High-frequency code always exists nearer to the leaves.

D has pure (which is definitely well employed), and 'shared' is an explicit attribute, which allows the compiler to make way more assumptions than O-C. The ARC optimisations are predictable and reliable.

The suggestion that you can't reconcile acceptable performance depending on specific optimisations is a bit strange. Programmers rely on optimisation all the time for acceptable performance. This is no different.
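(As a sketch of the elimination being claimed, in D; retain/release are hypothetical ARC runtime calls, and no D compiler performs this today.)

class Obj {}
Obj makeObj() { return new Obj; }

void consume(Obj o) pure
{
    // pure: o cannot be stored into a global from here, which is what
    // makes the proof of redundancy easy
}

void caller()
{
    auto o = makeObj();  // one reference, conceptually rc = 1
    // naive ARC would emit: retain(o); consume(o); release(o);
    consume(o);          // the caller's reference already keeps o alive
                         // across the call, so the pair can be elided
}                        // single conceptual release when o leaves scope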
Apr 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Manu, you obviously believe in ARC. I've made an attempt to do ARC, detailed in 
the other thread here. I failed.

http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com

Michel Fortin also wants to bring iOS ARC to D.

I suggest you get together with Michel and work out a detailed design, and 
propose it.
Apr 23 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 23/04/14 10:31, Walter Bright wrote:
 Manu, you obviously believe in ARC. I've made an attempt to do ARC,
 detailed in the other thread here. I failed.

 http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com
That conversation started out from the D/Objective-C conversations. To have ARC in D and be compatible with the one in Objective-C you don't have many choices. I'm not sure but I don't think your proposal was not compatible with ARC in Objective-C. -- /Jacob Carlborg
Apr 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/23/2014 6:10 AM, Jacob Carlborg wrote:
 That conversation started out from the D/Objective-C conversations. To have ARC
 in D and be compatible with the one in Objective-C you don't have many choices.
 I'm not sure but I don't think your proposal was not compatible with ARC in
 Objective-C.
Too many double negatives for me to be sure what you're saying. But it is clear to me that Michel's experience with ARC in iOS combined with Manu's enthusiasm for it suggests that they are the right team to come up with a workable proposal, where mine failed.
Apr 23 2014
parent Jacob Carlborg <doob me.com> writes:
On 23/04/14 19:12, Walter Bright wrote:

 Too many double negatives for me to be sure what you're saying. But it
 is clear to me that Michel's experience with ARC in iOS combined
 with Manu's enthusiasm for it suggests that they are the right team to
 come up with a workable proposal, where mine failed.
Sorry, now that I read it out loud it is confusing. Here's another try: Your proposal wasn't compatible with ARC in Objective-C. I'm not sure if I remember correctly. -- /Jacob Carlborg
Apr 23 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 19:55:08 UTC, Walter Bright wrote:
 I know that with ARC the compiler inserts the code for you. 
 That doesn't make it costless.
No, but Objective-C has some overhead to begin with, so it matters less. Cocoa is a very powerful framework that will do most of the heavy lifting for you, kinda like a swiss army knife. In the same league as Python: slow high level, with a variety of highly optimized C functions under the hood. IMHO Python and Objective-C wouldn't stand a chance without their libraries.
 I know it's done automatically. But you might be horrified at 
 what the generated code looks like.
Apple has put a lot of resources into ARC. How much slower it is than manual RC varies: some claim as little as 10%, others 30%, 50%, 100%. In that sense it is a proof of concept. It is worse, but not a lot worse than manual ref counting, if you have a compiler that does a very good job of it. But compiled Objective-C code looks "horrible" to begin with… so I am not sure how well that translates to D.
Apr 17 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 20:46:57 UTC, Ola Fosheim Grøstad 
wrote:
 But compiled Objective-C code looks "horrible" to begin with… 
 so I am not sure how well that translates to D.
Just to make it clear: ARC can make more assumptions than manual Objective-C calls to retain/release. So ARC being "surprisingly fast" relative to manual RC might be due to getting rid of Objective-C inefficiencies caused by explicit calls to retain/release rather than ARC being an excellent solution. YMMV.
Apr 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 1:46 PM, "Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:
 Apple has put a lot of resources into ARC. How much slower than manual RC
 varies, some claim as little as 10%, others 30%, 50%, 100%.
That pretty much kills it, even at 10%.
Apr 17 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 22:10:00 UTC, Walter Bright wrote:
 That pretty much kills it, even at 10%.
It probably is better than C++ shared_ptr though... D can probably do better than Objective-C with whole program compilation, since the dynamic aspects of Objective-C methods are challenging.
Apr 17 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
1986 - Brad Cox and Tom Love create Objective-C, announcing "this 
language has all the memory safety of C combined with all the 
blazing speed of Smalltalk." Modern historians suspect the two 
were dyslexic.

( 
http://james-iry.blogspot.no/2009/05/brief-incomplete-and-mostly-wrong.html 
)
Apr 17 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 18:35, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 4/16/2014 8:13 PM, Manu via Digitalmars-d wrote:

 On 17 April 2014 03:37, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:
     ARC has very serious problems with bloat and performance.
 This is the first I've heard of it, and I've been going on about it for
 ages.
Consider two points: 1. I can't think of any performant ARC systems.
Consensus is that there's no performant GC available to D either. Can you show that Obj-C suffers a serious performance penalty from its ARC system? Have there been comparisons? 2. Java would be a relatively easy language to implement ARC in. There's
 probably a billion dollars invested in Java's GC. Why not ARC?
Java is designed to be GC compatible from the ground up. D is practically incompatible with GC in the same way as C, but it was shoe-horned in there anyway. Everyone that talks about fantasy 'awesome-GC's admits it would be impossible to implement in D for various reasons. I think Rainer's GC is a step forward; precision is definitely valuable, but benchmarks showed that it was slower. That didn't put anyone off though. I think it was generally agreed that precision trumped a small performance impact. Obviously, a critical part of ARC is the compiler's ability to reduce
 redundant
 inc/dec sequences. At which point your 'every time' assertion is false.
 C++
 can't do ARC, so it's not comparable.
C++ has shared_ptr, with all kinds of escapes.
That's a library. The compiler knows nothing. Ref counting is useless without compiler support. With proper elimination, transferring ownership results in no cost, only
 duplication/destruction, and those are moments where I've deliberately
 committed
 to creation/destruction of an instance of something, at which point I'm
 happy to
 pay for an inc/dec; creation/destruction are rarely high-frequency
 operations.
inc/dec isn't as cheap as you imply. The dec usually requires the creation of an exception handling unwinder to do it.
Why do you need to do that? The thing is, it's potentially 'workable'. It's also easy to rearrange and factor out of hot code. It's predictable and well understood. Have you measured the impact?

 No. I don't really know how I could, as I haven't seen an ARC system.
Why do you have such a strong opposition if this is the case? I've never heard of Obj-C users complaining about the inc/dec costs.

 Obj-C only uses ARC for a minority of the objects.
But you're always talking about how D creates way less garbage than other languages, which seems to be generally true. It needs to be tested before you can make presumptions about performance. D offers some unique opportunities to improve on existing ARC implementations, and combined with D's relatively low garbage output, you might be surprised... How often does ref fiddling occur in reality? My guess is that with
 redundancy
 elimination, it would be surprisingly rare, and insignificant.
Yes, I would be surprised.
Well, I'd like to see it measured in practise. But most common scenarios I imagine appear like they'd eliminate nicely. pure, and perhaps proper escape analysis (planned?) offer great opportunity for better elimination than other implementations like Obj-C. Further problems with ARC are inability to mix ARC references with
 non-ARC
     references, seriously hampering generic code.
 That's why the only workable solution is that all references are ARC
 references.
 The obvious complication is reconciling malloc pointers, but I'm sure
 this can
 be addressed with some creativity.

 I imagine it would look something like:
 By default, pointers are fat: struct ref { void* ptr; ref_t* rc; }
First off, now pointers are 24 bytes in size. Secondly, every pointer dereference becomes two dereferences (not so good for cache performance).
That's not necessarily true. What length is the compiler typically able to eliminate inc/dec pairs? How many remain in practise? We don't know. The performance is to be proven. Under this approach, you'd bunch references up close together, so there's a higher than usual probability the rc will be in cache already. I agree, it's theoretically a problem, but I have no evidence to show that it's a deal breaker. In lieu of any other options, it's worth exploring. malloc pointers could conceivably just have a null entry for 'rc' and
 therefore
 interact comfortably with rc pointers.
 I imagine that a 'raw-pointer' type would be required to refer to a thin
 pointer. Raw pointers would implicitly cast to fat pointers, and a
 fat->thin
 casts may throw if the fat pointer's rc is non-null, or compile error if
 it can
 be known at compile time.
Now we throw in a null check and branch for pointer operations.
Well the alternative is to distinguish them in the type system. Without making any language changes, I guess it's a requirement. It would be completely predictable though, so it only amounts to a couple of cycles. Perhaps a solution is possible where an explicit rc record is not required
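(For concreteness, the fat-pointer scheme under discussion might look something like this library sketch; every name here is hypothetical.)

import core.stdc.stdlib : free;

struct FatRef(T)
{
    T* ptr;      // the referent: two words per reference instead of one
    size_t* rc;  // refcount block; null when ptr came from plain malloc

    this(this)
    {
        if (rc) ++*rc;   // the null check/branch paid on every copy
    }

    ~this()
    {
        if (rc && --*rc == 0)
        {
            free(ptr);   // managed object dies deterministically
            free(rc);
        }
    }
}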
 (such
 that all pointers remain 'thin' pointers)...
 A clever hash of the pointer itself can look up the rc?
 Perhaps the rc can be found at ptr[-1]? But then how do you know if the
 pointer
 is rc allocated or not? An unlikely sentinel value at ptr[-1]? Perhaps the
 virtual memory page can imply whether pointers allocated in that region
 are ref
 counted or not? Some clever method of assigning the virtual address space
 so
 that recognition of rc memory can amount to testing a couple of bits in
 pointers?

 I'm just making things up,
Yes.
I'm sure there are plenty of cool/clever tricks that may help. but my point is, there are lots of creative
 possibilities, and I have never seen any work to properly explore the
 options.
ARC has been known about for many decades. If you haven't seen it "properly explored", perhaps it isn't as simple and cost-effective as it may appear at first blush.
I didn't say it was simple or cost effective. I don't know. I'm asking, is it *possible* and would it work well? I want to know why it's not possible, and if it is, then promote exploration as a potential solution in D. So then consider ARC seriously. If it can't work, articulate why. I still
 don't
 know, nobody has told me.
 It works well in other languages, and as far as I can tell, it has the
 potential
 to produce acceptable results for _all_ D users.
What other languages?
Well, Apple are the obvious demonstration that I'm familiar with. I haven't worked with any others, but they have been raised by other people in prior threads. iOS is a competent realtime platform, Apple are well known for their
 commitment
 to silky-smooth, jitter-free UI and general feel.
A UI is a good use case for ARC. A UI doesn't require high performance.
Nothing that requires high-frequency performance is a good case for managed memory at all. You can't invoke a system allocator, no matter how it's implemented, at the sort of frequency I think you're suggesting.

Apple demonstrate that direct application of ARC results in silky smooth performance. That's hard to do, and it's throughout all user facing API's, not just UI, so it's clearly not inhibiting that goal substantially. I haven't heard iOS programmers complain about it. I have heard Android programmers complain about the GC extensively though.

Like I say, programmers would quickly learn the patterns (since they are predictable and reliable), and it appears that D offers substantially more opportunity for effective ref-fiddling-elimination than Obj-C. Okay. Where can I read about that? It doesn't seem to have surfaced, at
 least,
 it was never presented in response to my many instances of raising the
 topic.
 What are the impasses?
I'd have to go look to find the thread. The impasses were as I pointed out here.
I don't feel like you pointed any out. Just FUD. I'm very worried about this. ARC is the only imaginary solution I have
 left. In
 lieu of that, we make a long-term commitment to a total fracturing of
 memory
 allocation techniques, just like C++ today where interaction between
 libraries
 is always a massive pain in the arse. It's one of the most painful things
 about
 C/C++, and perhaps one of the primary causes of incompatibility between
 libraries and frameworks. This will transfer into D, but it's much worse
 in D
 because of the relatively high number of implicit allocations ('~',
 closures, etc).
There are only about 3 cases of implicit allocation in D, all easily avoided, and with nogc they'll be trivial to avoid. It is not "much worse".
But they are fundamentally useful and convenient features though. I don't want to have to emplace policy to ban them. It separates a subset of D users as second class citizens that can't enjoy the modern features of the language, and restricts their access to libraries. The thing is, these policies must become system-wide. GC must be banned everywhere to have any effect. Under a GC, even the low frequency code loses these conveniences offered by the D language, and you're creating a situation where libraries become very hard to trust. Frameworks and libraries become incompatible with each other, which is a
 problem

 suffer.
A GC makes libraries compatible with each other, which is one reason why GCs are very popular.
Correct, and ARC as a form of GC would equally be applicable everywhere. That's my point. By sticking with a crappy GC, you're isolating an (important?) subset of D users into a world where we suffer annoying C++ patterns yet longer, and still can't depend on interoperability with useful libraries. These are critical failings of C++ by my measure, and foundational to my attraction to D in the first place.

I don't feel like you've given me any evidence that ARC is not feasible. Just that you're not interested in trying it out. Please, kill it technically. Not just with dismissal and FUD.

I fear that @nogc is a sort of commitment to the notion that the GC is here to stay, will never be improved or changed, and that we are being boxed into a world no different from C++. I'm not here because I want C++ with better syntax, I'm here because I want to encourage the language that best embodies the future of my industry. I've put decades into manual memory management. I'm tired of it. So are all other languages, which may be compatible with modern casual games, but not with major titles or console/embedded titles. Nobody wants to work with C++ anymore.
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 2:46 AM, Manu via Digitalmars-d wrote:
 Consensus is that there's no performant GC available to D either.
Switching from one non-performant system to another won't help matters.
 Can you show that Obj-C suffers a serious performance penalty from its ARC
 system? Have there been comparisons?
O-C doesn't use ARC for all pointers, nor is it memory safe.
 Java is designed to be GC compatible from the ground up. D is practically
 incompatible with GC in the same way as C, but it was shoe-horned in there
anyway.
This isn't quite correct. I implemented a GC for Java back in the 90's. D has semantics that are conducive to GC that C doesn't have, and this was done based on my experience with GC's. To wit, objects can't have internal references, so that a moving collector can be implemented.
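(An example of the internal reference being ruled out; note Steven's correction below that the restriction is about structs.)

struct S
{
    int x;
    int* p;  // an internal reference if p ever points at this instance's x
}

void f()
{
    S s;
    s.p = &s.x;  // now relocating s, as a compacting collector would,
}                // leaves s.p dangling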
 Everyone that talks about fantasy 'awesome-GC's admits it would be impossible
to
 implement in D for various reasons.
The same goes for fantasy ARC :-)
     inc/dec isn't as cheap as you imply. The dec usually requires the creation
     of an exception handling unwinder to do it.
 Why do you need to do that?
Because if a function exits via a thrown exception, the dec's need to happen.
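(Roughly the lowering Walter describes; makeObj and release are stand-ins for the compiler-inserted ARC operations.)

class Obj {}
Obj makeObj() { return new Obj; }
void g() { }           // may throw in general
void release(Obj) { }  // stand-in for the compiler's dec

void f()
{
    auto o = makeObj();  // conceptually: inc
    try
    {
        g();             // if this throws...
    }
    finally
    {
        release(o);      // ...the dec still has to run, hence the
    }                    // exception handling unwinder
}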
 Why do you have such a strong opposition if this is the case?
Because I know what kind of code will have to be generated for it.
 But you're always talking about how D creates way less garbage than other
 languages, which seems to be generally true. It needs to be tested before you
 can make presumptions about performance.
ARC, in order to be memory safe, would have to be there for ALL pointers, not just allocated objects.
 Well, I'd like to see it measured in practise. But most common scenarios I
 imagine appear like they'd eliminate nicely.
 pure, and perhaps proper escape analysis (planned?) offer great opportunity for
 better elimination than other implementations like Obj-C.
If you're not aware of the exception handler issue, then I think those assumptions about performance are unwarranted. Furthermore, if we implement ARC and it is way too slow, then D simply loses its appeal. We could then spend the next 5 years attempting to produce a "sufficiently smart compiler" to buy that performance back, and by then it will be far too late.
     First off, now pointers are 24 bytes in size. Secondly, every pointer
     dereference becomes two dereferences (not so good for cache performance).
 That's not necessarily true. What length is the compiler typically able to
 eliminate inc/dec pairs? How many remain in practise? We don't know.
Yes, we don't know. We do know that we don't have an optimizer that will do that, and we know that GDC and LDC won't do it either, because those optimizers are designed for C++, not ARC.
 The performance is to be proven. Under this approach, you'd bunch references up
 close together, so there's a higher than usual probability the rc will be in
 cache already.
 I agree, it's theoretically a problem, but I have no evidence to show that it's
 a deal breaker. In lieu of any other options, it's worth exploring.
I'm just stunned you don't find the double indirection of rc a problem, given your adamant (and correct) issues with virtual function call dispatch.
 Well the alternative is to distinguish them in the type system.
That's a dramatic redesign of D.
 I don't feel like you've given me any evidence that ARC is not feasible. Just
 that you're not interested in trying it out.
 Please, kill it technically. Not just with dismissal and FUD.
I've given you technical reasons. You don't agree with them, that's ok, but doesn't mean I have not considered your arguments, all of which have come up before. See the thread for the previous discussion on this. It's not like I haven't tried. http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com
Apr 18 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 18 Apr 2014 16:40:06 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 This isn't quite correct. I implemented a GC for Java back in the 90's.  
 D has semantics that are conducive to GC that C doesn't have, and this  
 was done based on my experience with GC's. To wit, objects can't have  
 internal references, so that a moving collector can be implemented.
This isn't correct. Only structs cannot have internal references. Objects don't move or copy so easily. Not only that, but internal pointers would not prevent a moving GC.
 I don't feel like you've given me any evidence that ARC is not
 feasible. Just
 that you're not interested in trying it out.
 Please, kill it technically. Not just with dismissal and FUD.
I've given you technical reasons. You don't agree with them, that's ok, but doesn't mean I have not considered your arguments, all of which have come up before. See the thread for the previous discussion on this. It's not like I haven't tried. http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com
Note, there are much more practical reasons to enable reference counting -- interoperating natively with Objective-C and iOS/MacOS. -Steve
Apr 18 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 1:58 PM, Steven Schveighoffer wrote:
 This isn't correct. Only structs cannot have internal references. Objects don't
 move or copy so easily.
Objects can't have internal references either, for the same reason.
 Not only that, but internal pointers would not prevent a moving GC.
They would just make it much more costly, as one would have to detect them at runtime.
 Note, there are much more practical reasons to enable reference counting --
 interoperating natively with Objective-C and iOS/MacOS.
Right, but that isn't pervasive ARC.
Apr 18 2014
prev sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-18 20:40:06 +0000, Walter Bright <newshound2 digitalmars.com> said:

 O-C doesn't use ARC for all pointers, nor is it memory safe.
@safe would be very easy to implement in Objective-C now that ARC is there.

This has got me thinking. Ever heard "C is the new assembly"? I think this describes very well the relation between C and Objective-C in most Objective-C programs today.

Objective-C enables ARC by default for all pointers to Objective-C objects. Since virtually all Objective-C APIs deal with Objective-C objects (or integral values), if you limit yourself to Objective-C APIs you're pretty much memory-safe.

When most people write Objective-C programs, they use exclusively Objective-C APIs (that deal with Objective-C objects and integrals, thus memory-safe), except for the few places where performance is important (tight loops, specialized data structures) or where Objective-C APIs are not available. You can mix and match C and Objective-C code, so no clear boundary separates the two, but that doesn't mean there couldn't be one. Adding a @safe function attribute to Objective-C that'd prevent you from touching a non-managed pointer is clearly something I'd like to see in Objective-C. Most Objective-C code I know could already be labeled @safe with no change. Only a small fraction would have to be updated or left unsafe. Silly me, here I am discussing an improvement proposal for Objective-C in a D forum!

The point being, D could have managed and unmanaged pointers (like Objective-C with ARC has), make managed pointers the default, and let people escape pointer management if they want to inside @system/@trusted functions. One way it could be done is by tagging specific pointers with some attribute to make them explicitly not managed (what __unsafe_unretained is for in Objective-C). Perhaps the whole function could be tagged too. But you won't need this in general, only when optimizing a tight loop or something similar where performance really counts.

Whether that's the path D should take, I don't know.

-- Michel Fortin michel.fortin michelf.ca http://michelf.ca
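(A rough D rendering of that idea; Managed and unsafeRaw are hypothetical stand-ins for a compiler-managed pointer and the __unsafe_unretained escape.)

struct Managed(T)
{
    T* payload;
    size_t* rc;

    this(this) { if (rc) ++*rc; }  // copy = retain
    ~this()    { if (rc) --*rc; }  // destroy = release (freeing elided here)

    T* unsafeRaw() @system { return payload; }  // the explicit opt-out
}

void hotLoop(ref Managed!int m) @system  // only unsafe code may strip management
{
    int* raw = m.unsafeRaw();
    // ... tight loop over raw, with no retain/release traffic ...
}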
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 3:02 PM, Michel Fortin wrote:
 Objective-C enables ARC by default for all pointers to Objective-C objects.
 Since virtually all Objective-C APIs deal with Objective-C objects (or integral
 values), if you limit yourself to Objective-C APIs you're pretty much
memory-safe.
"pretty much" isn't really what we're trying to achieve with safe.
 The point being, D could have managed and unmanaged pointers (like Objective-C
 with ARC has), make managed pointers the default, and let people escape pointer
 management if they want to inside @system/@trusted functions.
Yeah, it could, and the design of D has tried really hard to avoid such. "Managed C++" was a colossal failure. I've dealt with systems with multiple pointer types before (16 bit X86) and I was really, really happy to leave that **** behind.
Apr 18 2014
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 18 Apr 2014 16:48:43 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/18/2014 3:02 PM, Michel Fortin wrote:
 Objective-C enables ARC by default for all pointers to Objective-C  
 objects.
 Since virtually all Objective-C APIs deal with Objective-C objects (or  
 integral
 values), if you limit yourself to Objective-C APIs you're pretty much  
 memory-safe.
"pretty much" isn't really what we're trying to achieve with safe.
 The point being, D could have managed and unmanaged pointers (like  
 Objective-C
 with ARC has), make managed pointers the default, and let people escape  
 pointer
 management if they want to inside @system/@trusted functions.
Yeah, it could, and the design of D has tried really hard to avoid such. "Managed C++" was a colossal failure. I've dealt with systems with multiple pointer types before (16 bit X86) and I was really, really happy to leave that **** behind.
Managed C++ was a colossal failure due to its extreme verbosity. C++/CLI and C++/CX are much tighter and more importantly NOT failures. I use C++/CLI in production code, and yes, you have to be careful, it's easy to get wrong, but it does work. Note that I am not advocating this for D and I think that avoiding it is the correct approach. But it wasn't a failure once the more obvious design flaws got worked out. -- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Apr 18 2014
prev sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-18 23:48:43 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 4/18/2014 3:02 PM, Michel Fortin wrote:
 Objective-C enables ARC by default for all pointers to Objective-C objects.
 Since virtually all Objective-C APIs deal with Objective-C objects (or integral
 values), if you limit yourself to Objective-C APIs you're pretty much 
 memory-safe.
"pretty much" isn't really what we're trying to achieve with safe.
A lot of D code is memory safe too, but not all. Is D memory-safe? Yes, if you limit yourself to the @safe subset (and avoid the few holes remaining in it). Same thing for Objective-C: there exists a subset of the language that is memory safe, and pretty much everyone limits themselves to that subset already, unless there's a reason to go lower-level and use C. In other words, unmanaged pointers are the assembler of Objective-C. It's unsafe and error prone, but it lets you optimize things when the need arises.
 The point being, D could have managed and unmanaged pointers (like Objective-C
 with ARC has), make managed pointers the default, and let people escape pointer
 management if they want to inside @system/@trusted functions.
Yeah, it could, and the design of D has tried really hard to avoid such. "Managed C++" was a colossal failure. I've dealt with systems with multiple pointer types before (16 bit X86) and I was really, really happy to leave that **** behind.
Yet, there's C++ and its proliferation of library-implemented managed pointer types (shared_ptr, unique_ptr, weak_ptr, scoped_ptr, and various equivalents in libraries). Whether they're a success or a patch for shortcomings in the language, they're used everywhere despite their various mutually incompatible forms and being leaky and arcane to use.

And if I'm not mistaken, this is where the @nogc subset of D is headed. Already, and with good reason, people are suggesting using library-managed pointers (such as RefCounted) as a substitute for raw pointers in @nogc code. That doesn't automatically make @nogc a failure — C++ is a success after all — but it shows that you can't live in a modern world without managed pointers. If multiple pointer types really doom a language (your theory) then the @nogc D subset is doomed too.

Yet, ARC-managed pointers are a huge success in Objective-C. I think the trick is to not bother people with various pointer types in regular code. Just make sure the default pointer type works everywhere in higher-level code, and then provide clear ways to escape that management and work at a lower level when you need to optimize a function or interface with external C code.

D thrives with raw pointers only because its GC implementation happens to manage raw pointers. That's a brilliant idea that makes things simpler, but it also compromises performance at other levels. I don't think there is a way out of that performance issue keeping raw pointers the default, even though I'd like to be proven wrong.

-- Michel Fortin michel.fortin michelf.ca http://michelf.ca
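(For reference, the kind of library-managed pointer being suggested; std.typecons.RefCounted is real Phobos, usage sketched from memory.)

import std.typecons : RefCounted;

struct Payload { int value; }

void f()
{
    auto a = RefCounted!Payload(42);  // payload allocated with malloc, count = 1
    auto b = a;                       // count = 2, no GC involvement
    b.value = 7;                      // alias this forwards to the shared payload
}                                     // count reaches 0 here; freed deterministically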
Apr 19 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 19 April 2014 at 13:34:13 UTC, Michel Fortin wrote:
 Yet, ARC-managed pointers are a huge success in Objective-C. I 
 think the trick is to not bother people with various pointer 
 types in regular code.
But you have to take the following into account:

1. Objective-C has a separate type for RC objects (although you have a toll-free bridge for some of CF).

2. Objective-C started out with inefficient manual RC, then introduced some restrictions when adding ARC that removed part of the overhead, so the ARC overhead is less noticeable.

ARC isn't trivial to implement: http://clang.llvm.org/docs/AutomaticReferenceCounting.html
 D thrives with raw pointers only because its GC implementation 
 happens to manage raw pointers. That's a brillant idea that 
 makes things simpler, but this also compromises performance at 
 other levels. I don't think there is a way out of that 
 performance issue keeping raw pointers the default, even though 
 I'd like to be proven wrong.
Depends on how you look at it. GC does not really have a horrible performance issue; it has a terrible latency issue. If you can put everything that is latency sensitive into separate units, then having background collection isn't all that bad. That is ok if you only read from the GC heap in real time and write into non-GC buffers in real time (or have a backdoor into the GC heap during collection).

If you can establish isolates of some sort (with multiple threads), then you can segment GC and reduce latency. If you take a graph that is 100% immutable, then you can GC-handle that graph as a single object. So, if you get semantics for "freezing" graphs (conversion to immutable) then you probably can cut down on collection time too.

As the gap between memory bus speed and memory capacity increases, more and more memory will stay "mostly untouched". There are obviously opportunities for optimizing a GC for that, but you need the right semantics, semantics beyond const-types.

Surely, you can have both GC and acceptable performance. I agree with Paulo Pinto on that point. But not with C-like semantics.
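(The pattern being described, sketched in D. This assumes a collector that lets real-time code read frozen data while only mutators of the GC heap need to coordinate, which is speculative.)

struct Node { int value; Node* next; }

__gshared immutable(Node)* graph;  // frozen graph: GC-owned, but the
                                   // real-time thread only reads it

void audioCallback(float* outBuf, size_t n)
{
    // latency-sensitive path: no allocation, no writes to the GC heap;
    // outBuf would come from malloc or the audio driver, not the GC
    auto node = graph;
    foreach (i; 0 .. n)
        outBuf[i] = node ? node.value : 0;
}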
Apr 19 2014
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 16 Apr 2014 04:50:51 -0700, Manu via Digitalmars-d  
<digitalmars-d puremagic.com> wrote:

 I am convinced that ARC would be acceptable, and I've never heard anyone
 suggest any proposal/fantasy/imaginary GC implementation that would be
 acceptable...
 In complete absence of a path towards an acceptable GC implementation,  
 I'd
 prefer to see people that know what they're talking about explore how
 refcounting could be used instead.
 GC backed ARC sounds like it would acceptably automate the circular
 reference catching that people fuss about, while still providing a  
 workable
 solution for embedded/realtime users; disable(/don't link) the backing  
 GC,
 make sure you mark weak references properly.
I'm just going to leave this here. I mentioned it previously in a debate over ARC vs. GC but I couldn't find the link at the time.

http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

The paper is pretty math heavy. Long story short, tracing vs. ref-counting are algorithmic duals and therefore do not significantly differ. My read of the article is that all the different GC styles do is push the cost somewhere else. ARC may in fact be the most advantageous for a specific use case, but that in no way means that all use cases will see a performance improvement; in all likelihood, some may see a decrease in performance.

That makes ARC a specialization for a certain type of programming, which would then remove D from the "Systems" category and place it in a "Specialist" category. One could argue that due to the currently non-optional status of the GC, D is already a "Specialist" language, and I would be hard pressed to argue against that. @nogc removes the shackles of the GC from the language and thus brings it closer to the definition of "Systems". @nogc allows programmers to revert to C-style resource management without enforcing a specialized RM system, be it GC or ARC. @nogc might not make you run through the fields singing D's praises, but it is entirely consistent with the goals and direction of D.

-- Adam Wilson GitHub/IRC: LightBender Aurora Project Coordinator
Apr 16 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 3:42 PM, Adam Wilson wrote:
 ARC may in fact be the most advantageous for a specific use case, but that in
no
 way means that all use cases will see a performance improvement, and in all
 likelihood, may see a decrease in performance.
Right on. Pervasive ARC is very costly, meaning that one will have to define alongside it all kinds of schemes to mitigate those costs, all of which are expensive for the programmer to get right.
Apr 16 2014
next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-16 23:20:07 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 4/16/2014 3:42 PM, Adam Wilson wrote:
 ARC may in fact be the most advantageous for a specific use case, but 
 that in no
 way means that all use cases will see a performance improvement, and in all
 likelihood, may see a decrease in performance.
Right on. Pervasive ARC is very costly, meaning that one will have to define alongside it all kinds of schemes to mitigate those costs, all of which are expensive for the programmer to get right.
It's not just ARC. As far as I know, most GC algorithms require some action to be taken when changing the value of a pointer. If you're seeing this as unnecessary bloat, then there's not much hope in a better GC for D either. But beyond that I wonder if nogc won't entrench that stance even more. Here's the question: is assigning to a pointer allowed in a nogc function? Of course it's allowed! Assigning to a pointer does not involve the GC in its current implementation... but what if another GC implementation to be used later needs something to be done every time a pointer is modified, is this "something to be done" allowed in a nogc function? -- Michel Fortin michel.fortin michelf.ca http://michelf.ca
Apr 16 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 5:06 PM, Michel Fortin wrote:
 It's not just ARC. As far as I know, most GC algorithms require some action to
 be taken when changing the value of a pointer. If you're seeing this as
 unnecessary bloat, then there's not much hope in a better GC for D either.
Yeah, those are called write gates. The write gate is used to tell the GC that "I wrote to this section of memory, so that bucket is dirty now." They're fine in a language without pointers, but I just don't see how one could write fast loops using pointers with write gates.
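(A generic sketch of such a write gate; markDirty is hypothetical, not any actual GC runtime API.)

void markDirty(void** slot)
{
    // set a bit in a card table covering slot's address range,
    // so the collector rescans only dirty regions
}

void storePointer(void** slot, void* value)
{
    markDirty(slot);  // the write gate, paid on every pointer store
    *slot = value;    // the actual store
}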
 But beyond that I wonder if @nogc won't entrench that stance even more. Here's
 the question: is assigning to a pointer allowed in a @nogc function? Of course
 it's allowed! Assigning to a pointer does not involve the GC in its current
 implementation... but what if another GC implementation to be used later needs
 something to be done every time a pointer is modified, is this "something to be
 done" allowed in a @nogc function?
It would have to be.
Apr 16 2014
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 10:06, Michel Fortin via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 2014-04-16 23:20:07 +0000, Walter Bright <newshound2 digitalmars.com>
 said:

  On 4/16/2014 3:42 PM, Adam Wilson wrote:
 ARC may in fact be the most advantageous for a specific use case, but
 that in no
 way means that all use cases will see a performance improvement, and in
 all
 likelihood, may see a decrease in performance.
Right on. Pervasive ARC is very costly, meaning that one will have to define alongside it all kinds of schemes to mitigate those costs, all of which are expensive for the programmer to get right.
It's not just ARC. As far as I know, most GC algorithms require some action to be taken when changing the value of a pointer. If you're seeing this as unnecessary bloat, then there's not much hope in a better GC for D either.
Indeed. But beyond that I wonder if @nogc won't entrench that stance even more. This is *precisely* my concern. I'm really worried about this.
Apr 16 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 09:20, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 4/16/2014 3:42 PM, Adam Wilson wrote:

 ARC may in fact be the most advantageous for a specific use case, but
 that in no
 way means that all use cases will see a performance improvement, and in
 all
 likelihood, may see a decrease in performance.
Right on. Pervasive ARC is very costly, meaning that one will have to define alongside it all kinds of schemes to mitigate those costs, all of which are expensive for the programmer to get right.
GC is _very_ costly. From my experience comparing iOS and Android, it's clear that GC is vastly more costly and troublesome than ARC. What measure do you use to make that assertion? You're also making a hidden assertion that the D GC will never improve, since most GC implementations require some sort of work similar to ref fiddling anyway...
Apr 16 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 17 April 2014 at 04:19:00 UTC, Manu via
Digitalmars-d wrote:
 On 17 April 2014 09:20, Walter Bright via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:

 On 4/16/2014 3:42 PM, Adam Wilson wrote:

 ARC may in fact be the most advantageous for a specific use 
 case, but
 that in no
 way means that all use cases will see a performance 
 improvement, and in
 all
 likelihood, may see a decrease in performance.
Right on. Pervasive ARC is very costly, meaning that one will have to define alongside it all kinds of schemes to mitigate those costs, all of which are expensive for the programmer to get right.
GC is _very_ costly. From my experience comparing iOS and Android, it's clear that GC is vastly more costly and troublesome than ARC. What measure do you use to make that assertion? You're also making a hidden assertion that the D GC will never improve, since most GC implementations require some sort of work similar to ref fiddling anyway...
Except Dalvik's GC sucks, because it has hardly been improved since Android 2.3 and is very simple compared to any other commercial JVM for embedded scenarios, for example Jamaica JVM https://www.aicas.com/cms/. Even Windows Phone's .NET GC is better, and additionally .NET is compiled to native code on the store. There is a reason why Dalvik is being replaced by ART. -- Paulo
Apr 16 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 06:56:11 UTC, Paulo Pinto wrote:
 There is a reason why Dalvik is being replaced by ART.
AoT compilation? Btw, AFAIK the GC is deprecated for Objective-C from OS X 10.8. The App Store requires apps to be GC free... Presumably for good reasons.
Apr 17 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 17 April 2014 at 08:05:42 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 06:56:11 UTC, Paulo Pinto wrote:
 There is a reason why Dalvik is being replaced by ART.
AoT compilation?
Not only that. Dalvik was left to bit rot and has hardly seen any updates since 2.3.
 Btw, AFAIK the GC is deprecated for Objective-C from OS-X 10.8. 
 Appstore requires apps to be GC free... Presumably for good 
 reasons.
Because Apple sucks at implementing GCs. It was not possible to mix binary libraries compiled with GC enabled with ones compiled with it disabled. I already mentioned this multiple times here and can hunt down the posts with the respective links if you like. The forums were full of crash descriptions.

Their ARC solution is based on Cocoa patterns and only applies to Cocoa and other Objective-C frameworks with the same lifetime semantics. Basically, the compiler inserts the appropriate [... retain] / [... release] calls in the places where an Objective-C programmer is expected to write them by hand. Additionally, a second pass removes redundant invocation pairs. This way there are no interoperability issues between compiled libraries, as from the point of view of the generated code there is no difference other than the optimized calls.

Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".

-- Paulo
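(Sketched in D-style pseudo-code, the transformation described above; retain/release stand in for objc_retain/objc_release.)

class Widget {}
Widget getWidget() { return new Widget; }
void use(Widget) { }
void retain(Widget) { }   // stand-in for objc_retain
void release(Widget) { }  // stand-in for objc_release

void example()
{
    auto w = getWidget();
    retain(w);   // inserted by the compiler where the programmer would
    use(w);      //   have written [w retain] by hand
    release(w);  // inserted at end of scope
    // the second pass sees a retain/release pair with no intervening
    // escape and removes both calls
}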
Apr 17 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
 Of course it was sold at WWDC as "ARC is better than GC" and 
 not as "ARC is better than the crappy GC implementation we have 
 done".
I have never seen a single instance of a GC based system doing anything smooth in the realm of audio/visual real time performance without being backed by a non-GC engine. You can get decent performance from GC backed languages on the higher level constructs on top of a low level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC. GC is a bit more convenient and less predictable.

I think D has something to learn from this:

1. Support for manual memory management is important for low level engines.

2. Support for automatic memory management is important for high level code on top of that.

The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out.

Maybe it is better to just say that structs are bound to manual memory management and classes are bound to automatic memory management. Use structs for low level stuff with manual memory management. Use classes for high level stuff with automatic memory management. Then add language support for "union-based inheritance" in structs with a special construct for programmer-specified subtype identification. That is at least conceptually easy to grasp and the type system can more easily safeguard code than in a mixed model.

Most successful frameworks that allow high-level programming have two layers:

- Python / heavy-duty C libraries
- JavaScript / browser engine
- Objective-C/C and Cocoa / Core Foundation
- ActionScript / C engine

etc.

I personally favour the more integrated approach that D appears to be aiming for, but I am somehow starting to feel that for most programmers that model is going to be difficult to grasp in real projects, conceptually. Because they don't really want the low level stuff. And they don't want to have their high level code bastardized by low level requirements.

As far as I am concerned D could just focus on the structs and the low level stuff, and then later try to work in the high level stuff. There is no efficient GC in sight and the language has not been designed for it either. ARC with whole-program optimization fits better into the low-level paradigm than GC. So if you start from low-level programming and work your way up to high-level programming then ARC is a better fit.

Ola.
Apr 17 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 17 April 2014 at 08:52:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
 Of course it was sold at WWDC as "ARC is better than GC" and 
 not as "ARC is better than the crappy GC implementation we 
 have done".
I have never seen a single instance of a GC based system doing anything smooth in the realm of audio/visual real time performance without being backed by a non-GC engine. You can get decent performance from GC backed languages on the higher level constructs on top of a low level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC. GC is a bit more convenient and less predictable. I think D has something to learn from this: 1. Support for manual memory management is important for low level engines. 2. Support for automatic memory management is important for high level code on top of that. The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out. Maybe it is better to just say that structs are bound to manual memory management and classes are bound to automatic memory management. Use structs for low level stuff with manual memory management. Use classes for high level stuff with automatic memory management. Then add language support for "union-based inheritance" in structs with a special construct for programmer-specified subtype identification. That is at least conceptually easy to grasp and the type system can more easily safeguard code than in a mixed model. Most successful frameworks that allow high-level programming have two layers: - Python/heavy duty c libraries - Javascript/browser engine - Objective-C/C and Cocoa / Core Foundation - ActionScript / c engine etc I personally favour the more integrated approach that D appears to be aiming for, but I am somehow starting to feel that for most programmers that model is going to be difficult to grasp in real projects, conceptually. Because they don't really want the low level stuff. And they don't want to have their high level code bastardized by low level requirements. As far as I am concerned D could just focus on the structs and the low level stuff, and then later try to work in the high level stuff. There is no efficient GC in sight and the language has not been designed for it either. ARC with whole-program optimization fits better into the low-level paradigm than GC. So if you start from low-level programming and work your way up to high-level programming then ARC is a better fit. Ola.
Looking at the hardware specifications of usable desktop OSs built with automatic memory managed system programming languages, we have:

Interlisp, Mesa/Cedar, ARC with GC for cycle collection, running on Xerox 1132 (Dorado) and Xerox 1108 (Dandelion),
http://archive.computerhistory.org/resources/access/text/2010/06/102660634-05-05-acc.pdf

Oberon running on Ceres,
ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/1xx/070.pdf

Bluebottle, Oberon's successor, has a primitive video editor,
http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?action=download&upname=AosScreenshot1.jpg

Spin running on DEC Alpha,
http://en.wikipedia.org/wiki/DEC_Alpha

Any iOS device runs circles around those systems, hence why I always like to make clear it was Apple's failure to make a workable GC in a C based language, and not the virtues of pure ARC over pure GC.

Their solution has its merits, and as I mentioned it has the benefit of generating the same code, while relieving the developer of the pain of writing those retain/release calls themselves. A similar approach was taken by Microsoft with their C++/CX and COM integration.

So any pure GC basher now uses Apple's example, with a high probability of not knowing the technical issues behind why it came to be like that.

-- Paulo
Apr 17 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
 Any iOS device runs circles around those systems, hence why I 
 always like to make clear it was Apple's failure to make a 
 workable GC in a C based language and not the virtues of pure 
 ARC over pure GC.
I am not making an argument for pure ARC. Objective-C allows you to mix, and OS X is most certainly not pure ARC based. If we go back in time to the era you point to, even C was considered waaaay too slow for real time graphics. On the C64 and the Amiga you wrote in assembly and optimized for the hardware, e.g. using the hardware scroll register on the C64 and the copperlist (a specialized scanline-triggered processor writing to hardware registers) on the Amiga. No way you could do real time graphics in a GC backed language back then without a dedicated engine with HW support. Real time audio was done with DSPs until the mid 90s.
Apr 17 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 17 April 2014 at 09:55:38 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
 Any iOS device runs circles around those systems, hence why I 
 always like to make clear it was Apple's failure to make a 
 workable GC in a C based language and not the virtues of pure 
 ARC over pure GC.
I am not making an argument for pure ARC. Objective-C allows you to mix and Os-X is most certainly not pure ARC based. If we go back in time to the timeslot you point to even C was considered waaaay too slow for real time graphics. On the C64 and the Amiga you wrote in assembly and optimized for the hardware. E.g. using hardware scroll register on the C64 and the copperlist (a specialized scanline triggered processor writing to hardware registers) on the Amiga. No way you could do real time graphics in a GC backed language back then without a dedicated engine with HW support. Real time audio was done with DSPs until the mid 90s.
Sure, old demoscener here.
Apr 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 2:32 AM, Paulo Pinto wrote:
 Similar approach was taken by Microsoft with their C++/CX and COM integration.

 So any pure GC basher now uses Apple's example, with a high probability of not
 knowing the technical issues why it came to be like that.
I also wish to reiterate that C++'s use of COM with ref counting contains many, many escapes where the user "knows" that he can just use a pointer directly without dealing with the ref count. This is critical to making ref counting perform. But the escapes come with a huge risk of memory corruption, i.e. user mistakes.

Also, in C++ COM, relatively few of the data structures a C++ program uses will be in COM. But ARC would mean using ref counting for EVERYTHING.

Using ARC for *everything* means slowness and bloat, unless Manu's assumption that a sufficiently smart compiler could eliminate nearly all of that bloat holds.

Which I am not nearly as confident of.
Apr 17 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/17/14, 10:09 AM, Walter Bright wrote:
 On 4/17/2014 2:32 AM, Paulo Pinto wrote:
 Similar approach was taken by Microsoft with their C++/CX and COM
 integration.

 So any pure GC basher now uses Apple's example, with a high
 probability of not
 knowing the technical issues why it came to be like that.
I also wish to reiterate that GC's use of COM with ref counting contains many, many escapes where the user "knows" that he can just use a pointer directly without dealing with the ref count. This is critical to making ref counting perform. But the escapes come with a huge risk for memory corruption, i.e. user mistakes. Also, in C++ COM, relatively few of the data structures a C++ program uses will be in COM. But ARC would mean using ref counting for EVERYTHING.
As a COM programmer a long time ago, I concur.
 Using ARC for *everything* means slow and bloat, unless Manu's
 assumption that a sufficiently smart compiler could eliminate nearly all
 of that bloat is possible.

 Which I am not nearly as confident of.
Well there's been work on that. I mentioned this recent paper in this group: http://goo.gl/tavC1M, which claims RC backed by a cycle collector can reach parity with tracing. Worth a close read. Andrei
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 8:34 AM, Andrei Alexandrescu wrote:
 Well there's been work on that. I mentioned this recent paper in this group:
 http://goo.gl/tavC1M, which claims RC backed by a cycle collector can reach
 parity with tracing. Worth a close read.
A couple crucial points:

1. It achieves near parity with tracing, i.e. with a regular GC. It says nothing about performance for regular pointer code, when those pointers are replaced with ref counts.

2. It's a Java VM implementation. You can bet that the VM internally isn't using ref counting - too slow.

3. I picked a GC for D because a GC coexists peacefully with pointers of all types. This is not remotely true with ref counting. It's not an issue with Java, which has no pointers, but this coexistence problem would be a huge one for D.
Apr 18 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/18/14, 10:12 AM, Walter Bright wrote:
 On 4/18/2014 8:34 AM, Andrei Alexandrescu wrote:
 Well there's been work on that. I mentioned this recent paper in this
 group:
 http://goo.gl/tavC1M, which claims RC backed by a cycle collector can
 reach
 parity with tracing. Worth a close read.
A couple crucial points: 1. It achieves near parity with tracing, i.e. with a regular GC. It says nothing about performance for regular pointer code, when those pointers are replaced with ref counts.
This is moving the goalposts. "Regular pointer code" is unsafe, and safety has been part of your past arguments. This is tracing GC compared to refcounting, cut and dried.
 2. It's a Java VM implementation. You can bet that the VM internally
 isn't using ref counting - too slow.
Not sure what this means in context.
 3. I picked a GC for D because a GC coexists peacefully with pointers of
 all types. This is not remotely true with ref counting. It's not an
 issue with Java, which has no pointers, but this coexistence problem
 would be a huge one for D.
Agreed. Andrei
Apr 18 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 10:53 AM, Andrei Alexandrescu wrote:
 On 4/18/14, 10:12 AM, Walter Bright wrote:
 On 4/18/2014 8:34 AM, Andrei Alexandrescu wrote:
 Well there's been work on that. I mentioned this recent paper in this
 group:
 http://goo.gl/tavC1M, which claims RC backed by a cycle collector can
 reach
 parity with tracing. Worth a close read.
A couple crucial points: 1. It achieves near parity with tracing, i.e. with a regular GC. It says nothing about performance for regular pointer code, when those pointers are replaced with ref counts.
This is moving the goalposts. "Regular pointer code" is unsafe, and safety has been part of your past arguments. This is tracing GC compared to refcounting, cut and dried.
It applies equally to D arrays. Take a look at optimized array loops in D - the generated code is as good as C++ pointer style.
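For concreteness, a minimal sketch of the kind of loop in question (function names are illustrative); with bounds checks off, the slice form compiles down to the same pointer-bumping code as the C++ style, with no per-element RC or GC work:

// Idiomatic D slice loop; optimizes to a pointer walk.
double sum(const(double)[] a)
{
    double s = 0;
    foreach (x; a)   // no ref-count traffic, no GC interaction per element
        s += x;
    return s;
}

// The equivalent C++-style pointer loop, for comparison.
double sumPtr(const(double)* p, size_t n)
{
    double s = 0;
    for (const end = p + n; p != end; ++p)
        s += *p;
    return s;
}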
 2. It's a Java VM implementation. You can bet that the VM internally
 isn't using ref counting - too slow.
Not sure what this means in context.
Dogfood.
 3. I picked a GC for D because a GC coexists peacefully with pointers of
 all types. This is not remotely true with ref counting. It's not an
 issue with Java, which has no pointers, but this coexistence problem
 would be a huge one for D.
Agreed.
Phew! That's a fundamental point, and I'm glad we agree on it.
Apr 18 2014
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 18 April 2014 at 17:12:11 UTC, Walter Bright wrote:
 3. I picked a GC for D because a GC coexists peacefully with 
 pointers of all types. This is not remotely true with ref 
 counting. It's not an issue with Java, which has no pointers, 
 but this coexistence problem would be a huge one for D.
My understanding is that a more sophisticated GC will also not coexist quite so peacefully with pointers of all types. Is it not the conservativeness* of the GC that enables this coexistence? *not necessarily in the GC-jargon sense
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 10:54 AM, John Colvin wrote:
 My understanding is that a more sophisticated GC will also not coexist quite so
 peacefully with pointers of all types. Is it not the conservativeness* of the
GC
 that enables this coexistence?
Yes. Which is one reason why D doesn't emit write gates for indirect assignment.
Apr 18 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 18 April 2014 at 20:24:51 UTC, Walter Bright wrote:
 On 4/18/2014 10:54 AM, John Colvin wrote:
 My understanding is that a more sophisticated GC will also not 
 coexist quite so
 peacefully with pointers of all types. Is it not the 
 conservativeness* of the GC
 that enables this coexistence?
Yes. Which is one reason why D doesn't emit write gates for indirect assignment.
Which, if any, of the more sophisticated GC designs out there - in your opinion - would work well with D? Perhaps more importantly, which do you see as *not* working well with D.
Apr 18 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 3:10 PM, John Colvin wrote:
 On Friday, 18 April 2014 at 20:24:51 UTC, Walter Bright wrote:
 On 4/18/2014 10:54 AM, John Colvin wrote:
 My understanding is that a more sophisticated GC will also not coexist quite so
 peacefully with pointers of all types. Is it not the conservativeness* of the
GC
 that enables this coexistence?
Yes. Which is one reason why D doesn't emit write gates for indirect assignment.
Which, if any, of the more sophisticated GC designs out there - in your opinion - would work well with D? Perhaps more importantly, which do you see as *not* working well with D.
Ones that imply intrusive code gen changes won't work well with D. D is not Java, and does not have Java's every-pointer-is-a-gc-pointer semantics, not even remotely.
Apr 18 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 22:10:06 UTC, John Colvin wrote:
 Which, if any, of the more sophisticated GC designs out there - 
 in your opinion - would work well with D? Perhaps more 
 importantly, which do you see as *not* working well with D.
I think you can improve GC by:

- supporting c++ owned/shared pointers at the language level and marking them as no-scan
- aligning scanned pointers in structs and classes to the same cache line
- having scan metadata at an offset from the return address of functions
- segmented collection
- whole program analysis to figure out what individual stacks can contain
- meta level invariants specified by the programmer
Apr 18 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 18:52, via Digitalmars-d <digitalmars-d puremagic.com>wrote:

 On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:

 Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC
 is better than the crappy GC implementation we have done".
I have never seen a single instance of a GC based system doing anything smooth in the realm of audio/visual real time performance without being backed by a non-GC engine. You can get decent performance from GC backed languages on the higher level constructs on top of a low level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC. GC is a bit more convenient and less predictable. I think D has something to learn from this: 1. Support for manual memory management is important for low level engines. 2. Support for automatic memory management is important for high level code on top of that. The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out.
See, I just don't find managed memory incompatible with 'low level' realtime or embedded code, even on tiny microcontrollers in principle.

ARC would be fine in low level code, assuming the language supported it to the fullest of its abilities. I'm confident that programmers would learn its performance characteristics and be able to work effectively with it in very little time. It's well understood, and predictable. You know exactly how it works, and precisely what the costs are. There are plenty of techniques to move any ref fiddling out of your function if you identify that to be the source of a bottleneck.

I think with some care and experience, you could use ARC just as effectively as full manual memory management in the inner loops, but also gain the conveniences it offers on the periphery where the performance isn't critical. _Most_ code exists in this periphery, and therefore the importance of that convenience shouldn't be underestimated.

Maybe it is better to just say that structs are bound to manual memory
 management and classes are bound to automatic memory management.
Use structs for low level stuff with manual memory management.
 Use classes for high level stuff with automatic memory management.

 Then add language support for "union-based inheritance" in structs with a
 special construct for programmer-specified subtype identification.

 That is at least conceptually easy to grasp and the type system can more
 easily safeguard code than in a mixed model.
No. It misses basically everything that compels the change. Strings, '~', closures. D largely depends on its memory management. That's the entire reason why library solutions aren't particularly useful.

I don't want to see D evolve into another C++ where libraries/frameworks are separated or excluded by allocation practice. Auto memory management in D is a reality. Unless you want to build yourself into a fully custom box (I don't!), you have to deal with it. Any library that wasn't written by a gamedev will almost certainly rely on it, and games are huge complex things that typically incorporate lots of libraries. I've spent my entire adult lifetime dealing with these sorts of problems.

Most successful frameworks that allow high-level programming have two
 layers:
 - Python/heavy duty c libraries
 - Javascript/browser engine
 - Objective-C/C and Cocoa / Core Foundation
 - ActionScript / c engine

 etc

 I personally favour the more integrated approach that D appears to be
 aiming for, but I am somehow starting to feel that for most programmers
 that model is going to be difficult to grasp in real projects,
 conceptually. Because they don't really want the low level stuff. And they
 don't want to have their high level code bastardized by low level
 requirements.

 As far as I am concerned D could just focus on the structs and the low
 level stuff, and then later try to work in the high level stuff. There is
 no efficient GC in sight and the language has not been designed for it
 either.

 ARC with whole-program optimization fits better into the low-level
 paradigm than GC. So if you start from low-level programming and work your
 way up to high-level programming then ARC is a better fit.
The thing is, D is not particularly new, it's pretty much 'done', so there will be no radical change in direction like you seem to suggest. But I generally agree with your final points. The future is not manual memory management. But D seems to be pushing us back into that box without a real solution to this problem. Indeed, it is agreed that there is no fantasy solution via GC on the horizon... so what? Take this seriously. I want to see ARC absolutely killed dead rather than dismissed.
Apr 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 12:20:06 UTC, Manu via 
Digitalmars-d wrote:
 See, I just don't find managed memory incompatible with 'low 
 level' realtime or embedded code, even on tiny microcontrollers 
 in principle.
RC isn't incompatible with realtime, since the overhead is O(1). But it is slower than the alternatives where you want maximum performance, e.g. raytracing. And it is slower and less "safe" than GC for long running servers that have uneven loads, e.g. web services.

I think it would be useful to discuss real scenarios when discussing performance:

1. Web server request that can be handled instantly (no database lookup): small memory requirements and everything is released immediately. Best strategy might be to use a release pool (allocate incrementally and free all upon return in one go).

2. Web server, cached content-objects: lots of cycles, shared across threads. Best strategy is global GC.

3. Non-maskable interrupt: can cut into any running code at any time. No deallocation must happen, and it can only touch state that is consistent after atomic single-instruction CPU operations. Best strategy is preallocation and single-instruction atomic communication.
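A minimal sketch of the release-pool idea from scenario 1, assuming a single-threaded handler and a fixed block size (all names are made up); allocation is a pointer bump and teardown is O(1):

import core.stdc.stdlib : malloc, free;

// Bump-pointer region: allocate incrementally, release everything in one go.
struct ReleasePool
{
    private ubyte* base, cur, end;

    this(size_t capacity)
    {
        base = cur = cast(ubyte*) malloc(capacity);
        end = base + capacity;
    }

    void* alloc(size_t n)
    {
        n = (n + 15) & ~cast(size_t) 15;   // keep 16-byte alignment
        if (cur + n > end)
            return null;                   // real code would chain a new block
        auto p = cur;
        cur += n;
        return p;
    }

    void releaseAll() { cur = base; }      // drops every allocation at once
    ~this() { free(base); }
}

A request handler would carve all its temporaries out of one pool and call releaseAll() (or just let the pool go out of scope) before returning.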
 ARC would be fine in low level code, assuming the language 
 supported it to
 the fullest of it's abilities.
Yes, but that requires whole program optimization, since function calls cross compilation unit boundaries frequently.
 No. It misses basically everything that compels the change. 
 Strings, '~',
 closures. D largely depends on it's memory management.
And that is the problem. Strings can usually be owned objects. What benefits most from GC are the big complex objects that have lots of links to other objects, so you get many circular references. You usually have fewer of those. If you somehow can limit GC to precise collection of those big objects, and forbid foreign references to those, then the collection cycle could complete quickly and you could use GC for soft real time. Which most code application code is. I don't know how to do it, but global-GC-everything only works for batch programming or servers with downtime.
 Take this seriously. I want to see ARC absolutely killed dead 
 rather than dismissed.
Why is that? I can see ARC in D3 with whole program optimization. I cannot see how D2 could be extended with ARC given all the other challenges. Ola.
Apr 17 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 23:17, via Digitalmars-d <digitalmars-d puremagic.com>wrote:

 On Thursday, 17 April 2014 at 12:20:06 UTC, Manu via Digitalmars-d wrote:

 See, I just don't find managed memory incompatible with 'low level'
 realtime or embedded code, even on tiny microcontrollers in principle.
RC isn't incompatible with realtime, since the overhead is O(1). But it is slower than the alternatives where you want maximum performance. E.g. raytracing.
You would never allocate in a ray tracing loop. If you need a temp, you would use some pre-allocation strategy. This is a tiny, self-contained, and highly specialised loop, that will always have a highly specialised allocation strategy. You also don't make library calls inside a raytrace loop. And it is slower and less more "safe" than GC for long running servers that
 have uneven loads. E.g. web services.
Hey? I don't know what you mean. I think it would be useful to discuss real scenarios when discussing
 performance:

 1. Web server request that can be handled instantly (no database lookup):
 small memory requirements and everything is released immediately.

 Best strategy might be to use a release pool (allocate incrementally and
 free all upon return in one go).
Strings are the likely source of allocation. I don't think this suggests a preference from GC or ARC either way. A high-frequency webserver would use something more specialised in this case I imagine. 2. Web server, cached content-objects: lots of cycles, shared across
 threads.

 Best strategy is global GC.
You can't have web servers locking up for 10s-100s of ms at random intervals... that's completely unacceptable. Or if there is no realtime allocation, then management strategy is irrelevant. 3. Non-maskable interrupt: can cut into any running code at any time. No
 deallocation must happen and can only touch code that is consistent after
 atomic single instruction CPU operations.

 Best strategy is preallocation and single instruction atomic communication.
Right, interrupts wouldn't go allocating from the master heap. I don't think these scenarios are particularly relevant. ARC would be fine in low level code, assuming the language supported it to
 the fullest of it's abilities.
Yes, but that requires whole program optimization, since function calls cross compilation unit boundaries frequently.
D doesn't usually have compilation unit boundaries. And even if you do, assuming the source is available, it can still inline if it wants to, since the source of imported modules is available while compiling a single unit. I don't think WPO is as critical as you say. No. It misses basically everything that compels the change. Strings, '~',
 closures. D largely depends on it's memory management.
And that is the problem. Strings can usually be owned objects.
I find strings are often highly shared objects. What benefits most from GC are the big complex objects that have lots of
 links to other objects, so you get many circular references.

 You usually have fewer of those.
These tend not to change much at runtime. Transient/temporary allocations on the other hand are very unlikely to contain circular references. Also, I would mark weak references explicitly.
 Take this seriously. I want to see ARC absolutely killed dead rather than
 dismissed.
Why is that? I can see ARC in D3 with whole program optimization. I cannot see how D2 could be extended with ARC given all the other challenges.
Well it's still not clear to me what all the challenges are... that's my point. If it's not possible, I want to know WHY.
Apr 17 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 13:43:17 UTC, Manu via 
Digitalmars-d wrote:
 You would never allocate in a ray tracing loop. If you need a 
 temp, you
 would use some pre-allocation strategy.
Path-tracing is predictable, but regular ray tracing may spawn many rays per hit. So you pre-allocate a buffer, but might need to extend it. The point was: RC-per-object is unacceptable.
 And it is slower and less more "safe" than GC for long running 
 servers that have uneven loads. E.g. web services.
Hey? I don't know what you mean.
1. You can get memory leaks by not collecting cycles with RC. 2. You spend time RC accounting when you need speed and run idle when you could run GC collection. GC is faster than ARC.
 Best strategy is global GC.
You can't have web servers locking up for 10s-100s of ms at random intervals... that's completely unacceptable.
The kind of servers I write can live with occasional 100-200ms lockups. That is no worse than the time it takes to get a response from a database node with a transaction running on it. For a game server that is too much, so you would need to get down to under ~50ms, but then you also tend to run with an in-memory database that you cannot run a full GC on frequently, because of all the pointers in it.
 D doesn't usually have compilation unit boundaries.
It does if you need multiple entry/return paths. E.g. need to compile 2 different versions of a function depending on the context. You don't want a copy in each object file.
 I find strings are often highly shared objects.
Depends on how you use them. You can usually tie them to the object "they describe".
 What benefits most from GC are the big complex objects that 
 have lots of
 links to other objects, so you get many circular references.

 You usually have fewer of those.
These tend not to change much at runtime.
On the contrary, content objects are the ones that do change, both in terms of evolutionary programming (which makes it easy to miss a cycle) and at runtime. This is especially true for caching web-servers, I think.
 Also, I would mark weak references explicitly.
Which can be difficult to figure out and then you also have to deal with "unexpected null references" and the exceptions it might cause. Weak references can be useful with GC too if the semantics are right for the situation, but it is a crutch if it is only used for killing cycles.
 Well it's still not clear to me what all the challenges are... 
 that's my
 point. If it's not possible, I want to know WHY.
I think it is possible, but I also think shipping D2 as a maintained stable product should have first priority. ARC would probably set D back one year? I think that would be a bad thing. Ola.
Apr 17 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 6:43 AM, Manu via Digitalmars-d wrote:
 Well it's still not clear to me what all the challenges are... that's my point.
http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com
Apr 18 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 18:22, Paulo Pinto via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC
 is better than the crappy GC implementation we have done".
The argument is, GC is not appropriate for various classes of software. It is unacceptable. No GC that anyone has yet imagined/proposed will address this fact. ARC offers a solution that is usable by all parties. We're not making comparisons between contestants or their implementation quality here, GC is not in the race.
Apr 17 2014
next sibling parent reply "w0rp" <devw0rp gmail.com> writes:
I'm not convinced that any automatic memory management scheme 
will buy much with real time applications. Generally with 
real-time processes, you need to pre-allocate. I think GC could 
be feasible for a real-time application if the GC is precise and 
collections are scheduled, instead of run randomly. Scoped memory 
also helps.
Apr 17 2014
parent reply Byron <byron.heads gmail.com> writes:
On Thu, 17 Apr 2014 11:55:14 +0000, w0rp wrote:

 I'm not convinced that any automatic memory management scheme will buy
 much with real time applications. Generally with real-time processes,
 you need to pre-allocate. I think GC could be feasible for a real-time
 application if the GC is precise and collections are scheduled, instead
 of run randomly. Scoped memory also helps.
I thought the current GC only ran on allocations? If so nogc is *very* useful to enforce critical paths. If we added a nogcscan on blocks that do not contain pointers we may be able to reduce the collection time, though not as well as a precise collector. I would think we can get decent compiler support for this (i.e. no refs, pointers, classes, dynamic arrays).
Apr 18 2014
parent reply "Brad Anderson" <eco gnuk.net> writes:
On Friday, 18 April 2014 at 14:45:37 UTC, Byron wrote:
 On Thu, 17 Apr 2014 11:55:14 +0000, w0rp wrote:

 I'm not convinced that any automatic memory management scheme 
 will buy
 much with real time applications. Generally with real-time 
 processes,
 you need to pre-allocate. I think GC could be feasible for a 
 real-time
 application if the GC is precise and collections are 
 scheduled, instead
 of run randomly. Scoped memory also helps.
I thought the current GC only ran on allocations? If so nogc is *very* useful to enforce critical paths. If we added a nogcscan on blocks that do not contain pointers we maybe able to reduce the collection time, not as good as a precise collector. I would think we can get decent compiler support for this (ie. no refs, pointers, class, dynamic array).
You can actually prevent scanning/collection already without much difficulty:

GC.disable();
scope(exit) GC.enable();

I feel like nogc is most useful in avoiding surprises by declaring your assumptions. Problems like how toUpperInPlace would still allocate (with gusto) could much more easily be recognized and fixed with nogc available.
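For reference, that pattern in context; note that GC.disable only suppresses automatic collections (druntime may still collect on an out-of-memory condition), so it defers pauses rather than banning allocation the way nogc would:

import core.memory : GC;

void criticalPath()
{
    GC.disable();               // no automatic collections from here on
    scope(exit) GC.enable();    // restored even on early return or throw

    auto scratch = new ubyte[4096];  // still allocates from the GC heap,
                                     // but cannot trigger a pause here
    // ... time-sensitive work ...
}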
Apr 18 2014
next sibling parent Byron <byron.heads gmail.com> writes:
On Fri, 18 Apr 2014 16:17:10 +0000, Brad Anderson wrote:
 
 You can actually prevent scanning/collection already without much
 difficulty:
 
 GC.disable();
 scope(exit) GC.enable();
 
 I feel like  nogc is most useful in avoiding surprises by declaring your
 assumptions. Problems like how toUpperInPlace would still allocate (with
 gusto) could much more easily be recognized and fixed with  nogc
 available.
I am talking more about hinting to the conservative GC about blocks it doesn't need to scan for addresses.

struct Vertex { int x, y, z, w; }

nogcscan Vertex vertexs[10_000];

so when a GC scan does happen it can skip scanning the vertexs memory block completely, since we are promising not to hold on to addresses in it.
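Worth noting: druntime's GC already takes a per-block version of this hint at allocation time. A sketch (GC.BlkAttr.NO_SCAN is real; the function around it is illustrative, and the "no addresses inside" promise is still unchecked, which is exactly what a compiler-verified attribute would fix):

import core.memory : GC;

struct Vertex { int x, y, z, w; }

// Allocate a block the collector will never scan for addresses.
Vertex[] makeVertices(size_t n)
{
    auto p = cast(Vertex*) GC.malloc(Vertex.sizeof * n, GC.BlkAttr.NO_SCAN);
    return p[0 .. n];
}

(For GC-allocated arrays like new Vertex[](n) the runtime already derives this flag from the element type's TypeInfo; a static array like the one above lives outside the GC heap, where no such per-block metadata exists.)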
Apr 18 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-18 18:17, Brad Anderson wrote:

 Problems like how toUpperInPlace would still allocate (with
 gusto) could much more easily be recognized and fixed with  nogc available.
toUpperInPlace should be removed. It cannot work reliably in place. -- /Jacob Carlborg
Apr 19 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Jacob Carlborg:

 toUpperInPlace should be removed. It cannot work reliable in 
 place.
Better to move it to std.ascii instead of removing it. Bye, bearophile
Apr 19 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-19 12:21, bearophile wrote:

 Better to move it in std.ascii instead of removing it.
The current implementation works with more characters than ASCII. -- /Jacob Carlborg
Apr 19 2014
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Saturday, 19 April 2014 at 10:31:28 UTC, Jacob Carlborg wrote:
 On 2014-04-19 12:21, bearophile wrote:

 Better to move it in std.ascii instead of removing it.
The current implementation works with more characters than ASCII.
Or replace characters whose upper case representation is bigger than the lower case representation with U+FFFD or similar.
Apr 19 2014
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 11:21:05 UTC, Tobias Pankrath wrote:
 On Saturday, 19 April 2014 at 10:31:28 UTC, Jacob Carlborg 
 wrote:
 On 2014-04-19 12:21, bearophile wrote:

 Better to move it in std.ascii instead of removing it.
The current implementation works with more characters than ASCII.
Or replace characters whose upper case representation is bigger than the lower case representation with U+FFFD or similar.
Replacing a character with FFFD is only acceptable if it was invalid to begin with. Doing it because it's convenient is not acceptable.
Apr 19 2014
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 10:07:46 UTC, Jacob Carlborg wrote:
 On 2014-04-18 18:17, Brad Anderson wrote:

 Problems like how toUpperInPlace would still allocate (with
 gusto) could much more easily be recognized and fixed with 
  nogc available.
toUpperInPlace should be removed.
Nonsense. It still works 99% of the time (I think only a subset of 100 letters in all of Unicode are affected, and even then, another 100 of them *shrink* on toUpper). It is really useful. It avoids *needless* allocations. Removing it would be more harmful than useful.

I'm pretty confident that most of the time it is used, people don't care *that* much that *absolutely* no allocation takes place. They just don't want to be wasteful.
 It cannot work reliable in place.
Rename "toUpperMaybeInPlace". Then, for those that absolutely *can't* allocate provide a better interface. For example: `void toUpper(S, O)(S s, ref O o);` Burden on the caller to make it "inplace" from that (or to allocate accordingly when inplace is not possible).
Apr 19 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-04-19 15:40, monarch_dodra wrote:

 Nonsense. It still works 99% of the time (I think only a subset of 100
 letters in all of Unicode are affect, and even then, another 100 of them
 *shrink* on toUpper). It is really useful. It avoids *needles*
 allocations. Removing it would be more harmful than useful.
I'm implicitly referring to toLowerInPlace as well.
 I'm pretty confident that most of the time it is used, people don't care
 *that* much that *absolutely* no allocation takes place. They just don't
 want to be wasteful.
It still has a confusing name.
 Rename "toUpperMaybeInPlace".
Actually, the functionality is useful, it's just the name that is confusing.
 Then, for those that absolutely *can't* allocate provide a better
 interface. For example:
 `void toUpper(S, O)(S s, ref O o);`

 Burden on the caller to make it "inplace" from that (or to allocate
 accordingly when inplace is not possible).
-- /Jacob Carlborg
Apr 20 2014
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via 
Digitalmars-d wrote:
 ARC offers a solution that is usable by all parties.
Is this a proven statement? If that paper is right then ARC with cycle management is in fact equivalent to Garbage Collection. Do we have evidence to the contrary?

My very vague reasoning on the topic:

Sophisticated GCs use various methods to avoid scanning the whole heap, and by doing so they in fact implement something equivalent to ARC, even if it doesn't appear that way on the surface. In the other direction, ARC ends up implementing a GC to deal with cycles. I.e.

Easy work (normal data): A clever GC effectively implements ARC. ARC does what it says on the tin.

Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*. ARC effectively implements a GC.

*in the normal sense, not GC-jargon.

Ergo they aren't really any different?
Apr 17 2014
next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 21:57, John Colvin via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:

 ARC offers a solution that is usable by all parties.
Is this a proven statement? If that paper is right then ARC with cycle management is in fact equivalent to Garbage Collection. Do we have evidence to the contrary?
People who care would go to the effort of manually marking weak references. If you make a commitment to that in your software, you can eliminate the backing GC. Turn it off, or don't even link it. The backing GC is so that 'everyone else' would be unaffected by the shift. They'd likely see an advantage too, in that the GC would have a lot less work to do, since the ARC would clean up most of the memory (fall generally in the realm you refer to below). My very vague reasoning on the topic:
 Sophisticated GCs use various methods to avoid scanning the whole heap,
 and by doing so they in fact implement something equivalent to ARC, even if
 it doesn't appear that way on the surface. In the other direction, ARC ends
 up implementing a GC to deal with cycles. I.e.

 Easy work (normal data): A clever GC effectively implements ARC. ARC does
 what it says on the tin.

 Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*.
 ARC effectively implements a GC.

 *in the normal sense, not GC-jargon.

 Ergo they aren't really any different?
Nobody has proposed a 'sophisticated' GC for D. As far as I can tell, it's considered impossible by the experts. It also doesn't address the fundamental issue with the nature of a GC, which is that it expects plenty of free memory. You can't use a GC in a low-memory environment, no matter how it's designed. It allocates until it can't, then spends a large amount of time re-capturing unreferenced memory. As free memory decreases, this becomes more and more frequent.
Apr 17 2014
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/17/2014 02:34 PM, Manu via Digitalmars-d wrote:
 On 17 April 2014 21:57, John Colvin via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d
     wrote:

         ARC offers a solution that is usable by all parties.
 ...
 You can't use a GC in a
 low-memory environment, no matter how it's designed. It allocates until
 it can't, then spends a large amount of time re-capturing unreferenced
 memory. As free memory decreases, this becomes more and more frequent.
What John was trying to get at is that the two quoted statements above are in contradiction with each other. A GC is a subsystem that automatically frees dead memory. (Dead as in it will not be accessed again, which is a weaker notion than it being unreferenced.) Maybe the distinction you want to make is between ARC and tracing garbage collectors.
Apr 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 5:34 AM, Manu via Digitalmars-d wrote:
 People who care would go to the effort of manually marking weak references.
And that's not compatible with having a guarantee of memory safety.
Apr 17 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-17 17:29:02 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 4/17/2014 5:34 AM, Manu via Digitalmars-d wrote:
 People who care would go to the effort of manually marking weak references.
And that's not compatible with having a guarantee of memory safety.
Auto-nulling weak references are perfectly memory-safe. In Objective-C you use the __weak pointer modifier for that. If you don't want it to be auto-nulling, use __unsafe_unretained instead to get a raw pointer. In general, seeing __unsafe_unretained in the code is a red flag however. You'd better know what you're doing. If you could transpose the concept to D, __weak would be allowed in safe functions while __unsafe_unretained would not. And thus memory-safety is preserved. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 6:58 PM, Michel Fortin wrote:
 Auto-nulling weak references are perfectly memory-safe. In Objective-C you use
 the __weak pointer modifier for that. If you don't want it to be auto-nulling,
 use __unsafe_unretained instead to get a raw pointer. In general, seeing
 __unsafe_unretained in the code is a red flag however. You'd better know what
 you're doing.

 If you could transpose the concept to D, __weak would be allowed in  safe
 functions while __unsafe_unretained would not. And thus memory-safety is
preserved.
I recall from our email discussion about implementing ARC in D that we couldn't even avoid an inc/dec for the 'this' reference when calling member functions. So I don't see how inc/dec can be elided in sufficient numbers to make ARC performant and unbloated.

Of course, you can always *manually* elide these things, but then if you make a mistake, you've got a leak and memory corruption.
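To make the cost concrete, a sketch of the bracketing a naive ARC compiler would have to emit around a member call it cannot prove safe; the class and the retain/release hooks are illustrative, not a real D API:

// Toy ref-counted class standing in for compiler-managed objects.
class Obj
{
    size_t refCount = 1;
    final void retain()  { ++refCount; }      // atomic (LOCK INC) if shared
    final void release() { if (--refCount == 0) { /* destroy + free */ } }
    void method() { /* ... */ }
}

void callSite(Obj obj)
{
    // Programmer wrote:  obj.method();
    // Naive ARC lowering, absent escape/ownership analysis:
    auto tmp = obj;
    tmp.retain();               // pin 'this' for the duration of the call
    scope(exit) tmp.release();
    tmp.method();
}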
Apr 17 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 18 Apr 2014 01:53:00 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/17/2014 6:58 PM, Michel Fortin wrote:
 Auto-nulling weak references are perfectly memory-safe. In Objective-C  
 you use
 the __weak pointer modifier for that. If you don't want it to be  
 auto-nulling,
 use __unsafe_unretained instead to get a raw pointer. In general, seeing
 __unsafe_unretained in the code is a red flag however. You'd better  
 know what
 you're doing.

 If you could transpose the concept to D, __weak would be allowed in  
  safe
 functions while __unsafe_unretained would not. And thus memory-safety  
 is preserved.
I recall our email discussion about implementing ARC in D that we couldn't even avoid an inc/dec for the 'this' when calling member functions. So I don't see how inc/dec can be elided in sufficient numbers to make ARC performant and unbloated.
The important thing to recognize is that it's the *caller* that increments/decrements. This means you can elide calls to an object where you already have a guarantee of its reference count being high enough. I looked up the example you referred to, it was this:
 class R : RefCounted
 {
    int _x;
    int readx() { return _x; }
 }
 int main()
 {
    R r = new R;
    return r.readx();
 }
According to 12. there is no refcounting going on when calling or  
 executing readx. Ok, now what happens here:
class R : RefCounted
 {
    int _x;
    int readx(C c)
    {
        c.r = null; // "standard" rc deletes r here
        return _x;  // reads garbage
    }
 }
 class C
 {
    R r;
 }
 int main()
 {
    C c = new C;
    c.r = new R;
    return c.r.readx(c);
 }
This reads garbage or crashes if there is no reference counting going on  
 when calling readx.
So essentially, main needs to increment c.r's ref count. But not c's, because it already knows that it owns one of c's reference counts. R.readx does NOT need to increment its own reference count. I think the distinction is important.

Also, consider that if R is a final class, it can inline readx and can possibly defer incrementing the ref count for later, or cache _x before setting c.r to null (probably the better option).

Opportunities for elision are not as hopeless as you make it sound. The compiler has a lot of information. The rules should be:

1. when compiling a function, you can assume parameters have at least one reference count increment that will not go away.

2. when calling a function, ensure that each argument has at least one reference count increment that will not go away for the duration of the call.

Given D's type system of knowing when variables are shared (and if we implement thread-local destruction of unshared data), we have a lot more power even than Objective-C to make better decisions on ref counting.
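Applied to the example above, a sketch of what rules 1 and 2 ask the caller to emit; retain/release are illustrative hooks again, and R/C repeat the quoted classes to keep the sketch self-contained:

// Illustrative hooks; a real ARC compiler would manage the count itself.
void retain(Object o)  { /* ++refCount, atomic if shared */ }
void release(Object o) { /* --refCount, destroy on zero */ }

class R { int _x; int readx(C c) { c.r = null; return _x; } }
class C { R r; }

int main()
{
    C c = new C;              // the caller owns a count on c for main's scope
    c.r = new R;

    R tmp = c.r;
    retain(tmp);              // rule 2: pin c.r, since readx may null it via c
    scope(exit) release(tmp);
    return tmp.readx(c);      // no retain for c itself: the count the caller
                              // already owns satisfies rule 1 inside readx
}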
 Of course, you can always *manually* elide these things, but then if you  
 make a mistake, then you've got a leak and memory corruption.
Manual eliding should be reserved for extreme optimization cases. It's similar to cast. Apple considers it dangerous enough to statically disallow it for ARC code. -Steve
Apr 18 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 12:55:59 UTC, Steven Schveighoffer 
wrote:
 The important thing to recognize is that it's the *caller* that 
 increments/decrements. This means you can elide calls to an 
 object where you already have a guarantee of its reference 
 count being high enough.
That won't help you if you iterate over an array, so you need a mutex on the array in order to prevent inc/dec for every single object you inspect. inc/dec with a lock prefix could easily cost you 150-200 cycles. Ola.
Apr 18 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 18 Apr 2014 10:00:21 -0400, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Friday, 18 April 2014 at 12:55:59 UTC, Steven Schveighoffer wrote:
 The important thing to recognize is that it's the *caller* that
 increments/decrements. This means you can elide calls to an object
 where you already have a guarantee of its reference count being high
 enough.
 That won't help you if you iterate over an array, so you need a mutex on
 the array in order to prevent inc/dec for every single object you
 inspect.
If the array is shared, and the elements are references, yes. It's also possible that each object uses a reference to the array, in which case the array could be altered inside the method, requiring an inc/dec even for unshared arrays.

 inc/dec with a lock prefix could easily cost you 150-200 cycles.
And an inc/dec may not necessarily need a lock if the array element is not shared, even if you inc/dec the ref count.

D offers opportunities to go beyond traditional ref count eliding.

But even still, 150-200 extra cycles here and there is not as bad as a 300ms pause to collect garbage for some apps.

I think nobody is arguing that Ref counting is a magic bullet to memory management. It fits some applications better than GC, that's all.

-Steve
Apr 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 18 April 2014 at 14:15:00 UTC, Steven Schveighoffer 
wrote:
 And an inc/dec may not necessarily need a lock if the array 
 element is not shared, even if you inc/dec the ref count.

 D offers opportunities to go beyond traditional ref count 
 eliding.
In most situations where you need speed you do need to share data so that you can keep 8 threads busy without trashing the caches and getting the memory bus as a bottle neck. Then you somehow have to tell the compiler what the mutex covers if ARC is going to be transparent… E.g. "this mutex covers all strings reachable from pointer P". So you need a meta level language…
 But even still, 150-200 extra cycles here and there is not as 
 bad as a 300ms pause to collect garbage for some apps.
I don't know. I think one unified management strategy will not work in most real time apps. I think C++ got that right. I also think you need both meta level reasoning (program verification constructs) and whole program analysis to get a performant solution with automatic management.
 I think nobody is arguing that Ref counting is a magic bullet 
 to memory management. It fits some applications better than GC, 
 that's all.
As an addition to other management techniques, yes.
Apr 18 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 10:53 PM, Walter Bright wrote:
 I recall our email discussion about implementing ARC in D
Rainer found it: http://forum.dlang.org/thread/l34lei$255v$1 digitalmars.com
Apr 18 2014
prev sibling parent reply Orvid King via Digitalmars-d <digitalmars-d puremagic.com> writes:
I'm just going to put my 2 cents into this discussion: it's my
personal opinion that while _allocations_ should be removed from
Phobos wherever possible, replacing GC usage with manual calls to
malloc/free has no place in the standard library, as it's quite simply
a mess that is not needed; one should be figuring out how to
not allocate at all rather than trying to do manual management.

It is possible to implement a much better GC than what D currently
has, and I intend to do exactly that when I have the time needed (in
roughly a month). Firstly by making it heap precise, maybe even adding
a stack precise mode (unlikely). Secondly by making it optionally use
an allocation strategy similar to tcmalloc, which is able to avoid
using a global lock for most allocations, as an interim measure until
DMD gets full escape analysis, which, due to the nature of D, would be
required before I could implement an effective compacting GC.
Depending on if I can grasp the underlying theory behind it, I *may*
also create an async collection mode, but it will be interesting to
see how I am able to tie in the extensible scanning system (Andrei's
allocators) into it. Lastly, I'll add support for stack allocated
classes, however that will likely have to be disabled until DMD gets
full escape analysis. As a final note, this will be the 3rd GC I've
written, although it will be the most complex by far. The first was
just heap precise, the second a generational compacting version of it.

On 4/17/14, Manu via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 On 17 April 2014 21:57, John Colvin via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:

 On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:

 ARC offers a solution that is usable by all parties.
Is this a proven statement? If that paper is right then ARC with cycle management is in fact equivalent to Garbage Collection. Do we have evidence to the contrary?
People who care would go to the effort of manually marking weak references. If you make a commitment to that in your software, you can eliminate the backing GC. Turn it off, or don't even link it. The backing GC is so that 'everyone else' would be unaffected by the shift. They'd likely see an advantage too, in that the GC would have a lot less work to do, since the ARC would clean up most of the memory (fall generally in the realm you refer to below). My very vague reasoning on the topic:
 Sophisticated GCs use various methods to avoid scanning the whole heap,
 and by doing so they in fact implement something equivalent to ARC, even
 if
 it doesn't appear that way on the surface. In the other direction, ARC
 ends
 up implementing a GC to deal with cycles. I.e.

 Easy work (normal data): A clever GC effectively implements ARC. ARC does
 what it says on the tin.

 Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*.
 ARC effectively implements a GC.

 *in the normal sense, not GC-jargon.

 Ergo they aren't really any different?
Nobody has proposed a 'sophisticated' GC for D. As far as I can tell, it's considered impossible by the experts. It also doesn't address the fundamental issue with the nature of a GC, which is that it expects plenty of free memory. You can't use a GC in a low-memory environment, no matter how it's designed. It allocates until it can't, then spends a large amount of time re-capturing unreferenced memory. As free memory decreases, this becomes more and more frequent.
Apr 17 2014
next sibling parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Thu, 17 Apr 2014 14:08:29 +0100, Orvid King via Digitalmars-d  
<digitalmars-d puremagic.com> wrote:

 I'm just going to put my 2-cents into this discussion, it's my
 personal opinion that while _allocations_ should be removed from
 phobos wherever possible, replacing GC usage with manual calls to
 malloc/free has no place in the standard library, as it's quite simply
 a mess that is really not needed, and quite simply, one should be
 figuring out how to simply not allocate at all rather than trying do
 do manual management.
The standard library is a better place to put manual memory management than user space because it should be done by experts, peer reviewed and then would benefit everyone at no extra cost. There are likely a number of smaller GC allocations which could be replaced by calls to alloca, simultaneously improving performance and avoiding GC interaction. These calls could then be marked nogc and used in the realtime sections of applications without fear of collections stopping the world. Neither ARC nor a super amazing GC would be able to improve upon the efficiency of this sort of change. Seems like win-win-win to me.
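As a sketch of the kind of substitution meant here, assuming the buffer is small, bounded, and never escapes the function (alloca memory dies on return; the function name is made up):

import core.stdc.stdlib : alloca;

// Stack-allocate a scratch copy instead of GC-allocating with .dup.
// Only sane for small, bounded sizes, and the slice must not escape.
void process(const(char)[] input)
{
    assert(input.length <= 4096);
    auto buf = (cast(char*) alloca(input.length))[0 .. input.length];
    buf[] = input[];
    // ... transform buf in place; the storage vanishes on return ...
}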
 It is possible to implement a much better GC than what D currently
 has, and I intend to do exactly that when I have the time needed (in
 roughly a month).
Excellent :) R
Apr 17 2014
parent Orvid King via Digitalmars-d <digitalmars-d puremagic.com> writes:
I should probably have said heap allocation rather than just
allocation, because the alloca calls are the ones that would have the
real benefit; those realtime applications are the reason I hope to be
able to implement an async collection mode. If I were able to
implement even a moderately compacting GC, I would be able to use a
bump-the-pointer allocation strategy, which would be significantly
faster than manual calls to malloc/free.

On 4/17/14, Regan Heath via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 On Thu, 17 Apr 2014 14:08:29 +0100, Orvid King via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:

 I'm just going to put my 2-cents into this discussion, it's my
 personal opinion that while _allocations_ should be removed from
 phobos wherever possible, replacing GC usage with manual calls to
 malloc/free has no place in the standard library, as it's quite simply
 a mess that is really not needed, and quite simply, one should be
 figuring out how to simply not allocate at all rather than trying do
 do manual management.
The standard library is a better place to put manual memory management than user space because it should be done by experts, peer reviewed and then would benefit everyone at no extra cost. There are likely a number of smaller GC allocations which could be replaced by calls to alloca, simultaneously improving performance and avoiding GC interaction. These calls could then be marked nogc and used in the realtime sections of applications without fear of collections stopping the world. Neither ARC nor a super amazing GC would be able to improve upon the efficiency of this sort of change. Seems like win-win-win to me.
 It is possible to implement a much better GC than what D currently
 has, and I intend to do exactly that when I have the time needed (in
 roughly a month).
Excellent :) R
Apr 17 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-17 15:08, Orvid King via Digitalmars-d wrote:

 Lastly, I'll add support for stack allocated
 classes, however that will likely have to be disabled until DMD gets
 full escape analysis. As a final note, this will be the 3rd GC I've
 written, although it will be the most complex by far. The first was
 just heap precise, the second a generational compacting version of it.
It's already possible to stack allocate classes. Either using the now deprecated (removed?) "scope" or a library solution. -- /Jacob Carlborg
Apr 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/18/2014 12:44 PM, Jacob Carlborg wrote:
 It's already possible to stack allocate classes. Either using the now
deprecated
 (removed?) "scope" or a library solution.
dmd could do a better job of escape analysis, and do this automatically.
Apr 18 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 dmd could do a better job of escape analysis, and do this 
 automatically.
Timon has kindly fixed the wrong part I wrote in the DIP60. But if we introduce some basic escape analysis required by all conformant D compilers, then the nogc attribute can be applied to some functions that define array literals or more (so my comment becomes true in some specified cases). Thankfully this is something that can be done later, as such improvements just relax the strictness of nogc. Bye, bearophile
Apr 18 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-18 22:25, Walter Bright wrote:

 dmd could do a better job of escape analysis, and do this automatically.
That would be really nice. Is this low hanging fruit or does it require a lot of work? -- /Jacob Carlborg
Apr 19 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/19/2014 3:10 AM, Jacob Carlborg wrote:
 On 2014-04-18 22:25, Walter Bright wrote:

 dmd could do a better job of escape analysis, and do this automatically.
 That would be really nice. Is this low hanging fruit or does it require a lot of work?
It requires a decent working knowledge of data flow analysis techniques, and some concentrated effort.
Apr 19 2014
prev sibling next sibling parent "Mike" <none none.com> writes:
On Wednesday, 16 April 2014 at 22:42:23 UTC, Adam Wilson wrote:

 Long story short, Tracing vs. Ref-counting are algorithmic 
 duals and therefore do not significantly differ. My read of the 
 article is that all the different GC styles are doing is 
 pushing the cost somewhere else.
All memory management schemes cost, even manual memory management. IMO that's not the point. The point is that each memory management scheme distributes the cost differently. One distribution may be more suitable for a certain problem domain than another.
 ARC may in fact be the most advantageous for a specific use 
 case, but that in no way means that all use cases will see a 
 performance improvement, and in all likelihood, may see a 
 decrease in performance.
The same can be said about stop-the-world mark-and-sweep. It is also specialized to a specific problem domain. As an example, it doesn't scale well to the real-time/embedded domain.
 That makes ARC a specialization for a certain type of 
 programming, which would then remove D the "Systems" category 
 and place it in a "Specialist" category. One could argue that 
 due to the currently non-optional status of the GC that D is 
 currently a "Specialist" language, and I would be hard pressed 
 to argue against that.
D is currently in the "Specialist" category. It is already specialized/biased to PC/Server applications. C/C++ are the only languages I know of that scale reasonably well to all systems. I think D has the potential to change that, but it will require, first, recognition that D is not yet a "Systems" language like C/C++ are, and second, the will to change it.
  nogc removes the shackles of the GC from the language and thus 
 brings it closer to the definition of "Systems".  nogc allows 
 programmers to revert to C-style resource management without 
 enforcing a specialized RM system, be it GC or ARC.  nogc might 
 not make you run through the fields singing D's praises, but it 
 is entirely consistent with the goals and direction of D.
nogc doesn't allow users to revert to C-style resource management, because they don't have control over implicit allocations in druntime and elsewhere. It just disables them. Users still have to build alternatives. There's no escaping the cost of memory management, but one can choose how to distribute the cost. Mike
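To make that concrete, a minimal sketch (function names invented): the first version leans on an implicit GC allocation that nogc would reject, the second is the hand-built alternative the user must supply:

// implicit GC allocation; would be rejected under nogc
int[] joinWithGC(int[] a, int[] b)
{
    return a ~ b;
}

// the caller provides the memory instead
void joinWithoutGC(int[] a, int[] b, int[] dest)
{
    assert(dest.length >= a.length + b.length);
    dest[0 .. a.length] = a[];
    dest[a.length .. a.length + b.length] = b[];
}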
Apr 16 2014
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 17 April 2014 08:42, Adam Wilson via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Wed, 16 Apr 2014 04:50:51 -0700, Manu via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:

  I am convinced that ARC would be acceptable, and I've never heard anyone
 suggest any proposal/fantasy/imaginary GC implementation that would be
 acceptable...
 In complete absence of a path towards an acceptable GC implementation, I'd
 prefer to see people that know what they're talking about explore how
 refcounting could be used instead.
 GC backed ARC sounds like it would acceptably automate the circular
 reference catching that people fuss about, while still providing a
 workable
 solution for embedded/realtime users; disable(/don't link) the backing GC,
 make sure you mark weak references properly.
I'm just going to leave this here. I mentioned it previously in a debate over ARC vs. GC but I couldn't find the link at the time. http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf The paper is pretty math heavy. Long story short, Tracing vs. Ref-counting are algorithmic duals and therefore do not significantly differ. My read of the article is that all the different GC styles are doing is pushing the cost somewhere else.
Of course, I generally agree. Though realtime/embedded values smooth predictability/reliability more than burst+stutter operation.

That said, I do think that GC incurs greater cost than ARC in aggregate. The scanning process, and the cache implications of scanning the heap, are cataclysmic. I don't imagine that some trivial inc/dec's would sum to the same amount of work, even though they're happening more frequently.

GC has a nasty property where its workload is inversely proportional to available memory. As free memory decreases, the frequency of scans increases. Low-memory is an important class of native language users that shouldn't be ignored (embedded, games consoles, etc). Further, the cost of a GC sweep increases with the size of the heap. So, as free memory decreases, you expect longer scans, more often... Yeah, win!

There are some other disturbing considerations; over time, as device memory grows, GC costs will increase proportionally. This is silly, and I'm amazed a bigger deal isn't made about the future-proof-ness of GC. In 5 years when we all have 512gb of ram in our devices, how much time is the GC going to spend scanning that much memory? GC might work okay in the modern sweet-spot of 100's-of-mb to low-gb of total memory, but I think as memory grows with time, GC will become more problematic.

ARC on the other hand has a uniform, predictable, constant cost, that never changes with respect to any of these quantities. ARC will always perform at the same speed, even 10 years from now, even on my Nintendo Wii, even on my PIC microcontroller. As an embedded/realtime programmer, I can work with this.

ARC may in fact be the most advantageous for a specific use case, but that
 in no way means that all use cases will see a performance improvement, and
 in all likelihood, may see a decrease in performance.
If you had to choose one as a default foundation, would you choose one that eliminates a whole class of language users, or one that is an acceptable compromise for all parties? I'd like to see an argument for "I *need* GC. GC-backed-ARC is unacceptable for my use case!". I'll put money on that requirement never emerging, and I have no idea who that user would be. Also, if you do see a decrease in performance, I suspect that it's only under certain conditions. As said above, if your device begins to run low on memory, or your users are working on unusually large projects/workloads, all of a sudden your software starts performing radically differently than you observe during development. Naturally you don't typically profile that environment, but it's not unlikely to occur in the wild. That makes ARC a specialization for a certain type of programming, which
 would then remove D the "Systems" category and place it in a "Specialist"
 category.
What it does, is NOT eliminate a whole class of users. Are you going to tell me that you have a hard dependency on the GC, and something else that does exactly the same thing is incompatible with your requirements? There's nothing intrinsically "systems" about GC over ARC, whatever that means. One could argue that due to the currently non-optional status of the GC
 that D is currently a "Specialist" language, and I would be hard pressed to
 argue against that.
So what's wrong with a choice that does exactly the same thing, but is less exclusive? nogc removes the shackles of the GC from the language and thus brings it
 closer to the definition of "Systems".  nogc allows programmers to revert
 to C-style resource management without enforcing a specialized RM system,
 be it GC or ARC.  nogc might not make you run through the fields singing
 D's praises, but it is entirely consistent with the goals and direction of
 D.
I see some value in nogc. I'm not arguing against it. My point was that I feel it is missing the point, and I fear the implications... does this represent a dismissal of the root problem? See my points about fracturing frameworks and libraries into isolated worlds. This is a critical problem in C/C++ that I would do literally anything to see not repeated in D.
Apr 16 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/16/14, 2:03 AM, JN wrote:
 On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:
 I don't believe users hesitant to use D will suddenly come to D now
 that there is a  nogc attribute.  I also don't believe they want to
 avoid the GC, even if they say they do.  I believe what they really
 want is to have an alternative to the GC.
I'd have to agree. I doubt nogc will change anything, people will just start complaining about limitations of nogc (no array concat, having to use own libraries which may be incompatible with phobos). The complaints mostly come from the fact that D wants to offer a choice; in other languages people just live with what they have, you don't see C# programmers complaining much about having to use GC, or C++ programmers all over the world asking for GC. Well, most of the new games (Unity3D) are done in C# and people live with it even though game development is one of the biggest C++ loving and GC hating crowd there is. Another issue is the quality of the D garbage collector, but adding alternative memory management ways doesn't help, it fragments the codebase.
My perception is the opposite. Time will tell. -- Andrei
Apr 16 2014
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 2:03 AM, JN wrote:
 I'd have to agree. I doubt  nogc will change anything, people will just start
 complaining about limitations of  nogc (no array concat, having to use own
 libraries which may be incompatible with phobos). The complaints mostly come
 from the fact that D wants to offer a choice; in other languages people just
 live with what they have, you don't see C# programmers complaining much about
 having to use GC, or C++ programmers all over the world asking for GC. Well,
 most of the new games (Unity3D) are done in C# and people live with it even
 though game development is one of the biggest C++ loving and GC hating crowd
 there is.
We have to try. Especially since nogc is a low risk thing - it doesn't break anything, and is a fairly simple addition to the compiler.
 Another issue is the quality of D garbage collector, but adding alternative
 memory management ways doesn't help, fragmenting the codebase.
No improvement to the GC is acceptable to people who want to manually manage memory. That much is quite clear.
Apr 16 2014
prev sibling parent reply "froglegs" <nono yahoo.com> writes:

 and people live with it even though game development is one of 
 the biggest C++ loving and GC hating crowd there is.
C# in those games is used for the scripting layer; that type of code is generally not performance sensitive.
Apr 16 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 16.04.2014 22:49, schrieb froglegs:

 people live with it even though game development is one of the biggest
 C++ loving and GC hating crowd there is.
 C# in those games is used for the scripting layer; that type of code is generally not performance sensitive.
I am really looking forward to .NET Native becoming widespread. -- Paulo
Apr 16 2014
parent reply "froglegs" <nono yahoo.com> writes:
 I am really looking forward to .NET Native becoming widespread.


 different.
I don't think it will make a major difference. Taking a GC based language and giving it a native compiler doesn't automatically make it performance competitive with C++ (see Haskell, and D without dumping the GC, on anything besides micro benchmarks). C# also has no real support for SSE/AVX intrinsics (see Herb Sutter's recent talk on arrays). Many programs don't use these, but if you have a few hot spots involving number crunching, they can make a major difference. My current project spends about 80% of its CPU time in SSE amenable locations; I wrote some template magic mixed with SSE intrinsics, and now those spots run 4x faster. You might be thinking auto vectorization can compete, but I've yet to see the one in VS2013 accomplish much of anything. Also I doubt very much that an auto vectorizer can squash branches, which is very possible with intrinsics. True branches and vectorized code don't mix well...
Apr 16 2014
parent reply "paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 16 April 2014 at 22:11:23 UTC, froglegs wrote:
 I am really looking forward to .NET Native becoming widespread.


 different.
 I don't think it will make a major difference. Taking a GC based language and giving it a native compiler doesn't automatically make it performance competitive with C++ (see Haskell, and D without dumping the GC, on anything besides micro benchmarks). C# also has no real support for SSE/AVX intrinsics (see Herb Sutter's recent talk on arrays).
C# can drop to pointer-level code in unsafe blocks. Don't forget that Herb Sutter also has a C++ agenda to sell.

That is not correct.

1 - Nowhere in the ANSI/ISO C++ are SSE/AVX intrinsics defined, those are compiler extensions. So equal foot with the C# standard;

2 - .NET Native makes use of Visual C++'s backend, with all the automatic vectorization and other code generation optimizations Visual C++ developers enjoy;

3 - .NET Native and RyuJIT have official support for SIMD instructions, GPGPU support is also planned

-- Paulo
Apr 18 2014
parent reply "froglegs" <nono yahoo.com> writes:

 C# can drop to pointer-level code in unsafe blocks.
And now you aren't using the language, but a (very) poor subset of a language that doesn't even support templates.

That is not correct. 1 - Nowhere in the ANSI/ISO C++ are SSE/AVX intrinsics defined, those are compiler extensions. So equal foot with the C# standard;
Duh, but every C++ compiler exposes this, so it is de facto standard. C++ has plenty of non-standard standards, such as #pragma once.
 3 - .NET Native and RyuJIT have official support for SIMD 
 instructions, GPGPU support is also planned
I see on MS website an article about having a vector data type. While interesting, that isn't the same as exposing the actual instructions, which will limit potential gains. The article: http://blogs.msdn.com/b/dotnet/archive/2014/04/07/the-jit-finally-proposed-jit-and-simd-are-getting-married.aspx Additionally .NET Native will be MS only--
Apr 18 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 19 April 2014 at 05:08:06 UTC, froglegs wrote:

 C# can drop to pointer-level code in unsafe blocks.
And now you aren't using the language, but a (very) poor subset of a language that doesn't even support templates.
Doesn't change the fact it is possible, but hey, let's sell the C++ agenda.

That is not correct. 1 - Nowhere in the ANSI/ISO C++ are SSE/AVX intrinsics defined, those are compiler extensions. So equal foot with the C# standard;
Duh, but every C++ compiler exposes this, so it is defacto standard. C++ has plenty of non-standard standards, such as #pragma once.
 3 - .NET Native and RyuJIT have official support for SIMD 
 instructions, GPGPU support is also planned
I see on MS website an article about having a vector data type. While interesting, that isn't the same as exposing the actual instructions, which will limit potential gains. The article: http://blogs.msdn.com/b/dotnet/archive/2014/04/07/the-jit-finally-proposed-jit-and-simd-are-getting-married.aspx Additionally .NET Native will be MS only--
Except it already exists today with Mono. Microsoft is just making the official .NET do something Xamarin has already been doing for years, both in static native compilation and SIMD support.

Any language can expose SIMD instructions, there is nothing special about them in C++, because like in every other language they are compiler extensions, regardless of being a de facto standard or not.

C# compiled straight to native code will certainly help young developers understand we don't need VMs for memory safe systems programming languages. Oberon compilers in the mid-90's were producing code that was as good as C compilers back then. In those days I still wrote a couple of applications 100% in Assembly.

I think many people value C and C++ compilers too much, because they forget how long they have been around, and also never used alternative systems programming languages back when C and C++ compilers used to suck.

-- Paulo
Apr 19 2014
prev sibling next sibling parent Timothee Cour via Digitalmars-d <digitalmars-d puremagic.com> writes:
I suggest adding a way to override a  nogc function for debugging purposes:

suppose we need to *temporarily* debug / instrument a function marked as
 nogc (myfun) by adding some piece of code that may allocate (quick and
dirty debugging). Unfortunately, in DIP60 we can't just temporarily remove
the  nogc attribute as myfun may be called by many other  nogc functions
and changing all those attributes would be too complex.

With the proposed override, we can just recompile with a flag; how to do
that is implementation detail but here's an option:

dmd -allow_gc_in_nogc fun.d

nogc myfun(){
    //temporary for debugging/instrumentation
    gc{
        //code that can allocate
    }
}



On Tue, Apr 15, 2014 at 10:01 AM, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Apr 15 2014
prev sibling next sibling parent "justme" <justme example.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60
Walter, the DIP has a funny creation date.
Apr 16 2014
prev sibling next sibling parent reply "qznc" <qznc web.de> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Good start. However, what is still an open issue is that nogc can be stopped by allocations in another thread. We need threads which are not affected by stop-the-world. As far as I know, creating threads via pthreads C API directly achieves that, but integration with nogc could provide more type safety. Stuff for another DIP?
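For reference, a minimal sketch of the pthreads route (POSIX only; the worker function is illustrative). Since druntime never learns about the thread, a collection will not suspend it - which also means it must never touch GC-managed memory:

import core.sys.posix.pthread;

extern(C) void* worker(void* arg)
{
    // real-time work here; must not touch GC-managed memory,
    // because the GC neither scans nor stops this thread
    return null;
}

void spawnUnregistered()
{
    pthread_t tid;
    pthread_create(&tid, null, &worker, null);
    pthread_join(tid, null);
}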
Apr 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 8:01 AM, qznc wrote:
 However, what is still an open issue is that  nogc can be stopped by
allocations
 in another thread. We need threads which are not affected by stop-the-world. As
 far as I know, creating threads via pthreads C API directly achieves that, but
 integration with  nogc could provide more type safety. Stuff for another DIP?
That's a completely separate issue.
Apr 16 2014
parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 16 Apr 2014 18:38:23 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 4/16/2014 8:01 AM, qznc wrote:
 However, what is still an open issue is that  nogc can be stopped by  
 allocations
 in another thread. We need threads which are not affected by  
 stop-the-world. As
 far as I know, creating threads via pthreads C API directly achieves  
 that, but
 integration with  nogc could provide more type safety. Stuff for  
 another DIP?
That's a completely separate issue.
Yep. I was thinking an attribute like rt (realtime) would be super cool (but, perhaps impossible). It would be a super-set of things like nogc, and imply those things. Adding nogc does not prevent such a thing being done in the future. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 17 2014
prev sibling next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Some initial thoughts:

* Is it perhaps too early to introduce this? We don't have allocators yet, so it can be quite hard to avoid the GC in some situations.

* Many Phobos functions use 'text' and 'format' in asserts. What should be done about those?

* Does nogc => nothrow? If I'm not mistaken, a throw must throw a GC-allocated Throwable.

* If the above is true, does that mean exceptions cannot be used at all in nogc code?

* I worry about the number of attributes being added. Where do we draw the line? Are we going to add every attribute that someone finds a use for? logicalconst nonrecursive nonreentrant guaranteedtermination neverreturns
Apr 16 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Peter Alexander:

 * Does nogc => nothrow? If I'm not mistaken, a throw must 
 throw a GC-allocated Throwable.

 * If the above is true, does that mean exceptions cannot be 
 used at all in  nogc code?
This should work:

void foo() nogc nothrow {
    static const err = new Error("error");
    throw err;
}

Bye, bearophile
Apr 16 2014
parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Wednesday, 16 April 2014 at 19:53:01 UTC, bearophile wrote:
 Peter Alexander:

 * Does nogc => nothrow? If I'm not mistaken, a throw must 
 throw a GC-allocated Throwable.

 * If the above is true, does that mean exceptions cannot be 
 used at all in  nogc code?
 This should work:

 void foo() nogc nothrow {
     static const err = new Error("error");
     throw err;
 }

 Bye, bearophile
(I assume that nothrow isn't meant to be there?)

What if the exception needs information about the error? You could do something like this:

void foo() nogc
{
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
}

However, that raises a second question: since err is allocated when a new thread is created, does that mean nogc functions cannot create threads in the presence of such static initialisation?
Apr 16 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Peter Alexander:

 (I assume that nothrow isn't meant to be there?)
In D nothrow functions can throw errors.
 You could do something like this:

 void foo()  nogc
 {
     static err = new Error();
     if (badthing)
     {
         err.setError("badthing happened");
         throw err;
     }
 }
To be mutable err also needs to be __gshared. Bye, bearophile
Apr 16 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Peter Alexander:

        err.setError("badthing happened");
And that is usually written: err.msg = "badthing happened"; Bye, bearophile
Apr 16 2014
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Wednesday, 16 April 2014 at 20:29:17 UTC, bearophile wrote:
 Peter Alexander:

 (I assume that nothrow isn't meant to be there?)
In D nothrow functions can throw errors.
Of course, ignore me :-)
 You could do something like this:

 void foo()  nogc
 {
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
 }
To be mutable err also needs to be __gshared.
But then it isn't thread safe. Two threads trying to set and throw the same Error is a recipe for disaster.
Apr 16 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 16 Apr 2014 20:32:20 +0000
schrieb "Peter Alexander" <peter.alexander.au gmail.com>:

 On Wednesday, 16 April 2014 at 20:29:17 UTC, bearophile wrote:
 Peter Alexander:

 (I assume that nothrow isn't meant to be there?)
In D nothrow functions can throw errors.
Of course, ignore me :-)
 You could do something like this:

 void foo()  nogc
 {
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
 }
To be mutable err also needs to be __gshared.
But then it isn't thread safe. Two threads trying to set and throw the same Error is a recipe for disaster.
Also: As far as I remember from disassembling C++, static variables in functions are initialized on first access and guarded by a bool. The first call to foo() would execute "err = new Error();" in that case. This code should not compile under nogc. -- Marco
Apr 19 2014
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 23:44:45 UTC, Marco Leise wrote:
 Am Wed, 16 Apr 2014 20:32:20 +0000
 schrieb "Peter Alexander" <peter.alexander.au gmail.com>:

 On Wednesday, 16 April 2014 at 20:29:17 UTC, bearophile wrote:
 Peter Alexander:

 (I assume that nothrow isn't meant to be there?)
In D nothrow functions can throw errors.
Of course, ignore me :-)
 You could do something like this:

 void foo()  nogc
 {
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
 }
To be mutable err also needs to be __gshared.
But then it isn't thread safe. Two threads trying to set and throw the same Error is a recipe for disaster.
Also: As far as I remember from disassembling C++, static variables in functions are initialized on first access and guarded by a bool. The first call to foo() would execute "err = new Error();" in that case. This code should not compile under nogc.
D static initialization doesn't work the same way. Everything is initialized as the program is loaded, and everything must have a statically known value. EG: An Error is allocated *somewhere* (may or may not actually be the GC), and then the static value of the Error is a pointer to that. It's what allows us to do things like:

class A{}
struct S
{
    A a = new A();
}

This is legal. All S.a will point to the *same* A. S.init will be initialized to point to that a.

Also, just doing this is good enough:

//----
void foo() nogc
{
    static err = new Error("badthing happened");
    if (badthing)
        throw err;
}
//----

It does require the message be known beforehand, and not custom "built". But then again, where were you going to allocate the message, and if allocated, who would clean it up?

--------

That said, while the approach works, there could be issues with re-entrance, or chaining exceptions.
Apr 20 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 20 Apr 2014 08:19:45 +0000
schrieb "monarch_dodra" <monarchdodra gmail.com>:

 D static initialization doesn't work the same way. Everything is
 initialized as the program is loaded, […]
Ah ok, it's all good then :)
 Also, just doing this is good enough:

 //----
 void foo() nogc
 {
     static err = new Error("badthing happened");
     if (badthing)
         throw err;
 }
 //----

 It does require the message be known beforehand, and not custom
 "built". But then again, where were you going to allocate the
 message, and if allocated, who would clean it up?

 --------

 That said, while the approach works, there could be issues with
 re-entrance, or chaining exceptions.
Yes, we've discussed the issues with that approach in other threads. At least this allows exceptions to be used at all.

-- Marco
Apr 21 2014
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/16/2014 10:10 PM, Peter Alexander wrote:
 However, that raises a second question: since err is allocated when a
 new thread is created, does that mean  nogc functions cannot create
 threads in the presence of such static initialisation?
This does not allocate on the GC heap.
Apr 16 2014
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 16 April 2014 at 19:44:19 UTC, Peter Alexander 
wrote:
 On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Some initial thoughts: * Is it perhaps too early to introduce this? We don't have allocators yet, so it can be quite hard to avoid the GC in some situations. * Many Phobos functions use 'text' and 'format' in asserts. What should be done about those?
As a rule of thumb, format and text should *already* be avoided altogether in (non-static) asserts, as they can throw exceptions, preventing a function from being nothrow. For example, sort: https://github.com/D-Programming-Language/phobos/pull/2075/files#diff-ff74a46362b5953e8c88120e2490f839R9344 That said, the issue remains relevant for nogc. Not only for the exception itself, but also for the ".msg" field. How do we allocate it? Who cleans it up? Does the catcher have to do it? Can the catcher know he has to do it?
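A minimal sketch of that rule of thumb (the function names are invented): the first assert message drags in allocating, throwing code, the second stays attribute-clean:

import std.string : format;

void checkedWithFormat(int x)
{
    // format() may allocate and may throw, so this function can be
    // neither nothrow nor, eventually, nogc
    assert(x > 0, format("bad x: %s", x));
}

void checkedPlain(int x) nothrow
{
    // a static string costs nothing and keeps the attributes
    assert(x > 0, "bad x");
}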
 * Does nogc => nothrow? If I'm not mistaken, a throw must 
 throw a GC-allocated Throwable.
Even then, what about asserts? Well, I guess it's OK if Errors leak, since you are supposed to terminate shortly afterwards.
 * If the above is true, does that mean exceptions cannot be 
 used at all in  nogc code?

 * I worry about the number of attributes being added. Where do 
 we draw the line? Are we going to add every attribute that 
 someone finds a use for?  logicalconst  nonrecursive 
  nonreentrant  guaranteedtermination  neverreturns
I like the concept of having an "everything" attribute. It "future proofs" code (in a way, if you are also fine with it potentially breaking). Also, it is often easier (I think) to not think in terms of "what guarantees does my function provide", but rather "what guarantees does my function *not* provide"? EG:

void myFun(int* p) everything impure;

My function is safe, nothrow, nogc etc... except pure.

BUT, I think this should be the subject of another thread. Let's focus on nogc.
Apr 16 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/16/2014 12:44 PM, Peter Alexander wrote:
 * Is it perhaps too early to introduce this? We don't have allocators yet, so
it
 can be quite hard to avoid the GC in some situations.
Not that hard.
 * Many Phobos functions use 'text' and 'format' in asserts. What should be done
 about those?
Redo to use output ranges instead.
 * Does  nogc => nothrow?
No. They are orthogonal.
 If I'm not mistaken, a throw must throw a GC-allocated Throwable.
 * If the above is true, does that mean exceptions cannot be used at all in
 nogc
 code?
They can use preallocated exceptions, or be templatized and infer the attributes. It is a problem, though.
 * I worry about the number of attributes being added. Where do we draw the
line?
 Are we going to add every attribute that someone finds a use for?  logicalconst
  nonrecursive  nonreentrant  guaranteedtermination  neverreturns
That's essentially true of every language feature.
Apr 16 2014
prev sibling next sibling parent reply "Dejan Lekic" <dejan.lekic gmail.com> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
Apr 17 2014
parent reply "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
 On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
Sure it does.

module mymodule;
nogc:

void myfunc(){}

class MyClass {
    void mymethod() {}
}

Everything in the above code has nogc applied to it. Nothing special about it, you can do it for most attributes, like static, final and UDA's. Unless of course you can think of another way it could be done? Or I've missed something.
Apr 17 2014
parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:
 On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
 On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
 Sure it does.

 module mymodule;
 nogc:

 void myfunc(){}

 class MyClass {
     void mymethod() {}
 }

 Everything in the above code has nogc applied to it. Nothing special about it, you can do it for most attributes, like static, final and UDA's.
It does not work like that. User defined attributes only apply to the current scope, ie your MyClass.mymethod() would *not* have the attribute. With built-in attributes it becomes more "interesting" - for example 'safe' will include child scopes, but 'nothrow' won't. Yes, the current attribute situation in D is a mess. No, attribute inference isn't the answer. artur
Apr 17 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via 
Digitalmars-d wrote:
 Yes, the current attribute situation in D is a mess.
A more coherent D syntax would make the language more approachable. I find the current syntax to be somewhat annoying. I'd also like to see coherent naming conventions for attributes etc, e.g.

nogc     // assert/prove no gc (for compiled code)
is_nogc  // assume/guarantee no gc (for linked code, or "unprovable" code)
Apr 17 2014
prev sibling parent reply "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via
Digitalmars-d wrote:
 On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:
 On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
 On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright 
 wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
Sure it does. module mymodule; nogc: void myfunc(){} class MyClass { void mymethod() {} } Everything in above code has nogc applied to it. Nothing special about it, can do it for most attributes like static, final and UDA's.
It does not work like that. User defined attributes only apply to the current scope, ie your MyClass.mymethod() would *not* have the attribute. With built-in attributes it becomes more "interesting" - for example ' safe' will include child scopes, but 'nothrow" won't. Yes, the current attribute situation in D is a mess. No, attribute inference isn't the answer. artur
Good point, yes - in the case of a class/struct its methods won't have it applied to them. I have no idea how it could be done, short of manually adding it to the start of those declarations. Either that or we need language changes:

nogc module mymodule;

("something") module mymodule;

Well, it is a possible option for improvement. Either way, I'm not gonna advocate this.
Apr 17 2014
parent "Dejan Lekic" <dejan.lekic gmail.com> writes:
  nogc
 module mymodule;
This is precisely what I had in mind.
Apr 17 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
If I have this program:

__gshared int x = 5;
int main() {
    int[] a = [x, x + 10, x * x];
    return a[0] + a[1] + a[2];
}

If I compile with all optimizations DMD produces this X86 asm, that contains the call to __d_arrayliteralTX, so that main can't be nogc:

__Dmain:
L0:     push EAX
        push EAX
        mov EAX,offset FLAT:_D11TypeInfo_Ai6__initZ
        push EBX
        push ESI
        push EDI
        push 3
        push EAX
        call near ptr __d_arrayliteralTX
        mov EBX,EAX
        mov ECX,_D4test1xi
        mov [EBX],ECX
        mov EDX,_D4test1xi
        add EDX,0Ah
        mov 4[EBX],EDX
        mov ESI,_D4test1xi
        imul ESI,ESI
        mov 8[EBX],ESI
        mov EAX,3
        mov ECX,EBX
        mov 014h[ESP],EAX
        mov 018h[ESP],ECX
        add ESP,8
        mov EDI,010h[ESP]
        mov EAX,[EDI]
        add EAX,4[EDI]
        add EAX,8[EDI]
        pop EDI
        pop ESI
        pop EBX
        add ESP,8
        ret

If I compile that code with ldc2 without optimizations the result is similar, there is a call to __d_newarrayvT:

__Dmain:
        pushl %ebp
        movl %esp, %ebp
        pushl %esi
        andl $-8, %esp
        subl $32, %esp
        leal __D11TypeInfo_Ai6__initZ, %eax
        movl $3, %ecx
        movl %eax, (%esp)
        movl $3, 4(%esp)
        movl %ecx, 12(%esp)
        calll __d_newarrayvT
        movl %edx, %ecx
        movl __D4test1xi, %esi
        movl %esi, (%edx)
        movl __D4test1xi, %esi
        addl $10, %esi
        movl %esi, 4(%edx)
        movl __D4test1xi, %esi
        imull __D4test1xi, %esi
        movl %esi, 8(%edx)
        movl %eax, 16(%esp)
        movl %ecx, 20(%esp)
        movl 20(%esp), %eax
        movl 20(%esp), %ecx
        movl (%eax), %eax
        addl 4(%ecx), %eax
        movl 20(%esp), %ecx
        addl 8(%ecx), %eax
        leal -4(%ebp), %esp
        popl %esi
        popl %ebp
        ret

But if I compile the code with ldc2 with full optimizations the compiler is able to perform a bit of escape analysis, to see the array doesn't need to be allocated, and produces this asm:

__Dmain:
        movl __D4test1xi, %eax
        movl %eax, %ecx
        imull %ecx, %ecx
        addl %eax, %ecx
        leal 10(%eax,%ecx), %eax
        ret

Now there are no memory allocations. So what's the right behaviour of nogc? Is it possible to compile this main with a future version of ldc2 if I compile the code with full optimizations?

Bye, bearophile
Apr 17 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
 Is it possible to compile this main with a future version of 
 ldc2 if I compile the code with full optimizations?
Sorry, I meant to ask if it's possible to compile this main with a nogc applied to it if I compile it with ldc2 with full optimizations. Bye, bearophile
Apr 17 2014
prev sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Thursday, 17 April 2014 at 09:46:23 UTC, bearophile wrote:
 Walter Bright:

 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
If I have this program:

 __gshared int x = 5;
 int main() {
     int[] a = [x, x + 10, x * x];
     return a[0] + a[1] + a[2];
 }

 If I compile with all optimizations DMD produces this X86 asm, that contains the call to __d_arrayliteralTX, so that main can't be nogc:

 But if I compile the code with ldc2 with full optimizations the compiler is able to perform a bit of escape analysis, and to see the array doesn't need to be allocated, and produces the asm:

 Now there are no memory allocations. So what's the right behaviour of nogc? Is it possible to compile this main with a future version of ldc2 if I compile the code with full optimizations?

 Bye, bearophile
That code is not nogc safe, as you're creating a dynamic array within it. The fact that LDC2 at full optimizations doesn't actually allocate is simply an optimization and does not affect the design of the code. If you wanted it to be nogc, you could use:

int main() nogc {
    int[3] a = [x, x + 10, x * x];
    return a[0] + a[1] + a[2];
}
Apr 17 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Kapps:

 That code is not  nogc safe, as you're creating a dynamic array 
 within it. The fact that LDC2 at full optimizations doesn't 
 actually allocate is simply an optimization and does not affect 
 the design of the code.
Walter has answered to another person:
 The  nogc will tell you if it will allocate on the gc or not, 
 on a case by case basis, and you can use easy workarounds as 
 necessary.
That can be read as the opposite of what you say. The DIP60 needs to contain a clear answer on this point. Bye, bearophile
Apr 17 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Kapps:

 That code is not  nogc safe, as you're creating a dynamic array 
 within it. The fact that LDC2 at full optimizations doesn't 
 actually allocate is simply an optimization and does not affect 
 the design of the code.
I've added the opposite of what you say in the DIP. So Walter can fix it if it's wrong, or leave it there if it's right, because that DIP can't omit specifying one behaviour or the other: http://wiki.dlang.org/DIP60 Bye, bearophile
Apr 17 2014
parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Friday, 18 April 2014 at 01:07:40 UTC, bearophile wrote:
 Kapps:

 That code is not  nogc safe, as you're creating a dynamic 
 array within it. The fact that LDC2 at full optimizations 
 doesn't actually allocate is simply an optimization and does 
 not affect the design of the code.
I've added the opposite of what you say in the DIP. So Walter can fix it if it's wrong, or leave it there if it's right, because that DIP can't omit specifying one behaviour or the other: http://wiki.dlang.org/DIP60 Bye, bearophile
Flags such as -O are specifically not supposed to change program behaviour. Allowing this would completely discard that guarantee and allow code to compile only with a single compiler. Honestly, I think expecting that code to be allowed to use nogc is a huge mistake, and I disagree with editing the DIP to include this solely because you decided it should.
Apr 18 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Kapps:

 Flags such as -O are specifically not supposed to change 
 program behaviour. This being the case would completely discard 
 that and allow code to be compiled only with a single compiler.
OK.
 Honestly, I think expecting that code to be allowed to use 
  nogc is a huge mistake and disagree with editing the DIP to 
 include this solely because you decided it should.
That Wiki page is editable, so if it's wrong it takes one minute to fix the text I have written. What I have decided to include is an explicit explanation regarding what a correct D compiler should do in that case. Bye, bearophile
Apr 18 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/18/2014 10:50 AM, bearophile wrote:
 Honestly, I think expecting that code to be allowed to use  nogc is a
 huge mistake and disagree with editing the DIP to include this solely
 because you decided it should.
That Wiki page is editable, so if it's wrong it takes one minute to fix the text I have written. What I have decided to include is an explicit explanation regarding what a correct D compiler should do in that case.
In which case? In case some version of LDC2 is able to avoid the heap allocation using full optimizations? :o)
Apr 18 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Timon Gehr:

 In which case? In case some version of LDC2 is able to avoid 
 the heap allocation using full optimizations? :o)
Please take a look, I have just added one more note in the optimizations section: http://wiki.dlang.org/DIP60#Behaviour_in_presence_of_optimizations Bye, bearophile
Apr 21 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
OK, a bit late to the thread, seeing how it has already gone off into the ARC off-topic domain :( An attempt to get back to the original point.

I was asking for nogc earlier and I find the proposed implementation too naive to be practically useful, to the point where I will likely be forced to ignore it in general.

First problem is that, by analogy with `pure`, there is no such thing as "weakly nogc". A common pattern for performance intensive code is to use output buffers of some sort:

void foo(OutputRange buffer)
{
    buffer.put(42);
}

`foo` can't be nogc here if OutputRange uses the GC as backing allocator. However I'd really like to use it to verify that no hidden allocations happen other than those explicitly coming from user-supplied arguments. In fact, if such a "weakly nogc" thing had been available, it could have been used to clean up Phobos reliably.

With the current limitations nogc is only useful to verify that embedded code which does not have a GC at all does not use any GC-triggering language features before it comes to weird linker errors / rt-asserts. But that does not work well either, because of the next problem:

The point where an "I told ya" statement is extremely tempting :) bearophile has already pointed this out - for some language features like array literals you can't be sure about possible usage of the GC at compile-time, as it depends on optimizations in the backend. And making nogc conservative in that regard, marking all literals as nogc-prohibited, will cripple the language beyond reason.

I can see only one fix for that - defining a clear set of array literal use cases where optimizing the GC away is guaranteed by the spec, and relying on it.
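To make the "weakly nogc" idea concrete, a minimal sketch (hypothetical semantics, invented names - this is not what DIP60 specifies):

// The sink may allocate via the GC internally.
struct GCBackedSink
{
    int[] data;
    void put(int v) { data ~= v; }  // GC allocation on growth
}

// Would qualify as "weakly nogc": it never allocates on its own,
// only through the caller-supplied sink.put().
void fill(Sink)(ref Sink sink, int n)
{
    foreach (i; 0 .. n)
        sink.put(i);
}

With today's rules the template version at least gets its attributes inferred per instantiation; a non-template foo taking a class-based OutputRange has no equivalent escape hatch.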
Apr 17 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:
 void foo(OutputRange buffer)
 {
     buffer.put(42);
 }

 `foo` can't be  nogc here if OutputRange uses GC as backing 
 allocator. However I'd really like to use it to verify that no
Can't you write foo as a template? Then if "buffer" is a ring buffer the memory might be allocated by GC, which is ok if put() does not call the GC and is marked as such. Where this falls apart is when you introduce a compacting GC and the nogc code is run in a real time priority thread. Then you need both nogc_function_calls and nogc_memory . Of course, resorting to templates requires some thinking-ahead, and makes reuse more difficult. You'll probably end up with the nogc crowd creating their own NoGCOutputRange… :-P Ola.
Apr 17 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ola Fosheim Grøstad:

 Where this falls apart is when you introduce a compacting GC 
 and the  nogc code is run in a real time priority thread. Then 
 you need both  nogc_function_calls and  nogc_memory .
Perhaps the nogc proposal is not flexible enough. So probably the problem needs to be looked from a higher distance to find a smarter and more flexible solution. Koka and other ideas appeared in this thread can be seeds for ideas. Bye, bearophile
Apr 17 2014
parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 15:48:29 UTC, bearophile wrote:
 Ola Fosheim Grøstad:

 Where this falls apart is when you introduce a compacting GC 
 and the  nogc code is run in a real time priority thread. Then 
 you need both  nogc_function_calls and  nogc_memory .
Perhaps the nogc proposal is not flexible enough. So probably the problem needs to be looked from a higher distance to find a smarter and more flexible solution. Koka and other ideas appeared in this thread can be seeds for ideas. Bye, bearophile
Reason why nogc is desired in general is because it is relatively simple and can be done right now. That alone puts it above all ideas with alternate GC implementations and/or major type system tweaks. It only needs some tweaks to make it actually useful for common-enough practical cases.
Apr 17 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 15:39:38 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:
 void foo(OutputRange buffer)
 {
    buffer.put(42);
 }

 `foo` can't be  nogc here if OutputRange uses GC as backing 
 allocator. However I'd really like to use it to verify that no
Can't you write foo as a template? Then if "buffer" is a ring buffer the memory might be allocated by GC, which is ok if put() does not call the GC and is marked as such.
put() may call the GC to grow the buffer - this is the very point. What is desired is to check if anything _else_ calls the GC, thus the "weakly nogc" parallel.
 Where this falls apart is when you introduce a compacting GC 
 and the  nogc code is run in a real time priority thread. Then 
 you need both  nogc_function_calls and  nogc_memory .
True hard real-time is always special, I am speaking about "softer" but still performance-demanding code (like one that is used in Sociomantic).
 Of course, resorting to templates requires some thinking-ahead, 
 and makes reuse more difficult.
I don't see how templates can help here right now.
 You'll probably end up with the  nogc crowd creating their own 
 NoGCOutputRange… :-P

 Ola.
Apr 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 15:49:44 UTC, Dicebot wrote:
 put() may call GC to grow the buffer, this is the very point. 
 What is desired is to check if anything _else_ does call GC, 
 thus the "weak  nogc" parallel.
What do you need that for?
 Of course, resorting to templates requires some 
 thinking-ahead, and makes reuse more difficult.
I don't see how templates can help here right now.
Wasn't the problem that the type-interface was less constrained than the type-interface allowed by a nogc constrained function? I perceive the problem as being this: you cannot fully specify all types because of the combinatorial explosion. In which case templates tend to be the easy-hack-solution where the type system falls short?
Apr 17 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 17:48:39 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 15:49:44 UTC, Dicebot wrote:
 put() may call GC to grow the buffer, this is the very point. 
 What is desired is to check if anything _else_ does call GC, 
 thus the "weak  nogc" parallel.
What do you need that for?
As a middle-ground between hard-core low level real-time code and applications that don't care about garbage at all. As soon as you keep your buffers growing and shrink only occasionally, "GC vs malloc" issues become less important. But it is important to not generate any actual garbage, as it may trigger collection cycles. Such a weak nogc could help to avoid triggering allocations by accident and encourage usage of output ranges / buffers. Code in Sociomantic currently uses similar idioms, but having compiler help to verify it would be valuable in my opinion.
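A minimal sketch of that buffer idiom (the type is invented for illustration, not Sociomantic code) - capacity is kept between uses, so steady-state operation produces no garbage:

struct ReusableBuffer
{
    private ubyte[] data;  // capacity survives across uses
    private size_t len;

    void reset() { len = 0; }  // reuse, don't reallocate

    void put(ubyte b)
    {
        if (len == data.length)
            data.length = data.length ? data.length * 2 : 64;  // rare GC call
        data[len++] = b;
    }

    ubyte[] slice() { return data[0 .. len]; }
}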
 Of course, resorting to templates requires some 
 thinking-ahead, and makes reuse more difficult.
I don't see how templates can help here right now.
Wasn't the problem that the type-interface was less constrained than the type-interface allowed by a nogc constrained function?
No, this is something completely different, see my answer before.
Apr 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 18:00:25 UTC, Dicebot wrote:
 Such weak  nogc could help to avoid triggering allocations by 
 an accident and encourage usage of output ranges / buffers.
Ok, more like a "lintish" feature of the "remind me if I use too much of feature X in these sections" variety. I view nogc as a safeguard against crashes when I let threads run while the garbage collector is in a collection phase. A means to bypass "stop-the-world" collection by having pure nogc threads.
 No, this is something completely different, see my answer 
 before.
Got it, I didn't see that answer until after I wrote my reply. Ola.
Apr 17 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 18:18:49 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 April 2014 at 18:00:25 UTC, Dicebot wrote:
 Such weak  nogc could help to avoid triggering allocations by 
 an accident and encourage usage of output ranges / buffers.
Ok, more like a "lintish" feature of the "remind me if I use too much of feature X in these sections" variety. I view nogc as a safeguard against crashes when I let threads run while the garbage collector is in a collection phase. A means to bypass "stop-the-world" collection by having pure nogc threads.
Yeah for me nogc is more of a lint thing in general. But it can't be done by lint because nogc needs to affect mangling to work with separate compilation reliably. I think for your scenario having dedicated nogc threads makes more sense, this can be built on top of plain function attribute nogc.
Apr 17 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 17 April 2014 at 18:26:25 UTC, Dicebot wrote:
 I think for your scenario having dedicated  nogc threads makes 
 more sense, this can be built on top of plain function 
 attribute  nogc.
Yes, that could be a life saver. Nothing is more annoying than random crashes due to concurrency issues because something "slipped in". But I think both you and Bearophile are right in pointing out that it needs more thinking through. Especially the distinction between calling into GC code and dealing with GC memory. For instance, maybe it is possible to have a memory pool split in two, so that the no-GC thread can allocate during a collection cycle, but be required to have a lock-free book-keeping system for all GC memory referenced from the no-GC thread. That way you might be able to use GC allocation from the no-GC thread. Maybe that is a reasonable trade-off. (I haven't thought this through, it just occurred to me) Ola.
Apr 17 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:


 First problem is that, by an analogy with `pure`, there is no 
 such thing as "weakly  nogc ". A common pattern for performance 
 intensive code is to use output buffers of some sort:

 void foo(OutputRange buffer)
 {
     buffer.put(42);
 }

 `foo` can't be  nogc here if OutputRange uses GC as backing 
 allocator. However I'd really like to use it to verify that no 
 hidden allocations happen other than those explicitly coming 
 from user-supplied arguments. In fact, if such "weakly  nogc" 
 thing would have been available, it could be used to clean up 
 Phobos reliably.
I don't really see how this is any different from the safe, nothrow or pure attributes. Either your code is templated, and the attributes get inferred. Or it's not templated, and you have to rely on `put`'s base-class signature. If it's not marked nogc (or safe, pure, or nothrow), then that's that.

--------

That said, your proposal could be applied to all attributes in general, not just nogc in particular. In practice though, a simple unittest should cover all your needs: simply create a nogc (pure, nothrow, safe, ctfe-able) unittest, and call it with a trivial argument. If it doesn't pass, then it probably means you made a gc-related (or impure, throwing, unsafe) call that's unrelated to the passed parameters.

In any case, that's how we've been doing it in phobos since we've started actually caring about attributes.
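For instance, a minimal sketch of that idiom (myFunc is a stand-in for the function under test):

int myFunc(int x) { return x + 1; }

pure nothrow @safe unittest
{
    // fails to compile if myFunc is impure, throwing or unsafe;
    // once nogc lands, adding it here checks for GC calls too
    assert(myFunc(1) == 2);
}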
Apr 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 9:42 AM, monarch_dodra wrote:
 That said, your proposal could be applied for all attributes in general. Not
 just  nogc in particular. In practice though, a simple unittest should cover
all
 your needs: simply create a nogc (pure, nothrow, safe, ctfe-able) unittest, and
 call it with a trivial argument. If it doesn't pass, then it probably means you
 made a gc-related (or impure, throwing, unsafe) call that's unrelated to the
 passed parameters.
Yup, that should work fine.
Apr 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 8:02 AM, Dicebot wrote:


 First problem is that, by an analogy with `pure`, there is no such thing as
 "weakly  nogc ". A common pattern for performance intensive code is to use
 output buffers of some sort:

 void foo(OutputRange buffer)
 {
      buffer.put(42);
 }

 `foo` can't be  nogc here if OutputRange uses GC as backing allocator. However
 I'd really like to use it to verify that no hidden allocations happen other
than
 those explicitly coming from user-supplied arguments. In fact, if such "weakly
  nogc" thing would have been available, it could be used to clean up Phobos
 reliably.

 With current limitations  nogc is only useful to verify that embedded code
which
 does not have GC at all does not use any GC-triggering language features before
 it comes to weird linker errors / rt-asserts. But that does not work good
either
 because of next problem:
Remember that nogc will be inferred for template functions. That means that whether it is nogc or not will depend on its arguments being nogc, which is just what is needed.
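A small sketch of what that inference means in practice (illustrative code, not from the DIP):

// attributes, including the future nogc, are inferred per instantiation
auto apply(alias f, T)(T x) { return f(x); }

// apply!(a => a + 1)(5)  -- the body allocates nothing: inferred nogc
// apply!(a => [a])(5)    -- builds an array literal on the GC heap: not nogc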


 The point where "I told ya" statement is extremely tempting :) bearophile has
 already pointed this out - for some of language features like array literals
you
 can't be sure about possible usage of GC at compile-time as it depends on
 optimizations in backend. And making  nogc conservative in that regard and
 marking all literals as  nogc-prohibited will cripple the language beyond
reason.

 I can see only one fix for that - defining clear set of array literal use cases
 where optimizing GC away is guaranteed by spec and relying on it.
I know that you bring up the array literal issue and gc a lot, but this is simply not a major issue with nogc. The nogc will tell you if it will allocate on the gc or not, on a case by case basis, and you can use easy workarounds as necessary.
Apr 17 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 I know that you bring up the array literal issue and gc a lot, 
 but this is simply not a major issue with  nogc. The  nogc will 
 tell you if it will allocate on the gc or not, on a case by 
 case basis, and you can use easy workarounds as necessary.
Assuming you have seen my examples with dmd/ldc: are you saying that, depending on the optimization level, the compiler will accept or reject the @nogc attribute on a function?

Bye,
bearophile
Apr 17 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright wrote:
 With current limitations  nogc is only useful to verify that 
 embedded code which
 does not have GC at all does not use any GC-triggering 
 language features before
 it comes to weird linker errors / rt-asserts. But that does 
 not work good either
 because of next problem:
Remember that nogc will be inferred for template functions. That means that whether it is nogc or not will depend on its arguments being nogc, which is just what is needed.
No, it looks like I stated that very poorly, because everyone understood it in the completely opposite way. What I mean is that `put()` is NOT @nogc and it still should work. Just as weakly pure is a kind of pure that is allowed to mutate its arguments, the proposed "weakly @nogc" can only call the GC via functions directly accessible from its arguments.


 The point where "I told ya" statement is extremely tempting :) 
 bearophile has
 already pointed this out - for some of language features like 
 array literals you
 can't be sure about possible usage of GC at compile-time as it 
 depends on
 optimizations in backend. And making  nogc conservative in 
 that regard and
 marking all literals as  nogc-prohibited will cripple the 
 language beyond reason.

 I can see only one fix for that - defining clear set of array 
 literal use cases
 where optimizing GC away is guaranteed by spec and relying on 
 it.
I know that you bring up the array literal issue and gc a lot, but this is simply not a major issue with nogc. The nogc will tell you if it will allocate on the gc or not, on a case by case basis, and you can use easy workarounds as necessary.
I beg your pardon, I did overstate this one, but the temptation was just too high :) On the actual topic - what "case by case" basis do you have in mind? There are no cases mentioned in the spec where literals are guaranteed not to allocate, AFAIK. Compiler developers probably know them, but I definitely don't.
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 10:41 AM, Dicebot wrote:
 On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright wrote:
 With current limitations  nogc is only useful to verify that embedded code
which
 does not have GC at all does not use any GC-triggering language features before
 it comes to weird linker errors / rt-asserts. But that does not work good
either
 because of next problem:
Remember that nogc will be inferred for template functions. That means that whether it is nogc or not will depend on its arguments being nogc, which is just what is needed.
No, it looks like I stated that very poorly, because everyone understood it in the completely opposite way. What I mean is that `put()` is NOT @nogc and it still should work. Just as weakly pure is a kind of pure that is allowed to mutate its arguments, the proposed "weakly @nogc" can only call the GC via functions directly accessible from its arguments.
I don't see value for this behavior.
 I know that you bring up the array literal issue and gc a lot, but this is
 simply not a major issue with  nogc. The  nogc will tell you if it will
 allocate on the gc or not, on a case by case basis, and you can use easy
 workarounds as necessary.
Beg my pardon, I have overstated this one indeed but temptation was just too high :) On actual topic - what "case by case" basis do you have in mind? There are no cases mentioned in spec when literals are guaranteed to not allocated AFAIK. Probably compiler developers know them but definitely not me.
That's why the compiler will tell you if it will allocate or not.
Apr 17 2014
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 17 April 2014 at 19:51:38 UTC, Walter Bright wrote:
 On 4/17/2014 10:41 AM, Dicebot wrote:
 On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright 
 wrote:
 With current limitations  nogc is only useful to verify that 
 embedded code which
 does not have GC at all does not use any GC-triggering 
 language features before
 it comes to weird linker errors / rt-asserts. But that does 
 not work good either
 because of next problem:
Remember that nogc will be inferred for template functions. That means that whether it is nogc or not will depend on its arguments being nogc, which is just what is needed.
No, it looks like I stated that very poorly, because everyone understood it in the completely opposite way. What I mean is that `put()` is NOT @nogc and it still should work. Just as weakly pure is a kind of pure that is allowed to mutate its arguments, the proposed "weakly @nogc" can only call the GC via functions directly accessible from its arguments.
I don't see value for this behavior.
It's a formal promise that the function won't do any GC work *itself*, only indirectly if you pass it something that implicitly does heap allocation. E.g. you can implement some complicated function foo that writes to a user-provided output range and guarantee that all GC usage is in the control of the caller and his output range. The advantage of having this as language instead of documentation is the turtles-all-the-way-down principle: if some function deep inside the call chain under foo decides to use a GC buffer then it's a compile-time-error.
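A sketch of that enforcement, with made-up functions:

void bottom()
{
    auto p = new int; // GC allocation deep in the call chain
}

void middle() @nogc
{
    // bottom();  // uncommenting this is a compile-time error:
                  // @nogc function 'middle' cannot call non-@nogc function 'bottom'
}

void top() @nogc
{
    middle(); // fine: everything below is @nogc
}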
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 1:03 PM, John Colvin wrote:
 E.g. you can implement some complicated function foo that writes to a
 user-provided output range and guarantee that all GC usage is in the control of
 the caller and his output range.
As mentioned elsewhere here, it's easy enough to do a unit test for this.
 The advantage of having this as language instead of documentation is the
 turtles-all-the-way-down principle: if some function deep inside the call chain
 under foo decides to use a GC buffer then it's a compile-time-error.
And that's how nogc works.
Apr 17 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 17 April 2014 at 22:04:17 UTC, Walter Bright wrote:
 On 4/17/2014 1:03 PM, John Colvin wrote:
 E.g. you can implement some complicated function foo that 
 writes to a
 user-provided output range and guarantee that all GC usage is 
 in the control of
 the caller and his output range.
As mentioned elsewhere here, it's easy enough to do a unit test for this.
Erm, no? You can possibly track GC calls by using a custom druntime fork, but you can't track the origins of such calls in the source tree without compiler help. I hope Don's DConf talk will convince you how useful enforcing such a model is ;)
 The advantage of having this as language instead of 
 documentation is the
 turtles-all-the-way-down principle: if some function deep 
 inside the call chain
 under foo decides to use a GC buffer then it's a 
 compile-time-error.
And that's how nogc works.
And it is not good enough for practical reasons, i.e. we won't be able to use @nogc for most of Phobos.
Apr 19 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/19/2014 6:14 AM, Dicebot wrote:
 On Thursday, 17 April 2014 at 22:04:17 UTC, Walter Bright wrote:
 On 4/17/2014 1:03 PM, John Colvin wrote:
 E.g. you can implement some complicated function foo that writes to a
 user-provided output range and guarantee that all GC usage is in the control of
 the caller and his output range.
As mentioned elsewhere here, it's easy enough to do a unit test for this.
Erm, no? You can possibly track GC calls by using a custom druntime fork, but you can't track the origins of such calls in the source tree without compiler help.
nogc is there to help.
 The advantage of having this as language instead of documentation is the
 turtles-all-the-way-down principle: if some function deep inside the call chain
 under foo decides to use a GC buffer then it's a compile-time-error.
And that's how nogc works.
And it is not good enough for practical reasons, i.e. we won't be able to use @nogc for most of Phobos.
The first step is to identify the parts of Phobos that unnecessarily use the GC. nogc will help a lot with this.
Apr 19 2014
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 17:41:58 UTC, Walter Bright wrote:
 The first step is to identify the parts of Phobos that 
 unnecessarily use the GC.  nogc will help a lot with this.
Unless I missed it, I think we still haven't answered the issue with throwing exceptions. I'm particularly interested in asserts and Errors. Will "assert" be usable in @nogc code? Because if not, then basically *none* of Phobos will be @nogc.

Also, I don't think statically pre-allocating the error is an acceptable workaround. E.g.:

assert(arr.length, "arr is empty.");

vs

version (assert)
{
    static Error e = new Error("arr is empty.");
    if (!arr.length) throw e;
}
Apr 19 2014
next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 17:51:38 UTC, monarch_dodra wrote:
 Also, I don't think statically pre-allocating the error is an 
 acceptable workaround.
Just in case that's not clear, I mean for the generic "assert(...)". For throwing actual run-time exceptions, I think it's fine to require a static preallocation in @nogc code. Though it does raise the issue of things such as global state (what if a catcher changes the msg?), exception threading and, finally, purity. It seems @nogc prevents being both "can throw" and "pure" at the same time... ?
Apr 19 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/19/2014 10:51 AM, monarch_dodra wrote:
 On Saturday, 19 April 2014 at 17:41:58 UTC, Walter Bright wrote:
 The first step is to identify the parts of Phobos that unnecessarily use the
 GC.  nogc will help a lot with this.
Unless I missed it, I think we still haven't answered the issue with throwing exceptions. I'm particularly interested in asserts and Errors.
It wouldn't affect asserts.
Apr 19 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 19 April 2014 at 17:41:58 UTC, Walter Bright wrote:
 On 4/19/2014 6:14 AM, Dicebot wrote:
 On Thursday, 17 April 2014 at 22:04:17 UTC, Walter Bright 
 wrote:
 On 4/17/2014 1:03 PM, John Colvin wrote:
 E.g. you can implement some complicated function foo that 
 writes to a
 user-provided output range and guarantee that all GC usage 
 is in the control of
 the caller and his output range.
As mentioned elsewhere here, it's easy enough to do a unit test for this.
Erm, no? You can possibly track GC calls by using a custom druntime fork, but you can't track the origins of such calls in the source tree without compiler help.
nogc is there to help.
 The advantage of having this as language instead of 
 documentation is the
 turtles-all-the-way-down principle: if some function deep 
 inside the call chain
 under foo decides to use a GC buffer then it's a 
 compile-time-error.
And that's how nogc works.
And it is not good enough for practical reasons, i.e. we won't be able to use @nogc for most of Phobos.
The first step is to identify the parts of Phobos that unnecessarily use the GC. nogc will help a lot with this.
I feel like the origin of the discussion has been completely lost here and we don't speak the same language right now. The very point I made initially is that @nogc, the way it is defined in your DIP, is too restrictive to be effectively used in Phobos.

In a lot of standard library functions you may actually need to allocate as part of the algorithm, so strict @nogc is not applicable there. However, it is still extremely useful that no _hidden_ allocations happen outside of the well-defined user API, and this is something that a less restrictive version of @nogc could help with.

The fact that you propose using unit tests to verify the same guarantees hints that I have completely failed to explain my proposal, but I can't really rephrase it any better without some help from your side to identify the point of confusion.
Apr 19 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 19 April 2014 at 18:05:48 UTC, Dicebot wrote:
 In a lot of standard library functions you may actually need to
 allocate as part of the algorithm, so strict @nogc is not applicable
 there. However, it is still extremely useful that no _hidden_
 allocations happen outside of the well-defined user API, and this is
 something that a less restrictive version of @nogc could help with.
What you want is to mark some functions with allow_gc ? So that you only get GC where specified as a "contract"? But isn't this more suitable for dynamic tracing/logging? Because what you want is probably the frequency of GC allocations in a particular call tree?
Apr 19 2014
parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 19 April 2014 at 18:12:32 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 19 April 2014 at 18:05:48 UTC, Dicebot wrote:
 In a lot of standard library functions you may actually need to
 allocate as part of the algorithm, so strict @nogc is not applicable
 there. However, it is still extremely useful that no _hidden_
 allocations happen outside of the well-defined user API, and this is
 something that a less restrictive version of @nogc could help with.
What you want is to mark some functions with allow_gc ? So that you only get GC where specified as a "contract"? But isn't this more suitable for dynamic tracing/logging? Because what you want is probably the frequency of GC allocations in a particular call tree?
Original proposal, updated and re-worded as formally as possible:

"Weak @nogc" functions / methods are identical to normal @nogc functions, but are allowed to trigger the GC via functions / methods directly accessible from their arguments. In any other context, such weak @nogc functions can only call strict @nogc functions.

Exact details get tricky rather quickly, and this is something that needs thorough examination, but the rationale behind this is to move all allocation decisions exclusively to the caller side. Frequency does not matter here, only the fact that the function does not cause allocations of its own. Again, an example of a pattern that should be common in Phobos:

void foo(ref OutputBuffer buffer) @nogc
{
    buffer.put(42);          // buffer.put may be not @nogc; this turns foo into "weak @nogc"
    throw new Exception();   // but this is prohibited anyway
    someGCFunction();        // as well as this..
    int[] arr;
    arr ~= 42;               // ..and this
}

The user of such functions will be 100% sure that if any allocations happen, he is the one to blame and can tweak it in his own code, possibly using an OutputBuffer implementation that does not use the GC or allocations at all.
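For example, the caller could plug in a fixed-size buffer. A minimal sketch (hypothetical types; note that under today's rules this only works when foo is a template with inferred attributes, rather than the explicitly annotated function above):

struct StackOutputBuffer
{
    int[64] data;
    size_t len;

    void put(int value) @nogc nothrow
    {
        assert(len < data.length);
        data[len++] = value;
    }
}

void fooT(Buffer)(ref Buffer buffer)  // attributes inferred per instantiation
{
    buffer.put(42);
}

void caller() @nogc
{
    StackOutputBuffer buf;
    fooT(buf);  // no GC allocation anywhere in this call
}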
Apr 19 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 19 April 2014 at 18:05:48 UTC, Dicebot wrote:
 I feel like the origin of the discussion has been completely
 lost here and we don't speak the same language right now. The
 very point I made initially is that @nogc, the way it is
 defined in your DIP, is too restrictive to be effectively used
 in Phobos.

 In a lot of standard library functions you may actually need to
 allocate as part of the algorithm, so strict @nogc is not applicable
 there. However, it is still extremely useful that no _hidden_
 allocations happen outside of the well-defined user API, and this is
 something that a less restrictive version of @nogc could help with.

 The fact that you propose using unit tests to verify the same
 guarantees hints that I have completely failed to explain my
 proposal, but I can't really rephrase it any better without some
 help from your side to identify the point of confusion.
I feel what you are asking for is something that, while part of the interface (a qualifier), makes no actual promises about the function itself, but rather only helps you with your implementation? That feels wrong. I see little value in doing:

void fun(T)(ref T t) @nogc
{
    t.allocateOnGc(); // "This is fine because T does it?"
}

This is what you are asking for? Am I correct?

I also still don't see why @nogc would behave any differently from nothrow, pure or @safe.

***THAT SAID***, unless I'm mistaken, wouldn't your requirements be better answered by the "qualified block" proposal? Remember that proposal that would have allowed us to do:

void fun() @safe
{
    @trusted
    {
        // doesn't create scope.
        T t = someUnsafeFunctionITrust();
    }
    t.doIt(); // doIt must be @safe
}

By the same standard, you could use it to enforce no GC, but only in certain areas. E.g.:

void fun(T)(ref T t) // inferred
{
    t.allocate(); // I'm fine with this using the GC

    @nogc:
    // But as of this line, I want the compiler to ban the GC altogether.
    doThings(); // I can rest assured this won't allocate on the GC.
}

Personally, I still feel this would be a *much* more natural, idiomatic and transparent approach than the current @trusted lambda *crap* we've been doing. And it would indeed help with writing things correctly.

...or I misunderstood you, in which case I apologize.
Apr 19 2014
parent reply "Ola Fosheim Grøstad" writes:
I think it would be useful to be able to mark structs as 
 nogc_alloc or something similar.

Interpretation: this struct and any data directly reachable from 
it is guaranteed to not be GC allocated. Then a precise collector 
could avoid scanning those and pointers to them.

Even with  nogc threads for audio/visual real time computations 
the GC itself will have to get down to consistent < 50-200ms 
freezes to get fluid interaction for content computations.
Apr 19 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 20 April 2014 06:56, via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I think it would be useful to be able to mark structs as  nogc_alloc or
 something similar.

 Interpretation: this struct and any data directly reachable from it is
 guaranteed to not be GC allocated. Then a precise collector could avoid
 scanning those and pointers to them.
Why wouldn't precise GC be able to do this anyway? It already has data about everything it scans. It can easily produce a 'don't bother scanning this' bit at the start of that data without programmer assistance?

 Even with @nogc threads for audio/visual real time computations the GC
 itself will have to get down to consistent < 50-200ms freezes to get fluid
 interaction for content computations.
50ms is 3 whole frames, 200ms is 12.5 frames! I thought you were a realtime programmer?

In a visual realtime app, the GC will only be acceptable when it will not interrupt for more than 1ms or so (and I consider that quite generous, I'd be more comfortable with < 500µs). Otherwise you'll lose frames anyway; if your entire world exists within 16ms slices, how can you budget the frame's usual workload around the GC interruption? And what if the GC interrupts more than once per frame? What if you have very little free memory?
Apr 19 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 20 April 2014 at 00:59:26 UTC, Manu via Digitalmars-d 
wrote:
 Interpretation: this struct and any data directly reachable 
 from it is
 guaranteed to not be GC allocated. Then a precise collector 
 could avoid
 scanning those and pointers to them.
Why wouldn't precise GC be able to do this anyway? It already has data about everything it scans. It can easily produce a 'don't bother scanning this' bit at the start of that data without programmer assistance?
It doesn't know what can be reached through a node in a graph. It doesn't know what is on the GC heap.
 In a visual realtime app, the GC will only be acceptable when 
 it will not
 interrupt for more than 1ms or so (and I consider that quite 
 generous, I'd
 be more comfortable with < 500µs). Otherwise you'll lose frames 
 anyway; if
No, because the nogc thread will not be interrupted. Think MVC: the model is under GC, the view/controller is under nogc.
Apr 19 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 20 April 2014 14:33, via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Sunday, 20 April 2014 at 00:59:26 UTC, Manu via Digitalmars-d wrote:

 Interpretation: this struct and any data directly reachable from it is
 guaranteed to not be GC allocated. Then a precise collector could avoid
 scanning those and pointers to them.
Why wouldn't precise GC be able to do this anyway? It already has data about everything it scans. It can easily produce a 'don't bother scanning this' bit at the start of that data without programmer assistance?
It doesn't know what can be reached through a node in a graph. It doesn't know what is on the GC heap.

 In a visual realtime app, the GC will only be acceptable when it will not
 interrupt for more than 1ms or so (and I consider that quite generous, I'd
 be more comfortable with < 500µs). Otherwise you'll lose frames anyway; if

 No, because the  nogc thread will not be interrupted.

 Think MVC: the model is under GC, the view/controller is under  nogc.
I don't really see why a proposed nogc thread wouldn't hold references to GC allocated objects... what would such a thread do if it didn't have any data to work with? nogc just says the thread won't allocate, it can still be holding all the references it likes, and still needs to be scanned.
Apr 19 2014
parent "Ola Fosheim Grøstad" writes:
On Sunday, 20 April 2014 at 05:21:48 UTC, Manu via Digitalmars-d 
wrote:
 I don't really see why a proposed  nogc thread wouldn't hold 
 references to
 GC allocated objects... what would such a thread do if it 
 didn't have any
 data to work with?
Approach 1: the @nogc thread can hold references to the GC heap that are known to be under bookkeeping, or that are reachable from objects that are under bookkeeping. No need to scan the @nogc thread then.

Approach 2: separate scanning and freeing in the GC by pushing collected items onto a list of pending objects that wait for the @nogc thread to complete an iteration before they are put on the freelist. This should work for standard culling.

Ola.
Apr 19 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/19/2014 11:05 AM, Dicebot wrote:
 I feel like the origin of the discussion has been completely lost here and we
 don't speak the same language right now. The very point I made initially is
 that @nogc, the way it is defined in your DIP, is too restrictive to be
 effectively used in Phobos.

 In a lot of standard library functions you may actually need to allocate as
 part of the algorithm, so strict @nogc is not applicable there. However, it is
 still extremely useful that no _hidden_ allocations happen outside of the
 well-defined user API, and this is something that a less restrictive version
 of @nogc could help with.

 The fact that you propose using unit tests to verify the same guarantees
 hints that I have completely failed to explain my proposal, but I can't
 really rephrase it any better without some help from your side to identify
 the point of confusion.
The way I understood your idea was that a template could be marked @nogc and yet still allow template arguments that themselves may use the GC. This can be accomplished by creating a unit test that passes non-allocating template parameters, and then verifying that the instantiation is @nogc.
Apr 19 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 19 April 2014 at 18:41:39 UTC, Walter Bright wrote:
 On 4/19/2014 11:05 AM, Dicebot wrote:
 I feel like the origin of the discussion has been completely
 lost here and we don't speak the same language right now. The
 very point I made initially is that @nogc, the way it is
 defined in your DIP, is too restrictive to be effectively used
 in Phobos.

 In a lot of standard library functions you may actually need to
 allocate as part of the algorithm, so strict @nogc is not
 applicable there. However, it is still extremely useful that no
 _hidden_ allocations happen outside of the well-defined user
 API, and this is something that a less restrictive version of
 @nogc could help with.

 The fact that you propose using unit tests to verify the same
 guarantees hints that I have completely failed to explain my
 proposal, but I can't really rephrase it any better without some
 help from your side to identify the point of confusion.
The way I understood your idea was that a template could be marked @nogc and yet still allow template arguments that themselves may use the GC. This can be accomplished by creating a unit test that passes non-allocating template parameters, and then verifying that the instantiation is @nogc.
The only way that works is if the unittest has coverage of all possible currently non-GC-using instantiations of all templates all the way down the call-tree.* Imagine the case where some function deep down the call-tree has a `static if(is(T == NastyType)) doGCStuff();`. In order to protect against this, you have to check the internals of the entire call-tree in order to write the required unittest, and verify manually that you haven't missed a case every time anything changes.

*alright, technically only those that can be instantiated by the function you're testing, but this still blows up pretty fast.
Apr 20 2014
parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 20 April 2014 at 18:48:49 UTC, John Colvin wrote:
 The way I understood your idea was that a template could be
 marked @nogc and yet still allow template arguments that
 themselves may use the GC.

 This can be accomplished by creating a unit test that passes
 non-allocating template parameters, and then verifying that the
 instantiation is @nogc.
The only way that works is if the unittest has coverage of all possible currently non-GC-using instantiations of all templates all the way down the call-tree.* Imagine the case where some function deep down the call-tree has a `static if(is(T == NastyType)) doGCStuff();`. In order to protect against this, you have to check the internals of the entire call-tree in order to write the required unittest, and verify manually that you haven't missed a case every time anything changes. *alright, technically only those that can be instantiated by the function you're testing, but this still blows up pretty fast.
Looks like John has a similar thinking pattern for this specific case :P

Also, your proposal does not add any hygiene checks to non-template functions, _and_ it requires creating boilerplate output range mocks (for all possible duck types) and extra static asserts for all functions that you may want to mark as weak @nogc. I don't see it as a clean solution that can actually be used in a library. The very marketing value of @nogc is not to show that something like it is possible (it already is), but to show "hey, look how easy and clean it is!"
Apr 21 2014
prev sibling parent reply "Don" <x nospam.com> writes:
On Thursday, 17 April 2014 at 19:51:38 UTC, Walter Bright wrote:
 On 4/17/2014 10:41 AM, Dicebot wrote:
 On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright 
 wrote:
 With current limitations  nogc is only useful to verify that 
 embedded code which
 does not have GC at all does not use any GC-triggering 
 language features before
 it comes to weird linker errors / rt-asserts. But that does 
 not work good either
 because of next problem:
Remember that nogc will be inferred for template functions. That means that whether it is nogc or not will depend on its arguments being nogc, which is just what is needed.
No, it looks like I have stated that very wrong because everyone understood it in completely opposite way. What I mean is that `put()` is NOT nogc and it still should work. Same as weakly pure is kind of pure but allowed to mutate its arguments, proposed "weakly nogc" can only call GC via functions directly accessible from its arguments.
I don't see value for this behavior.
It turns out to have enormous value. I will explain this in my DConf talk. A little preview:

Almost all of our code at Sociomantic obeys this behaviour, and it's probably the most striking feature of our codebase. By "almost all" I mean probably 90% of our code, including all of our libraries. Not just the 5% - 10% that could be marked as @nogc according to your DIP.

The key property it ensures is that, if you make N calls to the function, the number of GC allocations is in O(1). We don't care if it makes 0 allocations or 17.

We're not really interested in whether a function uses the GC or not, since most interesting functions do need to do some memory allocation.

Ideally, we'd want an attribute which could be applied to *all* of Phobos, except for some convenience functions. We have no interest in library code which doesn't behave in that way.
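A minimal sketch of that property (hypothetical code, not Sociomantic's): a formatter that reuses an internal buffer, so N calls cost O(1) GC allocations in total rather than O(N):

struct IntFormatter
{
    private char[] buf; // allocated lazily, then reused across calls

    const(char)[] format(int value)
    {
        import core.stdc.stdio : snprintf;

        if (buf.length < 16)
            buf.length = 16; // at most a handful of GC allocations, ever

        const n = snprintf(buf.ptr, buf.length, "%d", value);
        return buf[0 .. n];
    }
}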
Apr 22 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2014 4:00 AM, Don wrote:
 It turns out to have enormous value. I will explain this in my DConf talk. A
 little preview:
 Almost all of our code at Sociomantic obeys this behaviour, and it's probably
 the most striking feature of our codebase. By "almost all" I mean probably 90%
 of our code, including all of our libraries. Not just the 5% - 10% that could
 be marked as @nogc according to your DIP.

 The key property it ensures is that, if you make N calls to the function, the
 number of GC allocations is in O(1). We don't care if it makes 0 allocations or 17.
I don't really understand how Dicebot's proposal ensures this property. I guess it'll have to wait until DConf!
 We're not really interested in whether a function uses the GC or not, since
most
 interesting functions do need to do some memory allocation.

 Ideally, we'd want an attribute which could applied to *all* of Phobos, except
 for some convenience functions. We have no interest in library code which
 doesn't behave in that way.
Apr 22 2014
prev sibling next sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Would nogc apply to code being evaluated at compile-time? I don't think it should.
Apr 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 9:23 PM, Brad Anderson wrote:
 Would  nogc apply to code being evaluated at compile-time? I
 don't think it should.
Yes, it would be. Compile time functions are not special, in fact, there is no such thing in D.
Apr 17 2014
parent reply "Brad Anderson" <eco gnuk.net> writes:
On Friday, 18 April 2014 at 06:35:33 UTC, Walter Bright wrote:
 On 4/17/2014 9:23 PM, Brad Anderson wrote:
 Would  nogc apply to code being evaluated at compile-time? I
 don't think it should.
Yes, it would be. Compile time functions are not special, in fact, there is no such thing in D.
But surely something like:

struct S
{
    this(int d) { data = d; }

    S opBinary(string op)(S rhs) @nogc
    {
        return S(mixin("data "~op~" rhs.data"));
    }

    private int data;
}

would still work, right? There is no GC activity there.
Apr 17 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/17/2014 11:50 PM, Brad Anderson wrote:
 But surely something like:

 struct S
 {
       this(int d) { data = d; }
       S opBinary(string op)(S rhs) @nogc
       {
         return S(mixin("data "~op~" rhs.data"));
       }

       private int data;
 }

 Would still work, right? There is no GC activity there.
Right, because there is no GC activity!
Apr 18 2014
prev sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 18 April 2014 at 06:50:48 UTC, Brad Anderson wrote:
 On Friday, 18 April 2014 at 06:35:33 UTC, Walter Bright wrote:
 On 4/17/2014 9:23 PM, Brad Anderson wrote:
 Would  nogc apply to code being evaluated at compile-time? I
 don't think it should.
Yes, it would be. Compile time functions are not special, in fact, there is no such thing in D.
But surely something like:

struct S
{
    this(int d) { data = d; }

    S opBinary(string op)(S rhs) @nogc
    {
        return S(mixin("data "~op~" rhs.data"));
    }

    private int data;
}

would still work, right? There is no GC activity there.
I don't think that's a "compile time function" (hence the "there is no such thing in D"). Rather, it's a function called in a CTFE context. It *should* work, since currently, a safe, pure, nothrow function can call an impure, throwing unsafe one for CTFE.
Apr 18 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 http://wiki.dlang.org/DIP60

 Start on implementation:

 https://github.com/D-Programming-Language/dmd/pull/3455
Currently this code doesn't compile because the lambda allocates the closure on the heap:

void main() @nogc
{
    import std.algorithm: map;
    int[3] data = [1, 2, 3];
    immutable int x = 3;
    auto result = data[].map!(y => y * x);
}

test.d(1,6): Error: function D main @nogc function allocates a closure with the GC

Such kind of code is common, so a good amount of range-based code can't be @nogc.

-----------

In the meantime the good Kenji has created a patch for the missing semantics:
https://github.com/D-Programming-Language/dmd/pull/3493

Bye,
bearophile
Apr 24 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2014 6:35 AM, bearophile wrote:
 Currently this code doesn't compile because the lambda allocates the closure on
 the heap:
Pointing out these issues is exactly what nogc is designed to do.
 void main()  nogc {
      import std.algorithm: map;
      int[3] data = [1, 2, 3];
      immutable int x = 3;
      auto result = data[].map!(y => y * x);
 }


 test.d(1,6): Error: function D main  nogc function allocates a closure with
the GC

 Such kind of code is common,
True.
 so a good amount of range-based code can't be  nogc.
"Can't" is a bit strong of a word. Needing a workaround that is perhaps a bit ugly is more accurate. For your example, enum int x = 3; will solve the issue.
Apr 24 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Pointing out these issues is exactly what  nogc is designed to 
 do.
Right.
 "Can't" is a bit strong of a word. Needing a workaround that is 
 perhaps a bit ugly is more accurate. For your example,

     enum int x = 3;

 will solve the issue.
In most cases that "x" is a run-time value, as in my example. Bye, bearophile
Apr 24 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2014 11:49 AM, bearophile wrote:
 Walter Bright:
 "Can't" is a bit strong of a word. Needing a workaround that is perhaps a bit
 ugly is more accurate. For your example,

     enum int x = 3;

 will solve the issue.
In most cases that "x" is a run-time value, as in my example.
You can make it a static and it'll work. Ugly, but it'll work.
Apr 24 2014
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Pointing out these issues is exactly what  nogc is designed to 
 do.
Using @nogc is like putting your code under a newly invented microscope: it allows me to see things that I missed before :-)

Bye,
bearophile
Apr 25 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 24 April 2014 at 13:35:39 UTC, bearophile wrote:
     immutable int x = 3;
     auto result = data[].map!(y => y * x);
 }


 test.d(1,6): Error: function D main  nogc function allocates a 
 closure with the GC

 Such kind of code is common, so a good amount of range-based 
 code can't be  nogc.
Why can't this be on the stack if the referenced local function (lambda) does not outlive the stack frame?
Apr 25 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 07:28:27 -0400, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Thursday, 24 April 2014 at 13:35:39 UTC, bearophile wrote:
     immutable int x = 3;
     auto result = data[].map!(y => y * x);
 }

 test.d(1,6): Error: function D main @nogc function allocates a closure
 with the GC

 Such kind of code is common, so a good amount of range-based code can't
 be @nogc.

 Why can't this be on the stack if the referenced local function (lambda)
 does not outlive the stack frame?
It could. I don't think the compiler is smart enough, as it would need to verify result doesn't go anywhere (flow analysis). I wonder if LDC/GDC could do it.

One interesting thing about this is that the compiler implementation may make some @nogc code valid on some compilers, and invalid on others, even though the resulting execution is the same.

-Steve
Apr 25 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Friday, 25 April 2014 at 12:07:00 UTC, Steven Schveighoffer 
wrote:
 One interesting thing about this is that the compiler 
 implementation may make some  nogc code valid on some 
 compilers, and invalid on others, even though the resulting 
 execution is the same.
I don't think this is a desirable behavior. nogc should be decided in the frontend, before closure allocation optimizations take place. David
Apr 25 2014
next sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Friday, 25 April 2014 at 12:21:40 UTC, David Nadlinger wrote:
 On Friday, 25 April 2014 at 12:07:00 UTC, Steven Schveighoffer 
 wrote:
 One interesting thing about this is that the compiler 
 implementation may make some  nogc code valid on some 
 compilers, and invalid on others, even though the resulting 
 execution is the same.
I don't think this is a desirable behavior. nogc should be decided in the frontend, before closure allocation optimizations take place.
Yes, but the language specification should guarantee that no heap allocation takes place at least for some simple cases. `scope` comes to mind... This can apply to other normally allocating operations, too, like `new` and array concatenation/appending.
Apr 25 2014
parent "Marc Schütz" <schuetzm gmx.net> writes:
Here's another thing that should be allowed that doesn't depend 
on optimizations:

Any code path in a  nogc function that is guaranteed to abort the 
program should be exempt from  nogc enforcement. This includes 
assert(0) and throwing an Error.

Take std.exception.assumeWontThrow() as an example:

T assumeWontThrow(T)(lazy T expr,
                      string msg = null,
                      string file = __FILE__,
                      size_t line = __LINE__) nothrow
{
     try
     {
         return expr;
     }
     catch(Exception e)
     {
         immutable tail = msg.empty ? "." : ": " ~ msg;
         throw new AssertError("assumeWontThrow failed: Expression 
did throw" ~
                               tail, file, line);
     }
}

Currently, this cannot be  nogc, because it uses `new` and `~`. 
However, this only happens in preparation to throwing the 
AssertError, which in turn causes the program to abort. I guess 
in this situation, it's ok to allocate on the GC heap.

With my proposed rule, assumeWontThrow can be deduced to be  nogc 
iff expr is  nogc. This allows more functions to be  nogc.
Apr 27 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 08:21:38 -0400, David Nadlinger <code klickverbot.at>  
wrote:

 On Friday, 25 April 2014 at 12:07:00 UTC, Steven Schveighoffer wrote:
 One interesting thing about this is that the compiler implementation  
 may make some  nogc code valid on some compilers, and invalid on  
 others, even though the resulting execution is the same.
I don't think this is a desirable behavior. nogc should be decided in the frontend, before closure allocation optimizations take place.
I don't know that it's desirable to have nogc reject code even though an allocation does not occur. I agree the situation is not ideal, but nogc is a practical optimization. I can think of other cases where the GC may be optimized out. To reject such code in nogc would make it much less attractive. -Steve
Apr 25 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Friday, 25 April 2014 at 12:59:55 UTC, Steven Schveighoffer 
wrote:
 On Fri, 25 Apr 2014 08:21:38 -0400, David Nadlinger 
 <code klickverbot.at> wrote:

 On Friday, 25 April 2014 at 12:07:00 UTC, Steven Schveighoffer 
 wrote:
 One interesting thing about this is that the compiler 
 implementation may make some  nogc code valid on some 
 compilers, and invalid on others, even though the resulting 
 execution is the same.
I don't think this is a desirable behavior. nogc should be decided in the frontend, before closure allocation optimizations take place.
I don't know that it's desirable to have nogc reject code even though an allocation does not occur. I agree the situation is not ideal, but nogc is a practical optimization. I can think of other cases where the GC may be optimized out. To reject such code in nogc would make it much less attractive. -Steve
It is unacceptable to have code that fails with one compiler and works with the other despite the shared frontend version. Such "enhanced" nogc attributes must be placed into compiler-specific attribute space and not as a core language feature.
Apr 25 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 09:20:08 -0400, Dicebot <public dicebot.lv> wrote:

 On Friday, 25 April 2014 at 12:59:55 UTC, Steven Schveighoffer wrote:
 I don't know that it's desirable to have  nogc reject code even though  
 an allocation does not occur. I agree the situation is not ideal, but  
  nogc is a practical optimization.

 I can think of other cases where the GC may be optimized out. To reject  
 such code in  nogc would make it much less attractive.
It is unacceptable to have code that fails with one compiler and works with the other despite the shared frontend version. Such "enhanced" nogc attributes must be placed into compiler-specific attribute space and not as a core language feature.
Like I said, this may be the ideologically correct position, but please explain to the poor user that even though the compiler does not invoke the GC in his function, it still cannot be @nogc. I think in this case, @nogc is not a good name.

But what really is the difference between a function that is marked as @nogc that compiles on compiler X but not compiler Y, and some custom attribute that compiles on X but not Y?

Consider also that a function compiled by compiler X, which compiler Y links against, may actually be @nogc because compiler X is better at avoiding GC calls. Wouldn't it make sense to be able to mark it @nogc and still use it from compiler Y?

-Steve
Apr 25 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Friday, 25 April 2014 at 14:01:07 UTC, Steven Schveighoffer 
wrote:
 It is unacceptable to have code that fails with one compiler 
 and works with the other despite the shared frontend version. 
 Such "enhanced"  nogc attributes must be placed into 
 compiler-specific attribute space and not as a core language 
 feature.
Like I said, this may be the ideologically correct position, but please explain to the poor user that even though the compiler does not invoke the GC in his function, it still cannot be nogc. I think in this case, nogc is not a good name.
Which is the very reason why I was so insistent on defining the exact set of cases where the optimisation is guaranteed in the spec (before releasing @nogc). Unfortunately, with no success.
 But what really is the difference between a function that is 
 marked as  nogc that compiles on compiler X but not compiler Y, 
 and some custom attribute that compiles on X but not Y?
There are no user-defined attributes that can possibly fail on only some compilers. And compiler-specific attributes are part of that compiler's documentation, never part of the language spec. This is the difference.
Apr 25 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 11:12:54 -0400, Dicebot <public dicebot.lv> wrote:

 On Friday, 25 April 2014 at 14:01:07 UTC, Steven Schveighoffer wrote:
 It is unacceptable to have code that fails with one compiler and works  
 with the other despite the shared frontend version. Such "enhanced"  
  nogc attributes must be placed into compiler-specific attribute space  
 and not as a core language feature.
Like I said, this may be the ideologically correct position, but please explain to the poor user that even though the compiler does not invoke the GC in his function, it still cannot be nogc. I think in this case, nogc is not a good name.
Which is the very reason why I was so insistent on defining the exact set of cases where the optimisation is guaranteed in the spec (before releasing @nogc). Unfortunately, with no success.
Well, nogc is not released yet. Please tell me we don't have to avoid breaking code based on git HEAD...
 But what really is the difference between a function that is marked as  
  nogc that compiles on compiler X but not compiler Y, and some custom  
 attribute that compiles on X but not Y?
There are no user-defined attributes that can possibly fail on only some compilers. And compiler-specific attributes are part of that compiler's documentation, never part of the language spec. This is the difference.
But such a situation would not violate a spec that says "@nogc means there are no hidden GC calls." And the end result is identical -- I must compile function foo on compiler X only.

I agree there are likely no precedents for this.

Another option would be to put such a compiler-specific attribute around the code in question, rather than putting a different attribute than @nogc on the function itself. I think there's really no avoiding that this will happen some way or another.

-Steve
Apr 25 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 11:29:37 -0400, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Fri, 25 Apr 2014 11:12:54 -0400, Dicebot <public dicebot.lv> wrote:

 On Friday, 25 April 2014 at 14:01:07 UTC, Steven Schveighoffer wrote:
 But what really is the difference between a function that is marked as  
  nogc that compiles on compiler X but not compiler Y, and some custom  
 attribute that compiles on X but not Y?
There are no user-defined attributes that can possibly fail on only some compilers. And compiler-specific attributes are part of that compiler's documentation, never part of the language spec. This is the difference.
But such a situation would not violate a spec that says " nogc means there are no hidden GC calls." And the end result is identical -- I must compile function foo on compiler X only.
You know what, in fact, nogc may need to be re-branded as compiler-specific. -Steve
Apr 25 2014
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 25 April 2014 at 15:32:40 UTC, Steven Schveighoffer 
wrote:
 You know what, in fact,  nogc may need to be re-branded as 
 compiler-specific.
You should have a compiler switch that lets you get "compiler-optimal non-portable @nogc".
Apr 25 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 25 Apr 2014 20:50:46 -0400, Ola Fosheim Grøstad
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Friday, 25 April 2014 at 15:32:40 UTC, Steven Schveighoffer wrote:
 You know what, in fact, @nogc may need to be re-branded as
 compiler-specific.

 You should have a compiler switch that lets you get "compiler-optimal
 non-portable @nogc".
I feel like I'm being a nag, but it sure seems to me like having something like that is no different than custom behavior of @nogc. Basically, I have a file blah.d which can only be compiled with X, even if it's only with the option X -extraNogcFunctionality.

In other words, I have this function. It can avoid GC allocations if the compiler can do extra steps to prove it. But not all compilers go to these lengths. So if I only care about compiling with savvy enough compilers, why do I need to use some special compiler-specific escape? I think the attribute is fine being defined as "if you can't do this without calling the GC, refuse to compile," and the compiler may or may not compile it.

In any case, I don't need another explanation; I don't think it will ever make sense to me.

-Steve
Apr 25 2014
parent "Ola Fosheim Grøstad" writes:
On Saturday, 26 April 2014 at 04:49:07 UTC, Steven Schveighoffer 
wrote:
 In any case, I don't need another explanation, I don't think it 
 will ever make sense to me.
It makes sense because there are two different use cases:

1. Library authors, who need a more conservative interpretation of @nogc.

2. Producers of binary releases, who only want to be certain that the GC is not called where it could lead to a crash.

It would be annoying to have to rewrite code when the compiler actually knows that it does not touch the GC. So the latter use case needs the less conservative approach.
Apr 25 2014
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Friday, 25 April 2014 at 15:29:38 UTC, Steven Schveighoffer 
wrote:
 On Fri, 25 Apr 2014 11:12:54 -0400, Dicebot <public dicebot.lv>
 Which is the very reason why I was so insistent on defining the
 exact set of cases where the optimisation is guaranteed in the
 spec (before releasing @nogc). Unfortunately, with no success.
Well, nogc is not released yet. Please tell me we don't have to avoid breaking code based on git HEAD...
It has become a blocker for the next release, though. It has been repeated numerous times that such features need to be developed in their own feature branches until the design is considered at least somewhat solid =/
 But what really is the difference between a function that is 
 marked as  nogc that compiles on compiler X but not compiler 
 Y, and some custom attribute that compiles on X but not Y?
There are no user-defined attributes that can possibly fail on only some compilers. And compiler-specific attributes are part of that compiler's documentation, never part of the language spec. This is the difference.
But such a situation would not violate a spec that says " nogc means there are no hidden GC calls." And the end result is identical -- I must compile function foo on compiler X only.
It is an invalid and useless spec on its own. It would have been a valid spec if it also had a chapter with a definitive list of all cases where hidden GC calls can happen and when they are guaranteed to be optimized away. Otherwise such a spec is as useful as one that says "Maybe your code will compile".
 I agree there are likely no precedents for this.

 Another option would be to put such a compiler-specific
 attribute around the code in question, rather than putting a
 different attribute than @nogc on the function itself. I think
 there's really no avoiding that this will happen some way or another.
I think there should be both. Saying that you need to marry a specific compiler forever once you want to use @nogc is pretty much the same as saying "don't use @nogc".
Apr 25 2014
prev sibling parent "Jacob Carlborg" <doob me.com> writes:
On Friday, 25 April 2014 at 15:29:38 UTC, Steven Schveighoffer 
wrote:

 Well,  nogc is not released yet. Please tell me we don't have 
 to avoid breaking code based on git HEAD...
We've already done that before, with UDAs. So, you never know.

--
/Jacob Carlborg
Apr 26 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Dicebot:

 It is unacceptable to have code that fails with one compiler 
 and works with the other despite the shared frontend version. 
 Such "enhanced"  nogc attributes must be placed into 
 compiler-specific attribute space and not as a core language 
 feature.
This problem was pointed out during this thread long before the merging of the @nogc implementation. Why have Walter & Andrei ignored it? What's the point of creating a DIP if you ignore the problems found in its discussion? What's the point of 338 comment posts if Walter goes on with the original idea anyway? There are some problems in the D development process that must be addressed.

Bye,
bearophile
Apr 25 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2014 7:28 AM, bearophile wrote:
 Dicebot:

 It is unacceptable to have code that fails with one compiler and works with
 the other despite the shared frontend version. Such "enhanced"  nogc
 attributes must be placed into compiler-specific attribute space and not as a
 core language feature.
This problem was pointed out during this thread long before the merging of the @nogc implementation. Why have Walter & Andrei ignored it? What's the point of creating a DIP if you ignore the problems found in its discussion? What's the point of 338 comment posts if Walter goes on with the original idea anyway? There are some problems in the D development process that must be addressed.
The nogc logic is entirely contained in the front end, and is not affected by back end logic.
Apr 25 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 The  nogc logic is entirely contained in the front end, and is 
 not affected by back end logic.
Thank you for your answer, and sorry for sometimes being too nervous. So the problem I was alarmed about doesn't exist.

Some time ago I filed this ER:
https://issues.dlang.org/show_bug.cgi?id=12642

It shows this rejected code that I thought could be accepted:

__gshared int[1] data1;

int[1] bar() @nogc
{
    int x;
    return [x];
}

void main() @nogc
{
    int x;
    data1 = [x];
    int[1] data2;
    data2 = [x];
}

So that's an example of what you are talking about: DMD already performs some stack allocation of array literals that the @nogc check does not see, so it rejects the code. Kenji Hara has commented:
 If you remove  nogc annotation, all array literals will be
 allocated on stack. So this is pure front-end issue,
 and may be fixed easily.
So ER 12642 should either be closed as wontfix, or a front-end rule should be added so that all D compilers allocate those cases on the stack. If I am not missing some more point, what is the best solution?

Bye,
bearophile
Apr 26 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
 If I am not missing some more point, what is the best solution?
Before this question gets lost, I'd like to receive some kind of answer. Thank you, bearophile
Apr 26 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/27/2014 01:32 AM, bearophile wrote:
 If I am not missing some more point, what is the best solution?
Before this question gets lost, I'd like to receive some kind of answer. Thank you, bearophile
The front end already distinguishes dynamic and static array literals (in a limited form); this distinction should simply carry through to code generation, and static array literals should be allowed in @nogc code.
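A sketch of the kind of code that would then be accepted (today it is rejected, per the issue discussed above):

int[2] pair(int x, int y) @nogc
{
    int[2] r = [x, y]; // static array literal: can be built on the stack
    return r;
}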
Apr 26 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-25 16:28, bearophile wrote:

 This problem was pointed out during this thread long before the merging of
 the @nogc implementation. Why have Walter & Andrei ignored it? What's the
 point of creating a DIP if you ignore the problems found in its discussion?
 What's the point of 338 comment posts if Walter goes on with the original
 idea anyway? There are some problems in the D development process that must
 be addressed.
That's a problem. The problem is that if someone has an idea or code they want merged, it's enough to convince one developer with push rights to get it merged.

--
/Jacob Carlborg
Apr 26 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Jacob Carlborg"  wrote in message news:ljfvec$126l$1 digitalmars.com...

 That's a problem. The problem is that if someone has an idea or code they
 want merged, it's enough to convince one developer with push rights to get
 it merged.
At least these days it only happens when Walter and Andrei agree instead of just Walter merging whatever he feels like.
Apr 26 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-04-26 16:43, Daniel Murphy wrote:

 At least these days it only happens when Walter and Andrei agree instead
 of just Walter merging whatever he feels like.
Yeah, but it's still a problem when the rest of the community disagrees. -- /Jacob Carlborg
Apr 27 2014
prev sibling parent "Ola Fosheim Grøstad" writes:
On Friday, 25 April 2014 at 12:07:00 UTC, Steven Schveighoffer 
wrote:
 It could. I don't think the compiler is smart enough, as it 
 would need to verify result doesn't go anywhere (flow analysis).
In that case I'd like to see recursive inlining, if it makes stack allocations more probable.
Apr 25 2014