
digitalmars.D.announce - Go 1.5

reply Rory <rjmcguire gmail.com> writes:
The new GC in Go 1.5 seems interesting. What they say about it is 
certainly interesting.

http://blog.golang.org/go15gc

"To create a garbage collector for the next decade, we turned to 
an algorithm from decades ago. Go's new garbage collector is a 
concurrent, tri-color, mark-sweep collector, an idea first 
proposed by Dijkstra in 1978."
Sep 18 2015
next sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it 
 is certainly interesting.

 http://blog.golang.org/go15gc

 "To create a garbage collector for the next decade, we turned 
 to an algorithm from decades ago. Go's new garbage collector is 
 a concurrent, tri-color, mark-sweep collector, an idea first 
 proposed by Dijkstra in 1978."
I think this was talked about in general. If I remember correctly, the consensus was that:

1. D's GC is really primitive (70's-style stop-the-world) and there's a lot of room for improvement.

2. However, D has much more important problems currently than a slow GC, e.g. std.allocator, a GC-less Phobos, smaller .o files for embedded systems, a better DMD with DDMD, etc.

The reason Go has a better GC than D is that Go users have no choice but to use the GC, while D users have a bunch more options.
Sep 18 2015
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 18-Sep-2015 23:46, Jack Stouffer wrote:
 On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
[snip]
 The reason Go has a better GC than D is that Go users have no choice but
 to use the GC, while D users have a bunch more options.
To put it differently: D is a big language with lots of things to improve/extend, whereas Go is simplistic. This means the focus of the Go team has been on the run-time alone for a while now, while we keep improving the core language together with druntime.

-- 
Dmitry Olshansky
Sep 19 2015
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 18 September 2015 at 20:46:18 UTC, Jack Stouffer wrote:
 On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it 
 is certainly interesting.

 http://blog.golang.org/go15gc

 "To create a garbage collector for the next decade, we turned 
 to an algorithm from decades ago. Go's new garbage collector 
 is a concurrent, tri-color, mark-sweep collector, an idea 
 first proposed by Dijkstra in 1978."
I think this was talked about in general. If I remember correctly, the consensus was that:

1. D's GC is really primitive (70's-style stop-the-world) and there's a lot of room for improvement.

2. However, D has much more important problems currently than a slow GC, e.g. std.allocator, a GC-less Phobos, smaller .o files for embedded systems, a better DMD with DDMD, etc.

The reason Go has a better GC than D is that Go users have no choice but to use the GC, while D users have a bunch more options.
That's just bad excuses.
Sep 20 2015
parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Sunday, 20 September 2015 at 22:41:46 UTC, deadalnix wrote:
 That's just bad excuses.
Excuses? Sure. Bad excuses? Not nearly. The other things I listed are much more important (IMO) than making the GC faster.
Sep 20 2015
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Sunday, 20 September 2015 at 23:43:29 UTC, Jack Stouffer wrote:
 On Sunday, 20 September 2015 at 22:41:46 UTC, deadalnix wrote:
 That's just bad excuses.
Excuses? Sure. Bad excuses? Not nearly. The other things I listed are much more important (IMO) than making the GC faster.
Most excuses are bad. Users do not care about excuses. These ones are especially bad, as they do not provide any valid reason why the task has not been tackled for years, simply that other tasks that are also of importance are not being tackled either.

X is not important because Y is more important. As a result, we have nothing to show for either X or Y. How can this be anything other than a bad excuse?
Sep 20 2015
parent Rory McGuire via Digitalmars-d-announce writes:
With Andrei working more on D maybe he will find time to document how the
compiler works better so more of us can contribute.

On Mon, Sep 21, 2015 at 4:23 AM, deadalnix via Digitalmars-d-announce <
digitalmars-d-announce puremagic.com> wrote:

 On Sunday, 20 September 2015 at 23:43:29 UTC, Jack Stouffer wrote:

 On Sunday, 20 September 2015 at 22:41:46 UTC, deadalnix wrote:

 That's just bad excuses.
Excuses? Sure. Bad excuses? Not nearly. The other things I listed are much more important (IMO) than making the GC faster.
Most excuses are bad. Users do not care about excuses. These ones are especially bad, as they do not provide any valid reason why the task has not been tackled for years, simply that other tasks that are also of importance are not being tackled either.

X is not important because Y is more important. As a result, we have nothing to show for either X or Y. How can this be anything other than a bad excuse?
Sep 21 2015
prev sibling parent Ola Fosheim Grøstad writes:
On Sunday, 20 September 2015 at 23:43:29 UTC, Jack Stouffer wrote:
 On Sunday, 20 September 2015 at 22:41:46 UTC, deadalnix wrote:
 That's just bad excuses.
Excuses? Sure. Bad excuses? Not nearly. The other things I listed are much more important (IMO) than making the GC faster.
The most important thing for a system level language is to get competitive resource management right early on since it is very likely to require language changes. And it has to be more solid than C++ and Rust.
Sep 21 2015
prev sibling next sibling parent Ola Fosheim Grøstad writes:
On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it 
 is certainly interesting.

 http://blog.golang.org/go15gc
Go 1.6 GC roadmap: https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/preview
Sep 19 2015
prev sibling next sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it 
 is certainly interesting.
They went the way of a classical GC-ed language where write barriers are used actively, allowing for a concurrent, incremental and (eventually, if not yet) generational GC. However, it has a cost: pointer field updates are slower than in plain C/C++/D, and overall speed is close to Java.

D tries to be like C and C++, where simple code is fast and straightforward; there are no write barriers, and there never will be without changing the language design. It means D's GC will always be dog slow: it has to stop the world and scan the full heap every time. And that leads to a different usage pattern, where the GC heap should remain small and the GC allocation rate low.
Sep 19 2015
next sibling parent reply Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 08:36:51 UTC, thedeemon wrote:
 full heap every time. And that leads to a different usage 
 pattern where the GC heap should remain small and the GC 
 allocation rate low.
Please, let's stop pretending you only have to scan the GC heap. You have to scan all pointers that somehow can lead to something that can lead to something... that points into the GC heap.

To get out of that you need language constructs that create a verified separation between pointers that can and pointers that cannot reach GC pointers. That's very hard to do, especially since D does not have distinct GC pointer types.
Sep 19 2015
parent reply thedeemon <dlang thedeemon.com> writes:
On Saturday, 19 September 2015 at 09:22:40 UTC, Ola Fosheim 
Grøstad wrote:

 Please, let's stop pretending you only have to scan the GC 
 heap. You have to scan all pointers that somehow can lead to 
 something that can lead to something... that points into the GC 
 heap.
Yes, good point. One should keep root ranges small too. If we carefully use addRoot() and addRange() for data directly pointing into the GC heap, I think we don't need to let the GC scan everything that can lead to this data. This is error-prone in general, of course.
Sep 19 2015
next sibling parent reply Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 17:56:23 UTC, thedeemon wrote:
 Yes, good point. One should keep root ranges small too.
 If we carefully use addRoot() and addRange() for data directly 
 pointing to GC heap I think we don't need to let GC scan 
 everything that can lead to this data. This is error-prone in 
 general, of course.
Yes, it is error-prone when the project grows over time. And even if it looks like it is working, one may forget to remove the addRoot() and then we have a leak. Move semantics could help of course (addRoot in the constructor). But it also makes the transition between GC/non-GC slower.

E.g. as long as you keep a scanned pointer on the stack, addRoot() isn't needed. As long as you know that the object remains included in the GC graph, you also don't need addRoot(). But compiler support is needed to ensure either that the GC graph remains connected or that the pointer actually is pushed to the stack before the object is collected and the stack is scanned. Lots of details that must be right.

I'm not sure if the current collector scans all registers, or just the stack? If it only scans the stack, then you need to lock out the GC collection process between obtaining a pointer and pushing it onto the stack or adding the root.
Sep 19 2015
parent reply Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 18:20:16 UTC, Ola Fosheim 
Grøstad wrote:
 I'm not sure if the current collector scans all registers, or 
 just scans the stack?
According to the docs it scans all registers, but even then one must be careful and do addRoot before the pointer is set, otherwise the CPU might flush the register and a collection could run between setting the pointer and addRoot...?
Sep 19 2015
parent reply Daniel Kozak via Digitalmars-d-announce writes:
No, a collection could not occur if we are speaking about the current D GC
implementation. So it is safe to set the pointer before addRoot.
On 19. 9. 2015 at 21:00, "Ola Fosheim Grøstad via
Digitalmars-d-announce" <digitalmars-d-announce puremagic.com> wrote:

 On Saturday, 19 September 2015 at 18:20:16 UTC, Ola Fosheim Grøstad wrote:
 I'm not sure if the current collector scans all registers, or just scans
 the stack?
According to the docs it scans all registers, but even then one must be careful and do addRoot before the pointer is set, otherwise the CPU might flush the register and a collection could run between setting the pointer and addRoot...?
Sep 19 2015
parent reply Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 19:17:38 UTC, Daniel Kozak 
wrote:
 No, a collection could not occur if we are speaking about the 
 current D GC implementation. So it is safe to set the pointer 
 before addRoot.
It can be triggered by another thread.

Wrong:

    ptr = somestack.pop();
    someglobalptr = ptr;
    // ptr register flushed
    // collection triggered by other thread
    // other thread allocated the same memory for some other object
    addRoot(ptr);

Right:

    ptr = somestack.pop();
    addRoot(ptr);
    ensure memory barrier here;
    someglobalptr = ptr;

?
Sep 19 2015
parent reply Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 19:25:31 UTC, Ola Fosheim 
Grøstad wrote:
 On Saturday, 19 September 2015 at 19:17:38 UTC, Daniel Kozak 
 wrote:
 No, a collection could not occur if we are speaking about the 
 current D GC implementation. So it is safe to set the pointer 
 before addRoot.
It can be triggered by another thread.

Wrong:

    ptr = somestack.pop();
    someglobalptr = ptr;
    // ptr register flushed
    // collection triggered by other thread
    // other thread allocated the same memory for some other object
    addRoot(ptr);
Typo:
 addRoot(someglobalptr);
Sep 19 2015
parent Daniel Kozak via Digitalmars-d-announce writes:
On 19.9.2015 at 21:30, Ola Fosheim Grøstad via Digitalmars-d-announce 
wrote:
 On Saturday, 19 September 2015 at 19:25:31 UTC, Ola Fosheim Grøstad 
 wrote:
 On Saturday, 19 September 2015 at 19:17:38 UTC, Daniel Kozak wrote:
 No, a collection could not occur if we are speaking about the current D GC
 implementation. So it is safe to set the pointer before addRoot.
It can be triggered by another thread.

Wrong:

    ptr = somestack.pop();
    someglobalptr = ptr;
    // ptr register flushed
    // collection triggered by other thread
    // other thread allocated the same memory for some other object
    addRoot(ptr);
Typo:
 addRoot(someglobalptr);
Yes, now it seems possible :)
Sep 19 2015
prev sibling parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2015-09-19 17:56:21 +0000, thedeemon said:

 If we carefully use addRoot() and addRange() for data directly pointing 
 to GC heap I think we don't need to let GC scan everything that can 
 lead to this data. This is error-prone in general, of course.
Well, that's a different name for malloc & free... I don't see any value in using a GC that needs addRoot / addRange; then I can do manual memory management as well.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
Sep 21 2015
prev sibling parent reply Rory McGuire via Digitalmars-d-announce writes:
The impression I got reading the article was that their GC was very much
like our current one except that the marking part of the algorithm was run
concurrently.

That is the only reason I shared the article. To me it seems one should be
able to mark variables/types with the style of memory management they should
use. I suppose that is what allocators are becoming.
Perhaps someone will write a concurrent generational garbage collected
allocator.

On Sat, Sep 19, 2015 at 10:36 AM, thedeemon via Digitalmars-d-announce <
digitalmars-d-announce puremagic.com> wrote:

 On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:

 The new GC in Go 1.5 seems interesting. What they say about it is certainly
 interesting.
They went the way of a classical GC-ed language where write barriers are used actively, allowing for a concurrent, incremental and (eventually, if not yet) generational GC. However, it has a cost: pointer field updates are slower than in plain C/C++/D, and overall speed is close to Java.

D tries to be like C and C++, where simple code is fast and straightforward; there are no write barriers, and there never will be without changing the language design. It means D's GC will always be dog slow: it has to stop the world and scan the full heap every time. And that leads to a different usage pattern, where the GC heap should remain small and the GC allocation rate low.
Sep 19 2015
parent Ola Fosheim Grøstad writes:
On Saturday, 19 September 2015 at 14:12:10 UTC, Rory McGuire 
wrote:
 The impression I got reading the article was that their GC was 
 very much like our current one except that the marking part of 
 the algorithm was run concurrently.
It is quite different. As mentioned they also protect writes to pointers with GC semantics. In D this will be very difficult to get right due to the unsafe regions (e.g. inline asm etc). Go has a compiler backend tailored to their semantics.
Sep 19 2015
prev sibling next sibling parent reply Chris <wendlec tcd.ie> writes:
On Friday, 18 September 2015 at 19:26:27 UTC, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it 
 is certainly interesting.

 http://blog.golang.org/go15gc

 "To create a garbage collector for the next decade, we turned 
 to an algorithm from decades ago. Go's new garbage collector is 
 a concurrent, tri-color, mark-sweep collector, an idea first 
 proposed by Dijkstra in 1978."
I sometimes wonder - and please forgive me my ignorance, because I'm not a GC expert at all - if it would be possible to create a system where the created objects know their own life spans and destroy themselves, once they are no longer used. Like the cells in our bodies.
Sep 21 2015
parent reply Ola Fosheim Grøstad writes:
On Monday, 21 September 2015 at 09:58:31 UTC, Chris wrote:
 I sometimes wonder - and please forgive me my ignorance, 
 because I'm not a GC expert at all - if it would be possible to 
 create a system where the created objects know their own life 
 spans and destroy themselves, once they are no longer used. 
 Like the cells in our bodies.
Yes, this is the system Rust uses, but the compiler has to prove the life spans at compile time. So how convenient that is, is limited to the capabilities of the prover and to how long you want to wait for the computation of the lifetimes.

In essence you have to choose between a simple tracking system that is a bit annoying and solving NP-complete problems (which may work out fine in many cases, but not in all possible configurations).

There are many ways to improve on this based on what tradeoffs you accept. Like, you could segment the heap and prove that at a given point all the objects in the heap have to be dead, and just accept that you waste some memory up until that point. Etc.
Sep 21 2015
parent reply Chris <wendlec tcd.ie> writes:
On Monday, 21 September 2015 at 10:18:17 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 21 September 2015 at 09:58:31 UTC, Chris wrote:
 I sometimes wonder - and please forgive me my ignorance, 
 because I'm not a GC expert at all - if it would be possible 
 to create a system where the created objects know their own 
 life spans and destroy themselves, once they are no longer 
 used. Like the cells in our bodies.
Yes, this is the system Rust uses, but the compiler has to prove the life spans at compile time. So how convenient that is, is limited to the capabilities of the prover and for how long you want to wait for the computation of the life times.
So I'm not completely nuts! Good to know. :) I wonder if something like this is feasible in D.
Sep 21 2015
next sibling parent Ola Fosheim Grøstad writes:
On Monday, 21 September 2015 at 10:25:05 UTC, Chris wrote:
 So I'm not completely nuts! Good to know. :) I wonder if 
 something like this is feasible in D.
I am too. I'm toying with some ideas, but I think it would work mostly for shorter programs. Then again, I'm mostly interested in shorter programs...
Sep 21 2015
prev sibling parent reply ZombineDev <valid_email he.re> writes:
On Monday, 21 September 2015 at 10:25:05 UTC, Chris wrote:
 On Monday, 21 September 2015 at 10:18:17 UTC, Ola Fosheim 
 Grøstad wrote:
 On Monday, 21 September 2015 at 09:58:31 UTC, Chris wrote:
 I sometimes wonder - and please forgive me my ignorance, 
 because I'm not a GC expert at all - if it would be possible 
 to create a system where the created objects know their own 
 life spans and destroy themselves, once they are no longer 
 used. Like the cells in our bodies.
Yes, this is the system Rust uses, but the compiler has to prove the life spans at compile time. So how convenient that is, is limited to the capabilities of the prover and for how long you want to wait for the computation of the life times.
 So I'm not completely nuts! Good to know. :) I wonder if something like this is feasible in D.
There's also a simple thing called smart pointers, which do this with RAII, copy and move semantics. Smart pointers manage the lifetime of the object they point to automatically. You just need to make sure that you access the object only through the smart pointer, because if you get another reference (through other means) that the smart pointer doesn't know about, the smart pointer may free the object too early.

I prefer library-defined smart pointers to language magic, because you can easily modify them to fit your needs. What D needs is a way to enforce that the user can't get unmanaged references to the underlying object managed by the smart pointer.

The killer way to implement this in D is to NOT add complexity in the compiler (and to change the whole language into some imaginable perfectly correct memory management system), but to add a way for library writers to write extensible CTFE checkers that enforce the smart pointer invariants at compile-time.
Sep 21 2015
next sibling parent reply Ola Fosheim Grøstad writes:
On Monday, 21 September 2015 at 11:01:27 UTC, ZombineDev wrote:
 I prefer library-defined smart pointers than language magic, 
 because you can easily modify them to fit your needs. What D 
 needs is a way to enforce that the user can't get unmanaged 
 references to the underlying object managed by the smart 
 pointer.
That's ok. I've done that in my own prototype library by having a "movingptr!T" returned by doing "move(someuniqueptr)" and "borrowptr!T" returned by "borrow(...)". But checking is limited to runtime asserts in destructors in debug builds. It is ok as a runtime hack... but not a competitive solution.
 The killer way to implement this in D is to NOT add complexity 
 in the compiler (and to change the whole language to some 
 imaginable perfect correct memory management system), but to 
 add away for the library writers to write extensible CTFE 
 checkers that enforce the smart pointer invariants at 
 compile-time.
That is most likely even more work than creating a language solution?
Sep 21 2015
parent reply Chris <wendlec tcd.ie> writes:
On Monday, 21 September 2015 at 12:04:11 UTC, Ola Fosheim Grøstad 
wrote:

 That is most likely even more work than creating a language 
 solution?
What's the current state of D's GC? Will std.allocator improve things eventually?
Sep 21 2015
parent Ola Fosheim Grøstad writes:
On Monday, 21 September 2015 at 14:30:19 UTC, Chris wrote:
 What's the current state of D's GC? Will std.allocator improve 
 things eventually?
I don't understand the point of std.allocator. AFAIK the current GC has very limited compiler support.

A smart compiler could move allocations to the stack by doing smart static analysis, cluster pointers that should be traced into the same cache lines, etc. Moving to library allocation just makes it even harder to write a smart compiler, for very little gain IMO (i.e. custom allocators will always be better).
Sep 21 2015
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 21 September 2015 at 11:01:27 UTC, ZombineDev wrote:
 There's also a simple thing called smart pointers which do this 
 with RAII, copy and move semantics. Smart pointers manage the 
 lifetime of the object they point to automatically. You just 
 need to make sure that you access the object only through the 
 smart pointer, because if you get another reference (through 
 other means) that the smart pointer doesn't know about, the 
 smart pointer may free the object too early.
My understanding is that the key benefit of Rust's system is that compile time checks don't have the runtime costs of smart pointers.
Sep 21 2015
parent reply Ola Fosheim Grøstad writes:
On Monday, 21 September 2015 at 18:28:19 UTC, jmh530 wrote:
 My understanding is that the key benefit of Rust's system is 
 that compile time checks don't have the runtime costs of smart 
 pointers.
+ aliasing information.

If the compiler can prove that two pointers point to non-overlapping memory regions, then the compiler can optimize better. This is one of the reasons why Fortran compilers managed to do better than C for a long time.
Sep 21 2015
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 21 September 2015 at 19:32:23 UTC, Ola Fosheim Grøstad 
wrote:
 If the compiler can prove that two pointers point to 
 non-overlapping memory regions then the compiler can optimize 
 better. This is one of the reasons why Fortran compilers 
 managed to do better than C for a long time.
Interesting. Not to resurrect the older D vs. Rust thread, but I have heard that it can be painful to do some things in Rust. D often has the ability to do unsafe things, like disabling the GC. I was looking at how Rust has raw pointers and smart pointers. I'm curious as to what it is missing that makes things more difficult for people. If you or anyone has any idea.
Sep 21 2015
parent reply Ola Fosheim Grostad <ola.fosheim.grostad+dlang gmail.com> writes:
On Tuesday, 22 September 2015 at 02:15:51 UTC, jmh530 wrote:
 Interesting. Not to resurrect the older D vs. Rust thread, but 
 I have heard that it can be painful to do some things in 
 Rust. D often has the ability to do unsafe things, like disable 
 the GC. I was looking at how Rust has raw pointers and smart 
 pointers. I'm curious as to what it is missing that is making 
 things more difficult for people. If you or anyone has any idea.
My knowledge of Rust is only cursory, but if you want graphs (like a doubly linked list) you have to use a different pointer type, just like in C++ where you have to use shared_ptr (+ weak_ptr or raw pointers) and not unique_ptr. You sometimes also have to explicitly state relationships between lifetimes (that one object outlives another).
Sep 21 2015
parent reply Chris <wendlec tcd.ie> writes:
On Tuesday, 22 September 2015 at 03:59:31 UTC, Ola Fosheim 
Grostad wrote:
 On Tuesday, 22 September 2015 at 02:15:51 UTC, jmh530 wrote:
 Interesting. Not to resurrect the older D vs. Rust thread, but 
 I have heard that it can be painful to do some things in 
 Rust. D often has the ability to do unsafe things, like 
 disable the GC. I was looking at how Rust has raw pointers and 
 smart pointers. I'm curious as to what it is missing that is 
 making things more difficult for people. If you or anyone has 
 any idea.
My knowledge of Rust is only cursory, but if you want graphs (like a doubly linked list) you have to use a different pointer type, just like in C++ where you have to use shared_ptr (+ weak_ptr or raw pointers) and not unique_ptr. You sometimes also have to explicitly state relationships between lifetimes (that one object outlives another).
But that's very annoying to work with, and more pain than gain. In my initial post I was thinking of a runtime solution where the object knows its own life cycle, or at least knows when its own death is nigh and destroys itself. I don't know if this is possible at all; I simply borrowed this idea from biology. We don't have GC in our bodies; cells know when it's time for them to go.
Sep 22 2015
parent Ola Fosheim Grøstad writes:
On Tuesday, 22 September 2015 at 09:01:07 UTC, Chris wrote:
 But that's very annoying to work with and more pain than gain.
I don't know... unique_ptr in C++ is quite OK for managing resources, but it does not track "borrowed pointers". But as I point out, one can:

1. do it at runtime in debug builds (have a pointer back to the owning pointer and "assert(*this.owner==this.ptr)" at every dereference).

2. build a prover that does a fair job of proving that the owner lives longer than any pointer copied from the owner.

The caveat is that 2 might either require explicit annotations and/or take a long time, but maybe it is OK that it takes a long time if you only do pointer sanitisation (2) once in a while and use the assert method (1) during daily development?
 In my initial post I was thinking of a runtime solution where 
 the object knows it's own life cycle, or at least knows when 
 its own death is nigh and destroys itself. I don't know if this 
 is possible at all, I simply borrowed this idea from biology. 
 We don't have GC in our bodies, cells know when it's time for 
 them to go.
Yes, that would be C++ shared_ptr and weak_ptr. Then you have to explicitly break all cycles in the graph with weak_ptr: all pointers pointing backwards in the tree of shared_ptrs have to be weak_ptrs.

Kind of expensive, since you need two objects for every object. When all shared pointers are dead you free the object, but you need a reference object to reference-count the weak_ptrs. IIRC (I don't use shared_ptrs much).
Sep 22 2015
prev sibling parent Johannes Pfau <nospam example.com> writes:
On Mon, 21 Sep 2015 19:32:21 +0000,
Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:

 On Monday, 21 September 2015 at 18:28:19 UTC, jmh530 wrote:
 My understanding is that the key benefit of Rust's system is 
 that compile time checks don't have the runtime costs of smart 
 pointers.
+ aliasing information.

If the compiler can prove that two pointers point to non-overlapping memory regions then the compiler can optimize better. This is one of the reasons why Fortran compilers managed to do better than C for a long time.
Unfortunately we have even weaker optimizations than C regarding aliasing information. There's code in druntime and Phobos which breaks C aliasing rules (usually pointer type casts), and this caused real issues on ARM systems with GDC. As the D spec doesn't state anything about aliasing, we simply disable strict aliasing rules. I guess there's also lots of D user code which isn't compatible with strict aliasing rules.
Sep 22 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 09/18/2015 09:26 PM, Rory wrote:
 The new GC in Go 1.5 seems interesting. What they say about it is certainly
 interesting.
 
 http://blog.golang.org/go15gc
 
 "To create a garbage collector for the next decade, we turned to an
 algorithm from decades ago. Go's new garbage collector is a concurrent,
 tri-color, mark-sweep collector, an idea first proposed by Dijkstra in
 1978."
A concurrent collector for sure reduces latency, but it lowers throughput and also steals memory bandwidth from your program. It also requires write barriers, and we decided against them b/c they slow down every program by ~5%. Though it might somehow be possible to make them optional, only for the people using a concurrent GC.

The key to a low-latency/high-throughput GC is being able to incrementally collect the heap. There is a very interesting paper that uses the type system to perform incremental collections.

http://forum.dlang.org/post/mcqr3s$cmf$1 digitalmars.com
Sep 23 2015
parent reply Ola Fosheim Grøstad writes:
On Thursday, 24 September 2015 at 00:08:18 UTC, Martin Nowak 
wrote:
 The key to a low latency/high throughput GC is being able to 
 incrementally collect the heap. There is a very interesting 
 paper that uses the type system to perform incremental 
 collections.

 http://forum.dlang.org/post/mcqr3s$cmf$1 digitalmars.com
I haven't read the paper, but how does this solve collecting things like strings, or other "leaf types", when you use separate compilation units?

The easy thing to do is to use the GC locally (like for a fiber) and use move semantics for moving objects from one locality to the other (between fibers). This is also compatible with next-gen computing where CPUs have local memory. Incidentally, this is also the multithreaded model used in web browsers...
Sep 23 2015
parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 09/24/2015 03:49 AM, Ola Fosheim Grøstad wrote:
 On Thursday, 24 September 2015 at 00:08:18 UTC, Martin Nowak wrote:
 The key to a low latency/high throughput GC is being able to
 incrementally collect the heap. There is a very interesting paper that
 uses the type system to perform incremental collections.

 http://forum.dlang.org/post/mcqr3s$cmf$1 digitalmars.com
I haven't read the paper, but how does this solve collecting things like strings, or other "leaf types" when you use separate compilation units?
We'd use runtime typeinfo.
 The easy thing to do is to use GC locally (like for a fiber) and use
 move semantics for moving objects from one locality to the other
 (between fibers).
Though it's challenging to efficiently manage all the GC structures for a small scope. Doing this per thread is a proven technology (see https://trello.com/c/K7HrSnwo/28-thread-cache-for-gc).
Sep 27 2015
parent reply Ola Fosheim Grøstad writes:
On Sunday, 27 September 2015 at 16:54:52 UTC, Martin Nowak wrote:
 On 09/24/2015 03:49 AM, Ola Fosheim Grøstad wrote:
 I haven't read the paper, but how does this solve collecting 
 things like strings, or other "leaf types" when you use 
 separate compilation units?
We'd use runtime typeinfo.
But doesn't that imply a full scan when you are scanning for common types that live on leaf nodes in the graph?
 The easy thing to do is to use GC locally (like for a fiber) 
 and use
 move semantics for moving objects from one locality to the 
 other
 (between fibers).
Though it's challenging to efficiently manage all the GC structures for a small scope. Doing this per thread is a proven technology (see https://trello.com/c/K7HrSnwo/28-thread-cache-for-gc).
That's a good start, but hardware threads range from 1-32 threads on current CPUs, so it is likely to affect modelling more than doing it at an actor/fiber level would. If you could group N actors on a single GC heap, you could run a simulation across many threads and then collect in between.

Btw, C++ appears to be getting semi-stackless coroutines (no state on the stack when yielding), which also appears to be the model used in Pony-lang. D really should consider a move in that direction, combined with its GC strategy.
Sep 29 2015
parent Martin Nowak <code dawg.eu> writes:
On Tuesday, 29 September 2015 at 08:09:39 UTC, Ola Fosheim 
Grøstad wrote:
 But doesn't that imply a full scan when you are scanning for 
 common types that live on leaf nodes in the graph?
Yes, if you want to collect a very common type, you'd need to scan many types. But using typed allocations you can also predict where to find a lot of garbage, so you collect strings when you can collect a lot of them.
Oct 04 2015