
digitalmars.D - Why does D rely on a GC?

reply "maik klein" <maikklein googlemail.com> writes:
First of all I don't want to insult anyone on language design, I 
just want to know the reason behind the "always on" GC.
I know that the GC has several advantages over reference counting, 
especially when it comes to immutable data structures.
What I don't understand (correct me if I am wrong) is why every 
heap allocation has to be garbage collected, like classes, 
dynamic arrays etc.
Does a GC still have advantages over heap allocations that do not 
need to be reference counted, such as unique_ptr in C++?
The dlang homepage states:

Destructors are used to deallocate resources acquired by an 
object. For most classes, this resource is allocated memory. With 
garbage collection, most destructors then become empty and can be 
discarded entirely.

If I understand it correctly it means that D has a GC, so most 
classes don't need a destructor anymore because they don't need 
to do any cleanup. I am not totally convinced that this would be 
a good trade off in general. Maybe someone could shine some light 
on this statement?
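For reference, the kind of non-refcounted, uniquely owned heap allocation the question mentions can be sketched in Rust, whose Box plays roughly the role of C++'s unique_ptr (illustrative names only, not code from the thread):

```rust
struct Resource {
    name: String,
}

// Deterministic cleanup: Drop runs exactly when the owner goes out
// of scope - no GC and no reference count involved.
impl Drop for Resource {
    fn drop(&mut self) {
        println!("releasing {}", self.name);
    }
}

fn main() {
    // Uniquely owned heap allocation, analogous to C++'s unique_ptr.
    let r = Box::new(Resource { name: String::from("buffer") });
    println!("using {}", r.name);
} // `r` is dropped here, running the destructor
```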
Aug 18 2014
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 18 Aug 2014 10:01:57 +0000
maik klein via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 First of all I don't want to insult anyone on language design, I
 just want to know the reason behind the "always on" GC.
 I know that the GC has several advantages over reference counting,
 especially when it comes to immutable data structures.
 What I don't (correct me if i am wrong) understand is why every
 heap allocation has to be garbage collected, like classes,
 dynamic arrays etc.
 Does a GC still have advantages over heap allocations that do not
 need to be reference counted such as the unique_ptr in c++?
 The dlang homepage states:

 Destructors are used to deallocate resources acquired by an
 object. For most classes, this resource is allocated memory. With
 garbage collection, most destructors then become empty and can be
 discarded entirely.

 If I understand it correctly it means that D has a GC, so most
 classes don't need a destructor anymore because they don't need
 to do any cleanup. I am not totally convinced that this would be
 a good trade off in general. Maybe someone could shine some light
 on this statement?
The biggest reason is memory safety. With a GC, it's possible to make compiler guarantees about memory safety, whereas with manual memory management, it isn't. It's also pretty hard to do stuff like automatic closures and delegates without a GC, and the behavior of D's dynamic arrays - particularly with regards to slices - is much harder to do without a GC. The result is that there are a number of features which you lose out on if you don't have D's GC (though they're not features a language like C++ has, since it doesn't have a GC).

However, it's also not true that the GC is necessarily always on. You can disable it. It's just that if you do so, you lose out on certain language features, and memory management becomes a bit harder (particularly when it comes to constructing classes in malloced memory, but the custom allocators which are in the works should fix that). However, with the way a typical D program works, a lot more goes on the stack than happens with a typical GC language, so even if you couldn't turn the GC off, it wouldn't be as big an impediment as it is in some other languages.

So, having the GC gives us a number of features that aren't really possible without it, and unlike languages like Java, you _can_ turn it off if you want to, though you do lose out on some features when you do. Ultimately, about all we really lose out on by having the GC is the folks who want a systems language freaking out about the fact that D has one, frequently assuming that they'd have to use it and that D is therefore not performant.

- Jonathan M Davis
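The slice point can be seen from the other direction: in a language without a GC, a slice is only safe if the compiler can prove it does not outlive its backing array. A minimal Rust sketch of that alternative (hypothetical helper, not code from the thread):

```rust
// Without a GC, a slice must be tied to its backing storage. Rust
// does this with borrows: the returned slice borrows from `v`, so
// the backing memory cannot be freed while the slice is alive.
fn first_half(v: &[i32]) -> &[i32] {
    &v[..v.len() / 2]
}

fn main() {
    let mut v = vec![1, 2, 3, 4];
    let half = first_half(&v);
    // v.push(5); // here this would be rejected: `v` is still borrowed
    println!("{:?}", half); // prints [1, 2]
    v.push(5); // fine once `half` is no longer used
}
```

D's GC-backed slices sidestep this bookkeeping entirely: the collector keeps the backing array alive for as long as any slice refers to it.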
Aug 18 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible 
 to make compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it).

Bye,
bearophile
Aug 18 2014
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself. And you have to put all those unwraps and type annotations everywhere; I don't think it's fun or productive that way.
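For context, the "unwrap everywhere" complaint refers to Rust's Option/Result wrappers: fallible operations return a wrapped value that has to be unwrapped or matched explicitly at every use. A minimal sketch (illustrative names only):

```rust
use std::collections::HashMap;

fn main() {
    let mut ages: HashMap<&str, u32> = HashMap::new();
    ages.insert("ada", 36);

    // A lookup yields Option<&u32>, not a plain value, so the happy
    // path needs an unwrap (which panics if the key is missing)...
    let age = ages.get("ada").unwrap();
    println!("{}", age); // prints 36

    // ...or a wordier explicit match to handle the failure case.
    match ages.get("grace") {
        Some(a) => println!("found {}", a),
        None => println!("not found"),
    }
}
```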
Aug 18 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ary Borenszweig:

 It's very smart, yes. But it takes half an hour to compile the 
 compiler itself.
I think this is mostly a back-end issue. How much time does it take to compile ldc2? Can't they create a Rust with dmc back-end? :o)
 And you have to put all those unwrap and types everywhere, I 
 don't think it's fun or productive that way.
I've never written Rust programs longer than twenty lines, so I don't know. But I think Rust code is acceptable to write; I have seen code written in far worse situations (think about a program one million lines of code long, written in MUMPS). Apparently the main Rust designer is willing to accept anything to obtain memory safety in Rust's parallel code. And the language is becoming simpler to use and less noisy.

Bye,
bearophile
Aug 18 2014
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/18/14, 9:05 PM, bearophile wrote:
 Ary Borenszweig:

 It's very smart, yes. But it takes half an hour to compile the
 compiler itself.
I think this is mostly a back-end issue. How much time does it take to compile ldc2? Can't they create a Rust with dmc back-end? :o)
Not all hope is lost, though: https://github.com/rust-lang/rust/issues/16624 With such bugs, one can expect a lot of performance improvements in the future :-)
Aug 20 2014
parent "Dicebot" <public dicebot.lv> writes:
On Wednesday, 20 August 2014 at 13:43:19 UTC, Ary Borenszweig 
wrote:
 On 8/18/14, 9:05 PM, bearophile wrote:
 Ary Borenszweig:

 It's very smart, yes. But it takes half an hour to compile the
 compiler itself.
I think this is mostly a back-end issue. How much time does it take to compile ldc2? Can't they create a Rust with dmc back-end? :o)
Not all hope is lost, though: https://github.com/rust-lang/rust/issues/16624 With such bugs, one can expect a lot of performance improvements in the future :-)
btw it takes 6 minutes to build the LDC package from scratch on my machine - that is including cloning the git repos and packaging the actual tarball.
Aug 20 2014
prev sibling next sibling parent "Idan Arye" <GenericNPC gmail.com> writes:
On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
 On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible 
 to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself. And you have to put all those unwrap and types everywhere, I don't think it's fun or productive that way.
Initially, all that type wrapping and scoping and those lifetimes were single-character annotations, and that gave me the impression that the idea was this: once you get comfortable with Rust's type system and syntax, you can use all that fine-grained control over the scope and lifetime of the data to get superior compile-time error checking and to give better cues to the compiler for better performance, without too much effort and without hindering readability (again - once you get comfortable with the type system).

Now, though, when they move more and more syntax out of the language and into the library in an attempt to reach the elegance and simplicity of modern C++, I'm no longer sure that was the true goal of that language...
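For readers who haven't seen them, the lifetime annotations under discussion look roughly like this (a minimal sketch in present-day syntax, not code from the thread):

```rust
// The annotation 'a ties the returned reference to both inputs: the
// result is guaranteed not to outlive either argument, and the
// compiler checks this at every call site.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("hello");
    let b = String::from("hi");
    println!("{}", longer(&a, &b)); // prints "hello"
}
```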
Aug 18 2014
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
 On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible 
 to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself.
The compilation speed is caused by the C++ code in their compiler backend (LLVM), which gets compiled at least twice during the bootstrapping process.
 And you have to put all those unwrap and types everywhere, I 
 don't think it's fun or productive that way.
There I fully agree. If they don't improve the usability of lifetimes, I don't see Rust being adopted by average developers.

--
Paulo
Aug 18 2014
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/19/14, 3:50 AM, Paulo Pinto wrote:
 On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
 On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself.
The compilation speed is caused by the C++ code in their compiler backend (LLVM), which gets compiled at least twice during the bootstrapping process.
Actually, it's 26m to just compile Rust without LLVM. Take a look at this:

https://twitter.com/steveklabnik/status/496774607610052608

Then here someone from the team says he can't see a way to improve the performance by an order of magnitude:

https://www.mail-archive.com/rust-dev mozilla.org/msg02856.html

(but I don't know how true that is)
 And you have to put all those unwrap and types everywhere, I don't
 think it's fun or productive that way.
There I fully agree. If they don't improve lifetime's usability, I don't see Rust being adopted by average developers.
This. But maybe programming in a safe way is inherently hard? Who knows...
Aug 19 2014
next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Ary Borenszweig"  wrote in message news:lsviva$2ip0$1 digitalmars.com...

 Actually, it's 26m to just compile Rust without LLVM. Take a look at this:
Funny, the DDMD frontend compiles in ~6 seconds.
Aug 19 2014
parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/19/14, 10:55 AM, Daniel Murphy wrote:
 "Ary Borenszweig"  wrote in message news:lsviva$2ip0$1 digitalmars.com...

 Actually, it's 26m to just compile Rust without LLVM. Take a look at
 this:
Funny, the DDMD frontend compiles in ~6 seconds.
Nimrod's compiler takes 5 seconds. Crystal takes 6 seconds. I think compiling ldc2 also takes about the same time. When I compile librustc with Rust with stats on, it takes 1.1 seconds to just *parse* the code. I think there's something very bad in their code...
Aug 19 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ary Borenszweig:

 Then here someone from the team says he can't see a way to 
 improve the performance by an order of magnitude:

 https://www.mail-archive.com/rust-dev mozilla.org/msg02856.html

 (but I don't know how true that is)
Can't they remove some type inference from the language? Type inference is handy (but I write down all type signatures in Haskell, sometimes even for nested functions), but if it costs so much in compilation time, then perhaps it's a good idea to remove some type inference from Rust?

Bye,
bearophile
Aug 19 2014
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/19/14, 11:51 AM, bearophile wrote:
 Ary Borenszweig:

 Then here someone from the team says he can't see a way to improve the
 performance by an order of magnitude:

 https://www.mail-archive.com/rust-dev mozilla.org/msg02856.html

 (but I don't know how true that is)
Can't they remove some type inference from the language? Type inference is handy (but I write down all type signatures in Haskell, sometimes even for nested functions) but if it costs so much in compilation time then perhaps isn't it a good idea to remove some type inference from Rust? Bye, bearophile
Crystal has *global* type inference. If you look at the compiler's code you will find very few type annotations (mostly for generic types and for argument type restrictions). Compiling the compiler takes 6 seconds (recompiling it takes 3 seconds). D also has auto, Nimrod has let, and both compilers are very fast. I don't think type inference is what makes their compiler slow.

Here are the full stats:

$ time CFG_VERSION=1 CFG_RELEASE=0 rustc -Z time-passes src/librustc/lib.rs
time: 0.519 s   parsing
time: 0.026 s   gated feature checking
time: 0.000 s   crate injection
time: 0.170 s   configuration 1
time: 0.083 s   plugin loading
time: 0.000 s   plugin registration
time: 1.803 s   expansion
time: 0.326 s   configuration 2
time: 0.309 s   maybe building test harness
time: 0.321 s   prelude injection
time: 0.363 s   assigning node ids and indexing ast
time: 0.023 s   checking that all macro invocations are gone
time: 0.031 s   external crate/lib resolution
time: 0.046 s   language item collection
time: 1.250 s   resolution
time: 0.027 s   lifetime resolution
time: 0.000 s   looking for entry point
time: 0.023 s   looking for plugin registrar
time: 0.063 s   freevar finding
time: 0.126 s   region resolution
time: 0.025 s   loop checking
time: 0.047 s   stability index
time: 0.126 s   type collecting
time: 0.050 s   variance inference
time: 0.265 s   coherence checking
time: 17.294 s  type checking
time: 0.044 s   check static items
time: 0.190 s   const marking
time: 0.037 s   const checking
time: 0.378 s   privacy checking
time: 0.080 s   intrinsic checking
time: 0.070 s   effect checking
time: 0.843 s   match checking
time: 0.184 s   liveness checking
time: 1.569 s   borrow checking
time: 0.518 s   kind checking
time: 0.033 s   reachability checking
time: 0.204 s   death checking
time: 0.835 s   lint checking
time: 0.000 s   resolving dependency formats
time: 25.645 s  translation
time: 1.325 s   llvm function passes
time: 0.766 s   llvm module passes
time: 40.950 s  codegen passes
time: 46.521 s  LLVM passes
time: 0.607 s   running linker
time: 3.372 s   linking

real 1m46.062s
user 1m41.727s
sys  0m3.333s

So apparently type checking takes a long time, and also generating the llvm code. But it seems waaaay too much for what it is.
Aug 19 2014
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/19/14, 12:01 PM, Ary Borenszweig wrote:
 On 8/19/14, 11:51 AM, bearophile wrote:
 Ary Borenszweig:

 Then here someone from the team says he can't see a way to improve the
 performance by an order of magnitude:

 https://www.mail-archive.com/rust-dev mozilla.org/msg02856.html

 (but I don't know how true that is)
Can't they remove some type inference from the language? Type inference is handy (but I write down all type signatures in Haskell, sometimes even for nested functions) but if it costs so much in compilation time then perhaps isn't it a good idea to remove some type inference from Rust? Bye, bearophile
Crystal has *global* type inference. If you look at the compiler's code you will find very few type annotations (mostly for generic types and for arguments types restrictions). Compiling the compiler takes 6 seconds (recompiling it takes 3 seconds). D also has auto, Nimrod has let, and both compilers are very fast. I don't think type inference is what makes their compiler slow. Here are the full stats: [...] So apparently type checking takes a long time, and also generating the llvm code. But it seems waaaay too much for what it is.
Also, the list of passes seems way too big. It's ok from a purist point of view to make the compiler nice and clean, but that's not a good way to make a fast compiler. The sad thing is that Mozilla is behind the project, so people are really excited about it. Other languages don't have a big corporation behind them and yet have faster compilers (and are nicer languages, I think ^_^).
Aug 19 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 19 August 2014 at 15:16:31 UTC, Ary Borenszweig wrote:
 Also, the list seems way too big. It's ok from a purist point 
 of view, to make the compiler nice and clean. But that's not a 
 good way to make a fast compiler.
This is why I won't bother investing any time in it for a few more years at least. It may have really cool language features, a big development team and famous names behind it, but in terms of compiler maturity there is still a very long road to go until it gets even to DMD's capabilities.
Aug 19 2014
parent reply "bachmeier" <no spam.com> writes:
On Tuesday, 19 August 2014 at 16:17:02 UTC, Dicebot wrote:
 On Tuesday, 19 August 2014 at 15:16:31 UTC, Ary Borenszweig 
 wrote:
 Also, the list seems way too big. It's ok from a purist point 
 of view, to make the compiler nice and clean. But that's not a 
 good way to make a fast compiler.
This is why I won't bother investing any time in it for a few more years at least. It may have really cool language features, big development team and famous names behind it but in terms of compiler maturity it is still a very long road to go until it gets even to DMD capabilities.
I won't look at it again for a different reason. They're the types that say "A monad is just a monoid in the category of endofunctors, what's the problem?" but they're serious. My last interaction with Rust was when I commented that adoption would be hurt if they require an understanding of the memory model just to get started, to which they responded more or less that it's not a big deal. At that point I concluded the language was lost. I can only imagine what it will look like in five years.
Aug 19 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 19 August 2014 at 17:11:21 UTC, bachmeier wrote:
 This is why I won't bother investing any time in it for a few 
 more years at least. It may have really cool language 
 features, big development team and famous names behind it but 
 in terms of compiler maturity it is still a very long road to 
 go until it gets even to DMD capabilities.
I won't look at it again for a different reason. They're the types that say "A monad is just a monoid in the category of endofunctors, what's the problem?" but they're serious. My last interaction with Rust was when I commented that adoption would be hurt if they require an understanding of the memory model just to get started, to which they responded more or less that it's not a big deal. At that point I concluded the language was lost. I can only imagine what it will look like in five years.
Actually I also don't think it is a big deal. Yes, it is a crucial blocker for any "casual" adoption and will likely prevent its adoption in the web service domain, for example (in a way similar to vibe.d) - but the language has never been designed for such use cases. It is for complicated projects with non-trivial performance requirements, and there is a certain point of complexity where educating hired programmers about category theory may be cheaper than dealing with maintenance in an overly lax language. It does look like a niche language but a very good one in its declared niche.
Aug 19 2014
parent reply "bachmeier" <no spam.com> writes:
On Tuesday, 19 August 2014 at 17:16:32 UTC, Dicebot wrote:

 It does look like a niche language but a very good one in 
 declared niche.
On that we agree. It's great for its niche. I was picturing a Java or .NET programmer looking at the language. Java devs complain about Scala. I can't imagine what they'd say about Rust.
Aug 19 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 19 Aug 2014 18:13:27 +0000
bachmeier via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Java or .NET programmer looking at the language. Java devs
 complain about Scala. I can't imagine what they'd say about Rust.
Java devs can speak?! O_O
Aug 19 2014
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 19 August 2014 at 18:13:28 UTC, bachmeier wrote:
 On Tuesday, 19 August 2014 at 17:16:32 UTC, Dicebot wrote:

 It does look like a niche language but a very good one in 
 declared niche.
On that we agree. It's great for its niche. I was picturing a Java or .NET programmer looking at the language. Java devs complain about Scala. I can't imagine what they'd say about Rust.
I imagine you are speaking about enterprise coding drones, as I also develop in Java and already expressed my opinion about Rust. :) -- Paulo
Aug 20 2014
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 19 August 2014 at 17:11:21 UTC, bachmeier wrote:
 My last interaction with Rust was when I commented that 
 adoption would be hurt if they require an understanding of the 
 memory model just to get started, to which they responded more 
 or less that it's not a big deal. At that point I concluded the 
 language was lost. I can only imagine what it will look like in 
 five years.
Same here. If you want to solve the C++ problem, solve compilation speed first. I can't see programmers suddenly being willing to manage lifetimes explicitly, and I can see the Rust syntax creating an endless stream of complaints.
Aug 19 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 19.08.2014 19:18, schrieb ponce:
 On Tuesday, 19 August 2014 at 17:11:21 UTC, bachmeier wrote:
 My last interaction with Rust was when I commented that adoption would
 be hurt if they require an understanding of the memory model just to
 get started, to which they responded more or less that it's not a big
 deal. At that point I concluded the language was lost. I can only
 imagine what it will look like in five years.
Same here. Want to solve the C++ problem, solve compilation speed first. I can't see programmers suddenly willing to manage lifetimes explicitly, also I can see the Rust syntax create an endless stream of complaints.
I like the ML syntax; the problem is the extra Perl-like syntax for lifetimes and such. The way things are, Swift is probably going to get more developers than Rust.

--
Paulo
Aug 19 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/19/14, 10:11 AM, bachmeier wrote:
 On Tuesday, 19 August 2014 at 16:17:02 UTC, Dicebot wrote:
 On Tuesday, 19 August 2014 at 15:16:31 UTC, Ary Borenszweig wrote:
 Also, the list seems way too big. It's ok from a purist point of
 view, to make the compiler nice and clean. But that's not a good way
 to make a fast compiler.
This is why I won't bother investing any time in it for a few more years at least. It may have really cool language features, big development team and famous names behind it but in terms of compiler maturity it is still a very long road to go until it gets even to DMD capabilities.
I won't look at it again for a different reason. They're the types that say "A monad is just a monoid in the category of endofunctors, what's the problem?" but they're serious. My last interaction with Rust was when I commented that adoption would be hurt if they require an understanding of the memory model just to get started, to which they responded more or less that it's not a big deal. At that point I concluded the language was lost. I can only imagine what it will look like in five years.
What is the main Rust forum? Thanks, Andrei
Aug 19 2014
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 19.08.2014 20:22, schrieb Andrei Alexandrescu:
 On 8/19/14, 10:11 AM, bachmeier wrote:
 On Tuesday, 19 August 2014 at 16:17:02 UTC, Dicebot wrote:
 On Tuesday, 19 August 2014 at 15:16:31 UTC, Ary Borenszweig wrote:
 Also, the list seems way too big. It's ok from a purist point of
 view, to make the compiler nice and clean. But that's not a good way
 to make a fast compiler.
This is why I won't bother investing any time in it for a few more years at least. It may have really cool language features, big development team and famous names behind it but in terms of compiler maturity it is still a very long road to go until it gets even to DMD capabilities.
I won't look at it again for a different reason. They're the types that say "A monad is just a monoid in the category of endofunctors, what's the problem?" but they're serious. My last interaction with Rust was when I commented that adoption would be hurt if they require an understanding of the memory model just to get started, to which they responded more or less that it's not a big deal. At that point I concluded the language was lost. I can only imagine what it will look like in five years.
What is the main Rust forum? Thanks, Andrei
According to this post, StackOverflow https://mail.mozilla.org/pipermail/rust-dev/2014-August/011019.html
Aug 19 2014
prev sibling parent "Meta" <jared771 gmail.com> writes:
On Tuesday, 19 August 2014 at 18:22:46 UTC, Andrei Alexandrescu 
wrote:
 What is the main Rust forum? Thanks, Andrei
Until very recently it was the mailing list [0]. There is now also a Discourse forum [1]. [0] rust-dev mozilla.org [1] http://discuss.rust-lang.org/
Aug 19 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/19/2014 07:11 PM, bachmeier wrote:
 On Tuesday, 19 August 2014 at 16:17:02 UTC, Dicebot wrote:
 On Tuesday, 19 August 2014 at 15:16:31 UTC, Ary Borenszweig wrote:
 Also, the list seems way too big. It's ok from a purist point of
 view, to make the compiler nice and clean. But that's not a good way
 to make a fast compiler.
This is why I won't bother investing any time in it for a few more years at least. It may have really cool language features, big development team and famous names behind it but in terms of compiler maturity it is still a very long road to go until it gets even to DMD capabilities.
I won't look at it again for a different reason. They're the types that say "A monad is just a monoid in the category of endofunctors, what's the problem?"
Sure, so one should point out that the problem may be made out to be that the monoidal product [1][2] is underspecified for someone unfamiliar with the convention (in this case it should be given by composition of endofunctors, and the associator is given pointwise by identity morphisms). (But of course, the more fundamental problem is actually that this characterization is not abstract enough and hence harder to decipher than necessary. A monad can be defined in an arbitrary bicategory... :o) ) What do /you/ think is the problem?
  but they're serious.
I have a hard time believing this. While it is true that those concepts are not complicated [3], it seems to be generally acknowledged that some carefully compressed characterization/definition alone is not usually the most helpful introduction of them, especially so if the application is going to be in a more concrete setting. [1] http://en.wikipedia.org/wiki/Monoid_%28category_theory%29 [2] http://en.wikipedia.org/wiki/Monoidal_category [3] But, quite apparently, they are sometimes hard to communicate.
Aug 19 2014
parent reply Philippe Sigaud via Digitalmars-d <digitalmars-d puremagic.com> writes:
 I won't look at it again for a different reason. They're the types that
 say "A monad is just a monoid in the category of endofunctors, what's
 the problem?"
Sure, so one should point out that problem may be made out to be that the monoidal product [1][2] is underspecified for someone unfamiliar with the convention (in this case it should be given by composition of endofunctors, and the associator is given pointwise by identity morphisms). (But of course, the more fundamental problem is actually that this characterization is not abstract enough and hence harder to decipher than necessary. A monad can be defined in an arbitrary bicategory... :o) ) What do /you/ think is the problem?
I remember finding this while exploring Haskell, some years ago: http://www.haskell.org/haskellwiki/Zygohistomorphic_prepromorphisms And thinking: Ah, I get it, it's a joke: they know they are considered a bunch of strangely mathematically-influenced developers, but they have a sense of humor and know how to be tongue-in-cheek and have gentle fun at themselves. (My strange free association was: Zygo => zygomatics muscle => smile & laughter => joke). That actually got me interested in Haskell :) But, apparently, I was all wrong ;-) It got me reading entire books on category theory and type theory, though.
Aug 19 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/19/2014 09:07 PM, Philippe Sigaud via Digitalmars-d wrote:
 I remember finding this while exploring Haskell, some years ago:

 http://www.haskell.org/haskellwiki/Zygohistomorphic_prepromorphisms

 And thinking: Ah, I get it, it's a joke: they know they are considered
 a bunch of strangely mathematically-influenced developers, but they
 have a sense of humor and know how to be tongue-in-cheek and have
 gentle fun at themselves. (My strange free association was: Zygo =>
 zygomatics muscle => smile & laughter => joke).

 That actually got me interested in Haskell:)
 But, apparently, I was all wrong;-)
Well, not /all/ wrong. http://www.reddit.com/r/programming/comments/6ml1y/a_pretty_useful_haskell_snippet/c04ako5 :-)
 It got me reading entire books on
 category theory and type theory, though.
That's a lot of fun in any case. =)
Aug 19 2014
prev sibling parent "Kagamin" <spam here.lot> writes:
On Tuesday, 19 August 2014 at 14:51:49 UTC, bearophile wrote:
 Ary Borenszweig:

 Then here someone from the team says he can't say a way to 
 improve the performance by an order of magnitude:

 https://www.mail-archive.com/rust-dev@mozilla.org/msg02856.html

 (but I don't know how true is that)
Can't they remove some type inference from the language?
They don't know where the problem is: https://www.mail-archive.com/rust-dev@mozilla.org/msg02865.html
  - Non-use of arenas and &, pervasively
They don't use lent pointers. Just as planned.
  - Refcounting traffic
  - Landing pads
  - Cleanup duplication in _any_ scope-exit scenario
Aug 19 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/18/14, 11:50 PM, Paulo Pinto wrote:
 On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
 On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself.
The compilation speed is caused by the C++ code in their compiler backend (LLVM), which gets compiled at least twice during the bootstrapping process.
Generally speaking how fast is the Rust compiler at compiling Rust files?
 And you have to put all those unwrap and types everywhere, I don't
 think it's fun or productive that way.
There I fully agree. If they don't improve lifetime's usability, I don't see Rust being adopted by average developers.
Could you please substantiate this with a couple of examples? Andrei
Aug 19 2014
next sibling parent Nick Treleaven <ntrel-public yahoo.co.uk> writes:
On 19/08/2014 15:09, Andrei Alexandrescu wrote:
 There I fully agree. If they don't improve lifetime's usability, I don't
 see Rust being adopted by average developers.
Could you please substantiate this with a couple of examples?
Here's an example I ran into playing with the last release. Maybe I'm doing something wrong though:

fn main() {
    let mut x: uint = 0u;
    let f = || x += 1;    // closure borrows x
    if x == 1 { return }  // error: cannot use `x` because it was mutably borrowed
    f();
}

This seems unnecessarily restrictive.
Aug 19 2014
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 19.08.2014 16:09, schrieb Andrei Alexandrescu:
 On 8/18/14, 11:50 PM, Paulo Pinto wrote:
 On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
 On 8/18/14, 8:51 AM, bearophile wrote:
 Jonathan M Davis:

 The biggest reason is memory safety. With a GC, it's possible to make
 compiler guarantees about memory safety, whereas with
 manual memory management, it isn't.
Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile
It's very smart, yes. But it takes half an hour to compile the compiler itself.
The compilation speed is caused by the C++ code in their compiler backend (LLVM), which gets compiled at least twice during the bootstrapping process.
Generally speaking how fast is the Rust compiler at compiling Rust files?
A few seconds when following the tutorial examples. I haven't written much Rust.
 And you have to put all those unwrap and types everywhere, I don't
 think it's fun or productive that way.
There I fully agree. If they don't improve lifetime's usability, I don't see Rust being adopted by average developers.
Could you please substantiate this with a couple of examples? Andrei
Discussions like these, http://www.reddit.com/r/rust/comments/2dmcxs/new_to_rust_trying_to_figure_out_lifetimes/ http://www.reddit.com/r/rust/comments/2ac390/generic_string_literals/ -- Paulo
Aug 19 2014
prev sibling next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote:
 First of all I don't want to insult anyone on language design, 
 I just want to know the reason behind the "always on" GC.
 I know that the GC as several advantages over reference 
 counting, especially when it comes to immutable data structures.
 What I don't (correct me if i am wrong) understand is why every 
 heap allocation has to be garbage collected, like classes, 
 dynamic arrays etc.
 Does a GC still have advantages over heap allocations that do 
 not need to be reference counted such as the unique_ptr in c++?
 The dlang homepage states:

 Destructors are used to deallocate resources acquired by an 
 object. For most classes, this resource is allocated memory. 
 With garbage collection, most destructors then become empty and 
 can be discarded entirely.

 If I understand it correctly it means that D has a GC, so most 
 classes don't need a destructor anymore because they don't need 
 to do any cleanup. I am not totally convinced that this would 
 be a good trade off in general. Maybe someone could shine some 
 light on this statement?
A good reason is the ability to write lock-free algorithms, which are very hard to implement without GC support. This is the main reason why C++11 has a GC API and Herb Sutter will be discussing GC in C++ at CppCon.

Reference counting is only a win over GC with compiler support for reducing increment/decrement operations via dataflow analysis. C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr are slower than other languages with GC support, because those classes are plain library types without compiler support. Of course, compiler vendors can have blessed library types, but the standard does not require it.

RC also has a performance impact when deleting big data structures. You can optimize this away using asynchronous deletion, but eventually you are just doing GC under another name.

As for the feasibility of a GC in a systems programming language, just consider that the Mesa/Cedar environment at Xerox PARC used a systems programming language with reference counting plus a GC for collecting cycles. -- Paulo
Aug 18 2014
parent reply "b" <yes no.com> writes:
 A good reason is the ability to write lock-free algorithms, 
 which are very hard to implement without GC support. This is 
 the main reason why C++11 has a GC API and Herb Sutter will be 
 discussing GC in C++ at CppCon.
*some* lock-free algorithms benefit from GC; there is still plenty you can do without one, just look at TBB.
 Reference counting is only a win over GC with compiler support 
 for reducing increment/decrement operations via dataflow 
 analysis.

 C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr 
 are slower than other languages with GC support, because those 
 classes are plain library types without compiler support. Of 
 course, compiler vendors can have blessed library types, but 
 the standard does not require it.
Not really accurate. First of all, don't include unique_ptr as if it had the same overhead as the other two; it doesn't.

With RC you pay a price during creation/deletion/sharing, but not while the object is alive. With GC you pay almost no cost during allocation/deletion, but a constant cost while it is alive. Allocate enough objects and the sum isn't so small.

Besides that, in C++ it works like this:

  90% of objects: value types, on stack or embedded into other objects
  9% of objects: unique types, use unique_ptr, no overhead
  ~1% of objects: shared, use shared_ptr/weak_ptr etc.

With GC you give up deterministic behavior, which is *absolutely* not worth giving up for 1% of objects.

I think most people simply haven't worked in an environment that supports unique/linear types, so everyone assumes that you need a GC. Rust is showing that this is nonsense, as C++ has already done for people using C++11.
Aug 18 2014
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 18.08.2014 20:56, schrieb b:
 A good reason is the ability to write lock-free algorithms, which are
 very hard to implement without GC support. This is the main reason why
 C++11 has a GC API and Herb Sutter will be discussing GC in C++
 at CppCon.
*some* lock free algorithms benefit from GC, there is still plenty you can do without GC, just look at TBB.
Sure, but you need to be a very good expert to pull them off.
 Reference counting is only a win over GC with compiler support for
 reducing increment/decrement operations via dataflow analysis.

 C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr are
 slower than other languages with GC support, because those classes are
 plain library types without compiler support. Of course, compiler
 vendors can have blessed library types, but the standard does not
 require it.
Not really accurate. First of all, don't include unique_ptr as if it had the same overhead as the other two, it doesn't.
Yes it does, when you do cascade destruction of large data structures.
   With RC you pay a price during creation/deletion/sharing, but not
 while it is alive.
   With GC you pay almost no cost during allocation/deletion, but a
 constant cost while it is alive. You allocate enough objects and the sum
 cost isn't so small.

   Besides that, in C++ it works like this.
   90% of objects: value types, on stack or embedded into other objects
   9% of objects: unique types, use unique_ptr, no overhead
   ~1% of objects: shared, use shared_ptr/weak_ptr etc.
It is more than 1% I would say, because in many cases where you have a unique_ptr, you might need a shared_ptr instead, or go unsafe and give direct access to the underlying pointer. For example, parameters and temporaries, where you can be sure no one else is using the pointer, but afterwards, as a consequence of destructor invocation, the data is gone.
   With GC you give up deterministic behavior, which is *absolutely* not
 worth giving up for 1% of objects.
Being a GC enabled systems programming language does not forbid the presence of deterministic memory management support, for the use cases that really need it.
   I think most people simply haven't worked in an environment that
 supports unique/linear types. So everyone assumes that you need a GC.
 Rust is showing that this is nonsense, as C++ has already done for
 people using C++11.
I know C++ pretty well (using it since 1993) and like it a lot, but I also think we can get better than it. Especially since I had the luck to get to know systems programming languages with GC like Modula-3 and Oberon(-2). The Oberon OS had quite a few nice concepts that go way back to Mesa/Cedar at Xerox PARC.

Rust is also showing how complex a type system needs to be to handle all memory management cases. Not sure how many developers will jump into it. For example, currently you can only concatenate strings if they are both heap allocated. There are still some issues with operations that mix lifetimes being sorted out. -- Paulo
Aug 18 2014
prev sibling next sibling parent reply "maik klein" <maikklein googlemail.com> writes:
On Monday, 18 August 2014 at 18:56:42 UTC, b wrote:
  With RC you pay a price during creation/deletion/sharing, ...
Are you sure that there is even a cost for creation? I mean, sure, we have to allocate memory on the heap, but the GC has to do the same thing. And with C++11, unique_ptr moves by default, so I think the creation cost is exactly the same as with a GC. I also think that deletion is cheaper for a unique_ptr compared to a GC, because all you have to do is call the destructor when it goes out of scope.

My initial question was why D uses the GC for everything. Personally it would make more sense to me if D would use the GC as a library. Let the user decide what he wants, something like

shared_ptr(GC) ptr;
shared_ptr(RC) ptr;
Aug 18 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 18 August 2014 at 19:43:14 UTC, maik klein wrote:
 My initial question was why D uses the GC for everything.
It isn't supposed to with @nogc?
 Personally it would make more sense to me if D would use the GC 
 as a library. Let the user decide what he wants, something like

 shared_ptr(GC) ptr;
 shared_ptr(RC) ptr;
Memory allocation should be in the compiler/runtime to get proper optimizations. But D has apparently gone with nongc-allocators as a library feature.
Aug 18 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 18 Aug 2014 19:43:13 +0000
maik klein via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 My initial question was why D uses the GC for everything.
to avoid unnecessary complications in user source code. GC is necessary for some cool D features (as was noted earlier), and GC is a first-class citizen in D.
 Personally it would make more sense to me if D would use the GC=20
 as a library.
D is not C++. besides, you can write your own allocators (oh, we need more documentation on this) and avoid GC usage. so user *can* avoid GC if he wants to. take a look at std.typecons and its 'scoped', for example.
Aug 18 2014
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 18 August 2014 at 18:56:42 UTC, b wrote:
  Besides that, in C++ it works like this.
  90% of objects: value types, on stack or embedded into other 
 objects
  9% of objects: unique types, use unique_ptr, no overhead
  ~1% of objects: shared, use shared_ptr/weak_ptr etc.

  With GC you give up deterministic behavior, which is 
 *absolutely* not worth giving up for 1% of objects.

  I think most people simply haven't worked in an environment 
 that supports unique/linear types. So everyone assumes that you 
 need a GC. Rust is showing that this is nonsense, as C++ has 
 already done for people using C++11.
I work in such an environment and I tend to agree with you. We use a combination of std::unique_ptr and raw pointers as weak refs. It works without a hiccup, even though people have to ensure manually that the lifetimes of owners exceed the lifetimes of borrowers.

A problem with scoped ownership is that it makes the pointer know about the lifetime, and that forces you to give up type-safety a bit to write a function taking any pointer type. In D, having deterministic resource release is harder because scoped ownership is less polished than in C++.
Aug 19 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 19 Aug 2014 07:14:17 +0000
ponce via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I work in such an environment and I tend to agree with you.
just replace GC with a stub doing only allocs (or use GC.disable) and manage resource freeing manually (or with corresponding templated struct wrappers).
 In D having deterministic resource release is harder because=20
 scoped ownership is less polished than in C++.
but why?! see 'scoped' to scope allocations. and, for example, 'File', which is refcounted internally. plus 'scope()' finalizers. i'm pretty sure that scoped ownership in D is at least on par with C++, if not better and simpler either to write, to read and to use. of course, you'll lose such nice features as closures and slices, but hey, C++ doesn't have them either! ok, C++11 has lambdas, and i don't know if D lambdas can work without GC and don't leak.
Aug 19 2014
next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 19 August 2014 at 07:26:07 UTC, ketmar via 
Digitalmars-d wrote:
 but why?! see 'scoped' to scope allocations. and, for example, 
 'File',
 which is refcounted internally. plus 'scope()' finalizers.

 i'm pretty sure that scoped ownership in D is at least on par with 
 C++, if
 not better and simpler either to write, to read and to use.

 of course, you'll lose such nice features as closures and 
 slices, but
 hey, C++ doesn't have them too! ok, C++11 has lambdas, and i 
 don't know
 if D lambdas can work without GC and don't leak.
I'm well aware of all that exist to do deterministic destruction. There is no _composable_ way to do it apart from using structs exclusively and RefCounted!. I can't find the previous thread about this but there was problems, eg. Unique should work with both classes and structs but does not.
Aug 19 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 19 Aug 2014 10:24:07 +0000
ponce via Digitalmars-d <digitalmars-d puremagic.com> wrote:

seems that i misunderstand you. sorry.
Aug 19 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/19/14, 12:25 AM, ketmar via Digitalmars-d wrote:
 On Tue, 19 Aug 2014 07:14:17 +0000
 ponce via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I work in such an environment and I tend to agree with you.
just replace GC with stub doing only allocs (or use GC.disable) and manage resource freeing manually (or with corresponding templated struct wrappers).
One issue with this is e.g. hashtables have no primitive for freeing entries.
 In D having deterministic resource release is harder because
 scoped ownership is less polished than in C++.
but why?! see 'scoped' to scope allocations. and, for example, 'File', which is refcounted internally. plus 'scope()' finalizers. i'm pretty sure that scoped ownership in D at least on par with C++, if not better and simplier either to write, to read and to use. of course, you'll loose such nice features as closures and slices, but hey, C++ doesn't have them too! ok, C++11 has lambdas, and i don't know if D lambdas can work without GC and don't leak.
They don't use GC if scoped. Andrei
Aug 19 2014
next sibling parent "Brian Rogoff" <brogoff gmail.com> writes:
On Tuesday, 19 August 2014 at 14:13:38 UTC, Andrei Alexandrescu 
wrote:
 On 8/19/14, 12:25 AM, ketmar via Digitalmars-d wrote:
 of course, you'll lose such nice features as closures and
slices, but hey, C++ doesn't have them too! ok, C++11 has lambdas, and i don't know if D lambdas can work without GC and don't leak.
They don't use GC if scoped. Andrei
And, in 2.066, it works with @nogc. Scoped no-gc downward closures:

alias dgFloatToFloat = float delegate(float) @nogc;
alias dgFloatPairToFloat = float delegate(float, float) @nogc;

float integrate(scope dgFloatToFloat f, float lo, float hi, size_t n) @nogc
{
    float result = 0.0;
    float dx = (hi - lo) / n;
    float dx2 = dx * 0.5;
    for (size_t i = 0; i < n; i++) {
        result += f(lo + i * dx2) * dx;
    }
    return result;
}

float integrate(scope dgFloatPairToFloat f,
                float x0, float x1, size_t nx,
                float y0, float y1, size_t ny) @nogc
{
    return integrate((y) => integrate((x) => f(x, y), x0, x1, nx), y0, y1, ny);
}

I was going to ask for an @nogc { <fundefs> } block to reduce the noise, but I just tried it and it seems to work. Nice!
Aug 19 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 19 Aug 2014 07:13:45 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 One issue with this is e.g. hashtables have no primitive for freeing=20
 entries.
ah, so don't use built-in AAs. templated classes/structs can replace 'em, albeit with less beauty in declaration. i believe that something similar can be done with arrays and strings, and with RC slices. and 2.066 moved some built-in properties to functions, which will help too. but it seems that *most* people are OK with GC, so such replacements should not be defaults. and if somebody ports Leandro's CDGC, the "stop the world" problem can be avoided almost entirely. but maybe there is some sense in shipping an alternate "nogc/rc" runtime with D. the main problem with that is that someone has to write and support it. ;-)
 i don't know if D lambdas can work without GC and don't leak.
They don't use GC if scoped.
that's great. i'm very used to the GNU C extension which allows nested functions and something like lambdas in C, and was very happy to find that feature officially blessed in D. i'm even more happy now that i know for sure that they will not trigger allocations.
Aug 19 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Monday, 18 August 2014 at 18:56:42 UTC, b wrote:
  Not really accurate. First of all, don't include unique_ptr as 
 if it had the same overhead as the other two, it doesn't.
It's still scope-guarded, which is not zero cost in the SJLJ and Win32 SEH exception models. It also injects (inlines?) cleanup code everywhere, which wastes cache.
  With RC you pay a price during creation/deletion/sharing, but 
 not while it is alive.
  With GC you pay almost no cost during allocation/deletion, but 
 a constant cost while it is alive. You allocate enough objects 
 and the sum cost ant so small.
Quite the contrary: with RC, creation and deletion should be cheaper because they don't scan the whole memory; using the object is more expensive, because you need to track it always. With GC you pay the cost when a collection runs, and a collection doesn't necessarily run at all.
  Besides that, in C++ it works like this.
  90% of objects: value types, on stack or embedded into other 
 objects
  9% of objects: unique types, use unique_ptr, no overhead
  ~1% of objects: shared, use shared_ptr/weak_ptr etc.
Aren't strings inherently shared? Do you think they account for only 1% of objects?
  With GC you give up deterministic behavior, which is 
 *absolutely* not worth giving up for 1% of objects.
Memory needs deterministic management only under conditions of memory scarcity, which is not the common case. D allows manual memory management, but why force it on everyone when only some need it?
Aug 19 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Tue, 19 Aug 2014 07:23:33 +0000
schrieb "Kagamin" <spam here.lot>:

  With GC you give up deterministic behavior, which is 
 *absolutely* not worth giving up for 1% of objects.
Memory needs deterministic management only under condition of memory scarcity, but it's not a common case, and D allows manual memory management, but why force it on everyone because only someone needs it?
Don't dismiss his point easily. The real memory cost is not always visible. Take for example bindings to GUI libraries, where bitmap objects left to be garbage collected may have a 16 byte wrapper object, but several megabytes of memory associated with it inside a C library. The GC won't see the need to run a sweep and the working set blows out of proportion. (Happened to me a few years ago.) Other times you may just run out of handles because the GC is not called for a while. In practice you then add .close/.release methods to every resource object: and here we are back at malloc/free. Cycles aside, reference counting does a better job here. In other words, a GC cannot handle anything outside of the runtime it was written for, in short: OS handles and foreign language library data structures. -- Marco
Aug 26 2014
parent "Kagamin" <spam here.lot> writes:
On Tuesday, 26 August 2014 at 07:10:32 UTC, Marco Leise wrote:
 In practice you then add .close/.release methods to every
 resource object
Yes.
 and here we are back at malloc/free.
No. GC exists because general purpose memory has different properties and is used in different ways than other resources. For example, you can't build a cycle of files. Hence the different ways of managing memory and other types of resources. People coming from unmanaged environments, like C++, should learn about that difference and how to manage resources in a GC environment. I agree it's a problem that there's no tutorial for that. Maybe .NET tutorials can be used to teach it.
Aug 26 2014
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote:
 Does a GC still have advantages over heap allocations that do 
 not need to be reference counted such as the unique_ptr in c++?
Isn't unique_ptr unique? What to do when the object is non-unique?
Aug 18 2014
next sibling parent reply "Maik Klein" <maikklein googlemail.com> writes:
On Monday, 18 August 2014 at 12:06:27 UTC, Kagamin wrote:
 On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote:
 Does a GC still have advantages over heap allocations that do 
 not need to be reference counted such as the unique_ptr in c++?
Isn't unique_ptr unique? What to do when the object is non-unique?
Not sure what you mean by unique. It's like a reference counted object but with a count of 1, meaning if it goes out of scope it will be deleted. (But it does not do any reference counting.) I think D also has a unique ptr.

https://github.com/D-Programming-Language/phobos/blob/master/std/typecons.d#L58

Is it correct that when I create a class it will always be on the heap and therefore be garbage collected?

class Foo
..

So I don't have the option to put a class on the stack like in C++?

auto foo = Foo();          // stack
auto foo_ptr = new Foo();  // heap

If I do something like:

auto ptr = Unique(Foo);

Would the GC still be used, or would the resource be freed by the destructor?
Aug 18 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Monday, 18 August 2014 at 12:27:44 UTC, Maik Klein wrote:
 Is it correct that when I create a class it will always be on 
 the heap and therefore be garbage collected?

 class Foo
 ..

 So I don't have the option to put a class on the stack like in 
 C++?

 auto foo = Foo(); //stack
 auto foo_ptr = new Foo(); // heap

 If I do something like:

 auto ptr = Unique(Foo);

 Would the GC still be used, or would the resource be freed by 
 the destructor?
http://dlang.org/phobos/std_typecons.html#.scoped
Aug 18 2014
prev sibling parent Nick Treleaven <ntrel-public yahoo.co.uk> writes:
On 18/08/2014 13:27, Maik Klein wrote:
 If I do something like:

 auto ptr = Unique(Foo);

 Would the GC still be used, or would the resource be freed by the
 destructor?
It uses GC allocation, but the memory will be freed deterministically:

{
    Unique!Foo u = new Foo;
    // u's destructor will call 'delete' on its instance of Foo here
}
Aug 18 2014
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Monday, 18 August 2014 at 12:06:27 UTC, Kagamin wrote:
 On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote:
 Does a GC still have advantages over heap allocations that do 
 not need to be reference counted such as the unique_ptr in c++?
Isn't unique_ptr unique? What to do when the object is non-unique?
Yes, unique_ptr is unique :-) It is not reference counted -- it just destroys the owned object when it goes out of scope. The neat thing about unique_ptrs is that you can move them around, transferring ownership. If the object is non-unique, then typically C++ programmers will use shared_ptr (+ weak_ptr). I'm not sure what the status of std.typecons.Unique is. Last I heard it had some issues, but I haven't tried it much myself.
Aug 18 2014
next sibling parent "Kagamin" <spam here.lot> writes:
On Monday, 18 August 2014 at 12:55:52 UTC, Peter Alexander wrote:
 On Monday, 18 August 2014 at 12:06:27 UTC, Kagamin wrote:
 On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote:
 Does a GC still have advantages over heap allocations that do 
 not need to be reference counted such as the unique_ptr in 
 c++?
Isn't unique_ptr unique? What to do when the object is non-unique?
Yes, unique_ptr is unique :-) It is not reference counted -- it just destroys the owned object when it goes out of scope. The near thing about unique_ptrs is that you can move them around, transferring ownership. If the object is non-unique, then typically C++ programmers will use shared_ptr (+ weak_ptr). I'm not sure what the status of std.typecons.Unique is. Last I heard it had some issues, but I haven't tried it much myself.
So if it's not known to be unique beforehand, the safe bet is to use shared_ptr; if the ownership is not known beforehand, the safe bet is to not use weak_ptr; then you are left with references annotated with spurious shared_ptr all over the place. It will also be easy to mess up the annotations and get a dangling pointer.
Aug 18 2014
prev sibling parent Nick Treleaven <ntrel-public yahoo.co.uk> writes:
On 18/08/2014 13:55, Peter Alexander wrote:
 I'm not sure what the status of std.typecons.Unique is. Last I heard it
 had some issues, but I haven't tried it much myself.
It should work, but it is bug-prone without the @disable this(this) fix in master. Unfortunately that won't make it into 2.066. Also, the web docs lack a concrete example - I have a pull request which fixes that: https://github.com/D-Programming-Language/phobos/pull/2346
Aug 18 2014
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
Various reasons:
1. Memory safety. A whole class of bugs are eliminated by the use
of a GC.
2. It is faster on multithreaded systems than RC (as the
reference count must be synchronized properly).
3. Having all the heap under GC control is important so that the
GC won't delete live objects.

If you are willing to give up 1., D allows you to free objects
from the GC explicitly via GC.free, which will bring you back to
manual memory management safety and performance levels, but will
still provide you with a safety net against memory leaks.
Aug 18 2014