
digitalmars.D.learn - shared array?

reply "Prudence" <Pursuit Happyness.All> writes:
I can't create a shared array:

static Array!(bool delegate(int, WPARAM, LPARAM)) callbacks;

(prepending shared produces a ton of errors with the Array class)

I've tried making it a pointer and other things.


The array must be static and must be shared. Without shared 
everything works, but obviously I don't get a useful array (it's 
empty, because all the updating goes on in the main thread).

1. How do I create a shared array?
2. Why does prepending shared produce any problems? I thought shared 
simply made a variable global to all threads?
Sep 10 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 00:48:28 UTC, Prudence wrote:
 static Array!(bool delegate(int, WPARAM, LPARAM)) callbacks;
Try just using a regular array instead of the library Array:

static bool delegate(int, WPARAM, LPARAM)[] callbacks;

My guess is the Array library thing isn't marked as shared internally.
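The regular-array version can be sketched like this. WPARAM/LPARAM are aliased here so the snippet is self-contained without the Windows headers (on Windows you'd import core.sys.windows.windows instead):

```d
// Minimal sketch of the suggestion above: a shared built-in array of
// delegates instead of std.container.Array.
alias WPARAM = size_t;
alias LPARAM = ptrdiff_t;

// shared + static: one array visible to all threads, rather than one
// thread-local copy per thread.
shared static bool delegate(int, WPARAM, LPARAM)[] callbacks;

void main()
{
    // Every thread sees the same storage; it starts out empty.
    assert(callbacks.length == 0);
}
```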
Sep 10 2015
next sibling parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 00:50:15 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 00:48:28 UTC, Prudence wrote:
 static Array!(bool delegate(int, WPARAM, LPARAM)) callbacks;
Try just using a regular array instead of the library Array. static bool delegate(int, WPARAM, LPARAM)[] callbacks; my guess is the Array library thing isn't marked as shared internally.
I thought about that but then I have to rely on the GC for some simple things. Doesn't seem like the right way to go.
Sep 10 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 04:28:52 UTC, Prudence wrote:
 I thought about that but then I have to rely on the GC for some 
 simple things. Doesn't seem like the right way to go.
Since it is static, it will never be collected anyway, so you could just use it; it'll work for convenience and probably lose nothing. Or, very trivially, write an append function that uses any scheme you want instead of doing ~= on it, without even worrying about freeing it.
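The append helper suggested above could look something like this minimal sketch (names are illustrative, not from the original post):

```d
// Illustrative append helper for a static, never-collected list of
// callbacks. ~= goes through the GC allocator, but since the global is
// always referenced, it is never collected; any other allocation
// scheme could be dropped in here instead.
alias Callback = bool delegate(int);

__gshared Callback[] callbacks;  // process-global, non-TLS storage

void addCallback(Callback cb)
{
    synchronized  // crude but sufficient for a rarely-updated list
    {
        callbacks ~= cb;
    }
}
```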
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 13:12:14 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 04:28:52 UTC, Prudence wrote:
 I thought about that but then I have to rely on the GC for 
 some simple things. Doesn't seem like the right way to go.
Since it is static, it will never be collected anyway, so you could just use it and it'll work for convenience and probably lose nothing, or very trivially write an append function that uses any scheme you want instead of doing ~= on it without even worrying about freeing it.
And that makes it worse!! If it's never collected and the GC scans it every time, it means it adds a constant overhead to the GC for absolutely no reason, right? It also makes every dependency on it GC-dependent (@nogc can't be used)? It just seems like the wrong way to go about it.
Sep 11 2015
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 14:47:15 UTC, Prudence wrote:
 If it's never collected and the GC scans it every time, it 
 means it adds a constant overhead to the GC for absolutely no 
 reason, right?
GC overhead isn't quite constant; it happens only when you call for a collection cycle. But the array would still be scanned to clean up dead objects referred to by the delegates. The array itself is never collected (since it is always referenced by the global), but if you null out some entries, the objects they pointed to may be collected.
 It also then makes every dependency on it GC dependent (@nogc 
 can't be used)? It just seems like it's the wrong way to go 
 about it.
You can access an array without the GC. And if you manually allocate it, then there's no GCness to it at all. Built-in arrays and slices are not necessarily managed by the garbage collector; it depends on how you create and use them.
Sep 11 2015
prev sibling parent Jonathan M Davis via Digitalmars-d-learn writes:
On Friday, September 11, 2015 00:50:13 Adam D. Ruppe via Digitalmars-d-learn
wrote:
 On Friday, 11 September 2015 at 00:48:28 UTC, Prudence wrote:
 static Array!(bool delegate(int, WPARAM, LPARAM)) callbacks;
Try just using a regular array instead of the library Array. static bool delegate(int, WPARAM, LPARAM)[] callbacks; my guess is the Array library thing isn't marked as shared internally.
Given how shared works, for a class or struct to work with shared, it pretty much has to be designed specifically to be used with shared. Depending on its members, it's possible to declare a variable of class or struct type as shared and then cast away shared to operate on it (after protecting access to it with a mutex, of course), but actually using an object as shared is impossible unless the type was specifically designed to be used that way. Even marking one as shared with the idea that you'll cast away shared to operate on it (after protecting it with a mutex) probably won't work in many cases, simply because the type's internals weren't designed to work with shared and could end up with compilation errors even if it was never actually going to be used without casting away shared.

In general, shared should only be used with built-in types or types which were specifically designed to be shared, and that means that types that you'd use in non-shared code aren't going to work in shared code. This can be annoying, but it really forces you to separate out your shared code from your normal code, which is arguably a good thing.

Still, shared is one of those things that we need to re-examine and see how it should be changed to be more usable. It's a great idea, but the devil is in the details.

- Jonathan M Davis
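The cast-away-shared pattern described above looks roughly like this sketch (the variable names and the append operation are made up for the example):

```d
import core.sync.mutex : Mutex;

// The data is declared shared, but every access casts shared away at
// one controlled point while holding a mutex.
shared int[] data;
__gshared Mutex dataLock;

shared static this()
{
    dataLock = new Mutex;  // runs once per process
}

void append(int value)
{
    dataLock.lock();
    scope (exit) dataLock.unlock();

    // Safe only because every access goes through this mutex.
    auto view = cast(int[]) data;
    view ~= value;
    data = cast(shared) view;
}
```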
Sep 11 2015
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
I get only one error:
Error: non-shared method std.container.array.Array!(void 
delegate()).Array.~this is not callable using a shared object.

It will try to destruct the array on program termination, but it 
requires the destructor to be aware of the shared context.
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 07:41:10 UTC, Kagamin wrote:
 I get only one error:
 Error: non-shared method std.container.array.Array!(void 
 delegate()).Array.~this is not callable using a shared object.

 It will try to destruct the array on program termination, but 
 it requires the destructor to be aware of the shared context.
But in this case it is static, so why does it matter? Do you have any ideas how to wrap it or fix this?
Sep 11 2015
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 11 September 2015 at 14:54:00 UTC, Prudence wrote:
 But in this case it is static, so why does it matter? Do you 
 have any ideas how to wrap it or fix this?
It matters exactly because it is static. Code written for a single-threaded environment may not work correctly in a shared context; it simply wasn't written for it. The way to fix it is to write code for the shared context.
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 16:04:22 UTC, Kagamin wrote:
 On Friday, 11 September 2015 at 14:54:00 UTC, Prudence wrote:
 But in this case it is static, so why does it matter? Do you 
 have any ideas how to wrap it or fix this?
It matters exactly because it is static. A code written for single-threaded environment may not work correctly in shared context. It simply wasn't written for it. The way to fix it is to write code for shared context.
I don't care about "maybe" working. Since the array is hidden inside a class, I can control who uses it and how, and deal with the race conditions. What I want is to be able to use Array so I don't have to rely on the GC. But since it complains about the ~this destruction, how can I fix that?

If I wrap Array and use a non-shared array inside it, won't I still have the same problem, because it will be thread-local to the object? Or is shared applied to all sub-types of a class? E.g.,

class MySharedArrayWrapper
{
    static Array!(int) a;
}

and instead I use

static shared MySharedArrayWrapper w;

But a isn't marked shared, so will it be TLS, which puts me back at square one? Or is it marked shared, which then still complains?

Again, I'm asking how, not why.
Sep 11 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 17:29:47 UTC, Prudence wrote:
 I don't care about "maybe" working. Since the array is hidden 
 inside a class I can control who and how it is used and deal 
 with the race conditions.
You could use __gshared instead of shared. It means put it in non-tls storage, just like shared, but the compiler will not attempt to help you use it correctly; you're on your own for synchronization, etc.
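The __gshared approach, as a minimal sketch (the mutex-based synchronization here is illustrative; you're free to pick any scheme, since the compiler no longer checks anything for you):

```d
import core.sync.mutex : Mutex;
import std.container.array : Array;

// __gshared: one instance for the whole process, no TLS, and no
// shared-related compile errors -- but also no compiler help; all
// synchronization is on you.
__gshared Array!int callbacks;
__gshared Mutex callbacksLock;

shared static this()
{
    callbacksLock = new Mutex;
}

void addCallback(int cb)
{
    callbacksLock.lock();
    scope (exit) callbacksLock.unlock();
    callbacks.insertBack(cb);  // plain, non-shared Array works fine here
}
```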
 What I want is to be able to use Array so I don't have to rely 
 on the GC.
But, again, built-in slices do NOT rely on the GC. Only specific methods on them do and you can use your own implementation for them.
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 19:27:49 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 17:29:47 UTC, Prudence wrote:
 I don't care about "maybe" working. Since the array is hidden 
 inside a class I can control who and how it is used and deal 
 with the race conditions.
You could use __gshared instead of shared. It means put it in non-tls storage, just like shared, but the compiler will not attempt to help you use it correctly; you're on your own for synchronization, etc.
 What I want is to be able to use Array so I don't have to rely 
 on the GC.
But, again, built-in slices do NOT rely on the GC. Only specific methods on them do and you can use your own implementation for them.
Really? Can you back up this claim? Not saying you're lying, I'd just like to know it's true for a fact. How can you use "specific methods"? Do you mean I do not use new to allocate and use malloc (more or less)? In that case, am I not essentially just re-creating Array? Obviously I can write my own array type, and I can even write my own compiler, but that's not the point, is it?
Sep 11 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 20:06:53 UTC, Prudence wrote:
 Can you back up this claim? Not saying your lying, I'd just 
 like to know it's true for a fact?
The list of things that trigger the GC is pretty short. See the bottom of this page: http://dlang.org/garbage.html

Basically, the three things that can do a GC allocation on a built-in array are: increasing the length, the ~= and ~ operators, and the [a,b,c] literal syntax. Slicing, indexing, etc. - the other basic operations - do not.

If you do:

ubyte[] a = (cast(ubyte*) malloc(4))[0..4];

it will compile... and create a slice from the malloced memory. That's one way to create an array without GCing it.
 In that case, am I not essentially just re-creating Array?
Array does a lot of other stuff too... you only really need append and maybe shrink for a static variable, since tracking ownership doesn't matter (it is never disappearing since it is global)
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Friday, 11 September 2015 at 20:30:37 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 20:06:53 UTC, Prudence wrote:
 Can you back up this claim? Not saying your lying, I'd just 
 like to know it's true for a fact?
The list of things that trigger the GC is pretty short. See the bottom of this page: http://dlang.org/garbage.html Basically, the three things that can do a GC allocation on a built-in array are: increasing the length, the ~= and ~ operators, and the [a,b,c] literal syntax. Slicing, indexing, etc. - the other basic operations - do not. If you do: ubyte[] a = (cast(ubyte*) malloc(4))[0..4]; it will compile... and create a slice from the malloced memory. That's one way to create an array without GCing it.
 In that case, am I not essentially just re-creating Array?
Array does a lot of other stuff too... you only really need append and maybe shrink for a static variable, since tracking ownership doesn't matter (it is never disappearing since it is global)
Oh really?!?! I thought slicing used the GC? Is this a recent development, or has it always been that way?

OK, so if I just use a shared [], create it using malloc (as you've done above), then release and re-malloc when I need to append (not efficient, but OK in my scenario), then it won't use the GC? If so, then I can handle that!

I guess [] doesn't have a capacity field, so I'll have to keep track of that. Otherwise, it should be pretty simple. Of course, I still feel like I'm trying to implement Array, because everything turns into "lots of stuff" at some point ;/
Sep 11 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 11 September 2015 at 21:48:14 UTC, Prudence wrote:
 Oh really?!?! I thought slicing used the GC? Is this a recent 
 development or always been that way?
Always been that way. A D slice is just a C pointer + length packed together. A slice simply increments the pointer and/or decrements the length - no allocation needed. GC can make slices more convenient, since you don't need to think about who owns that memory you're slicing to free it, but that's true of pointers and references and everything too.
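That a slice is just pointer + length can be seen in a short sketch:

```d
void main()
{
    int[8] buf;              // fixed-size array on the stack; no GC
    int[] s = buf[];         // a slice is just buf.ptr + a length of 8
    int[] tail = s[2 .. 6];  // re-slicing only adjusts ptr and length

    assert(tail.ptr == s.ptr + 2);  // same memory, offset pointer
    assert(tail.length == 4);       // no allocation happened anywhere
}
```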
 you've done above) then release and remalloc when I need to 
 append(not efficient but ok in my senario), then it won't use 
 the GC?
Yeah. You might want to GC.addRange it though, so the contents are scanned anyway... (BTW, the garbage collector is actually pretty nice; why are you avoiding it anyway?)
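A sketch of the malloc + GC.addRange combination (the helper names are made up for the example):

```d
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

// A malloc-backed slice that the GC will still scan, so GC-managed
// objects referenced from it (e.g. delegate contexts) stay alive.
// The slice itself is never GC-allocated or collected.
T[] mallocSlice(T)(size_t n)
{
    auto p = cast(T*) malloc(n * T.sizeof);
    auto s = p[0 .. n];
    GC.addRange(s.ptr, n * T.sizeof);  // let the GC scan this block
    return s;
}

void freeSlice(T)(T[] s)
{
    GC.removeRange(s.ptr);  // must be removed before freeing
    free(s.ptr);
}
```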
 I guess [] doesn't have a capacity field so I'll have to keep 
 track of that. Other wise, it should be pretty simple.
Nope, but you can just use realloc, which does track some capacity for you. (Actually, the built-in ~= operator does that too, just with the GC instead of the C function. It sometimes won't reallocate because it knows it already has enough capacity. Read more here: http://dlang.org/d-array-article.html )
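The realloc-based append could be sketched like this (only for slices you allocated with malloc/realloc yourself, never GC-owned ones):

```d
import core.stdc.stdlib : realloc;

// GC-free append: realloc tracks capacity internally and often grows
// the block in place. Starting from an empty slice works because
// realloc(null, n) behaves like malloc(n).
void appendNoGC(T)(ref T[] arr, T value) @nogc nothrow
{
    auto n = arr.length + 1;
    auto p = cast(T*) realloc(arr.ptr, n * T.sizeof);
    if (p is null) return;  // out of memory; leave arr unchanged
    p[n - 1] = value;
    arr = p[0 .. n];
}
```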
Sep 11 2015
parent reply "Laeeth Isharc" <spamnolaeeth nospamlaeeth.com> writes:
On Friday, 11 September 2015 at 21:58:28 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 21:48:14 UTC, Prudence wrote:
 Oh really?!?! I thought slicing used the GC? Is this a recent 
 development or always been that way?
Always been that way. A D slice is just a C pointer + length packed together. A slice simply increments the pointer and/or decrements the length - no allocation needed. (btw the garbage collector is actually pretty nice, why are you avoiding it anyway?)
Seems to be quite a lot of FUD wrt use of the standard library and GC, which perhaps also means we don't communicate this point very well as a community. Making Phobos GC-optional is perhaps the ultimate answer. But people seem to think that without the GC you're back to C.
Sep 11 2015
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Friday, September 11, 2015 23:29:05 Laeeth Isharc via Digitalmars-d-learn
wrote:
 On Friday, 11 September 2015 at 21:58:28 UTC, Adam D. Ruppe wrote:
 On Friday, 11 September 2015 at 21:48:14 UTC, Prudence wrote:
 Oh really?!?! I thought slicing used the GC? Is this a recent
 development or always been that way?
Always been that way. A D slice is just a C pointer + length packed together. A slice simply increments the pointer and/or decrements the length - no allocation needed. (btw the garbage collector is actually pretty nice, why are you avoiding it anyway?)
Seems to be quite a lot of FUD wrt use of standard library and GC, which means also perhaps we don't communicate this point very well as a community. Making Phobos GC-optional perhaps is an ultimate answer. But people seem to think that you're back to C without the GC.
Aside from the few classes in Phobos, its GC usage is almost entirely restricted to when it allocates arrays or when it has to allocate a closure for a delegate, which can happen in some cases when passing predicates to range-based algorithms. Avoiding functions that need to allocate arrays avoids that source of allocation, and using functors or function pointers as predicates avoids having to allocate closures. So, you _can_ end up with GC allocations accidentally in Phobos if you're not careful, but on the whole, the assertion that Phobos uses the GC heavily is FUD - or at least a misunderstanding. But as we make more of the functions use lazy ranges rather than arrays (particularly with regards to strings), and we make more of the code @nogc, it becomes even clearer that the GC isn't involved. Also, improvements to how lambdas are handled should reduce how often closures have to be allocated for them.

Probably the main issue that actually requires a language change is that exceptions currently pretty much have to be GC-allocated, whereas they really should be reference-counted (though possibly still GC-allocated, just not left for a garbage collection cycle). But exceptions are thrown rarely, meaning that the fact that they're GC-allocated is really only a problem if you're trying to avoid the GC completely rather than just minimize its use, so it's rarely a problem. And there are plans to support reference-counted types in the language, in which case the exception types would be changed to use that Object hierarchy rather than the normal one.

So, we have improvements to make, but for the most part, I think that the idea that Phobos requires a GC is FUD. It's not completely false, but folks seem to think using Phobos means heavy GC usage, and for the most part, that's not true at all.

- Jonathan M Davis
Sep 11 2015
parent reply "Prudence" <Pursuit Happyness.All> writes:
On Saturday, 12 September 2015 at 06:23:12 UTC, Jonathan M Davis 
wrote:
 On Friday, September 11, 2015 23:29:05 Laeeth Isharc via 
 Digitalmars-d-learn wrote:
 On Friday, 11 September 2015 at 21:58:28 UTC, Adam D. Ruppe 
 wrote:
 [...]
Seems to be quite a lot of FUD wrt use of standard library and GC, which means also perhaps we don't communicate this point very well as a community. Making Phobos GC-optional perhaps is an ultimate answer. But people seem to think that you're back to C without the GC.
Aside from the few classes in Phobos, its GC usage is almost entirely restricted to when it allocates arrays or when it has to allocate a closure for a delegate, which can happen in some cases when passing predicates to range-based algorithms. Avoiding functions that need to allocate arrays avoids that source of allocation, and using functors or function pointers as predicates avoids having to allocate closures. So, you _can_ end up with GC allocations accidentally in Phobos if you're not careful, but on the whole, the assertion that Phobos uses the GC heavily is FUD - or at least a misunderstanding. But as we make more of the functions use lazy ranges rather than arrays (particularly with regards to strings), and we make more of the code @nogc, it becomes even clearer that the GC isn't involved. Also, improvements to how lambdas are handled should reduce how often closures have to be allocated for them.
I don't think it's that simple.

Saying that it doesn't use it most of the time is not an answer/solution. Using it at all is a problem, because one doesn't know when and where. I realize there is a switch now (-vgc), and maybe that is the solution, but you say "well, Phobos only uses 0.01% on the GC", yet since you either don't, can't, or won't know where that is, it might as well be 100% if you would like to potentially get off the GC one day.

It's like playing Russian roulette. It doesn't matter if only 1/6 times will kill you. It's totally different than 0/6.
Sep 12 2015
next sibling parent reply Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
 On Saturday, 12 September 2015 at 06:23:12 UTC, Jonathan M 
 Davis wrote:
 Aside from the few classes in Phobos, its GC usage is almost 
 entirely restricted to when it allocates arrays or when it has 
 to allocate a closure for a delegate, which can happen in some 
 cases when passing predicates to range-based algorithms. 
 Avoiding functions that need to allocate arrays avoids that 
 source of allocation, and using functors or function pointers 
 as predicates avoids having to allocate closures. So, you 
 _can_ end up with GC allocations accidentally in Phobos if 
 you're not careful, but on the whole, the assertion that 
 Phobos uses the GC heavily is FUD - or at least a 
 misunderstanding. But as we make more of the functions use 
 lazy ranges rather than arrays (particularly with regards to 
 strings), and we make more of the code @nogc, it becomes even 
 clearer that the GC isn't involved. Also, improvements to how 
 lambdas are handled should reduce how often closures have to 
 be allocated for them.
Thank you for this. How large is the allocation for a closure for a delegate? Just a pair of pointers?

On Saturday, 12 September 2015 at 13:42:44 UTC, Prudence wrote:
 I don't think it's that simple.

 Saying that it doesn't use it most of the time is not an 
 answer/solution. Using it at all is a problem because one 
 doesn't know when and where. I realize there is a switch 
 now(-vgc), and maybe that is the solution, but you say "well, 
 phobos only uses 0.01% on the GC", yet since you either don't, 
 can't, or won't know where that is, then it might as well be 
 100% if you would like to potentially get off the GC one day.

 It's like playing Russian roulette. It doesn't matter if only 
 1/6 times will kill you. It's totally different than 0/6.
But if you hardly use the GC, how long is it really going to take to run?
Sep 12 2015
parent Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 12 September 2015 at 22:36:00 UTC, Laeeth Isharc 
wrote:
 Thank you for this.  How large is the allocation for closure 
 for a delegate?  Just a pair of pointers?
It depends on what the delegate needs to capture. It makes a copy of the local variables the function is referencing. (Usually pretty small, I'd gamble; how big are your typical function's arguments and local vars?)
Sep 12 2015
prev sibling next sibling parent Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
On Saturday, 12 September 2015 at 13:42:44 UTC, Prudence wrote:
 Saying that it doesn't use it most of the time is not an 
 answer/solution. Using it at all is a problem because one 
 doesn't know when and where. I realize there is a switch 
 now(-vgc), and maybe that is the solution, but you say "well, 
 phobos only uses 0.01% on the GC", yet since you either don't, 
 can't, or won't know where that is, then it might as well be 
 100% if you would like to potentially get off the GC one day.
"you either don't, can't, or won't know where that is"

Just check the signature, no? E.g., from http://dlang.org/phobos/std_string.html:

pure nothrow @nogc @system inout(char)[] fromStringz(inout(char)* cString);

(note the @nogc)
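The compiler can also enforce the signature check for you: mark your own code @nogc and any Phobos call that might allocate on the GC heap is rejected at compile time. A minimal sketch:

```d
import std.string : fromStringz;

// @nogc on the caller makes the compiler reject any hidden GC
// allocation, including ones inside library calls. fromStringz
// carries @nogc in its signature, so this compiles.
@nogc const(char)[] name(const(char)* p)
{
    return fromStringz(p);
    // auto s = new char[](4);  // would be a compile error under @nogc
}
```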
Sep 12 2015
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 12 September 2015 at 13:42:44 UTC, Prudence wrote:
 Using it at all is a problem because one doesn't know when and 
 where.
You do know when and where: it runs when the collect function is called, from wherever called it. D's garbage collector isn't magic; it is just a function that frees memory when the pool runs low.
 It's like playing Russian roulette. It doesn't matter if only 
 1/6 times will kill you. It's totally different than 0/6.
The big difference is the garbage collector doesn't actually kill you. Memory corruption, use-after-free, and double-free bugs on the other hand often do terminate your process.
Sep 12 2015
prev sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Saturday, September 12, 2015 13:42:42 Prudence via Digitalmars-d-learn wrote:
 On Saturday, 12 September 2015 at 06:23:12 UTC, Jonathan M Davis
 wrote:
 On Friday, September 11, 2015 23:29:05 Laeeth Isharc via
 Digitalmars-d-learn wrote:
 On Friday, 11 September 2015 at 21:58:28 UTC, Adam D. Ruppe
 wrote:
 [...]
Seems to be quite a lot of FUD wrt use of standard library and GC, which means also perhaps we don't communicate this point very well as a community. Making Phobos GC-optional perhaps is an ultimate answer. But people seem to think that you're back to C without the GC.
Aside from the few classes in Phobos, its GC usage is almost entirely restricted to when it allocates arrays or when it has to allocate a closure for a delegate, which can happen in some cases when passing predicates to range-based algorithms. Avoiding functions that need to allocate arrays avoids that source of allocation, and using functors or function pointers as predicates avoids having to allocate closures. So, you _can_ end up with GC allocations accidentally in Phobos if you're not careful, but on the whole, the assertion that Phobos uses the GC heavily is FUD - or at least a misunderstanding. But as we make more of the functions use lazy ranges rather than arrays (particularly with regards to strings), and we make more of the code @nogc, it becomes even clearer that the GC isn't involved. Also, improvements to how lambdas are handled should reduce how often closures have to be allocated for them.
I don't think it's that simple. Saying that it doesn't use it most of the time is not an answer/solution. Using it at all is a problem because one doesn't know when and where. I realize there is a switch now(-vgc), and maybe that is the solution, but you say "well, phobos only uses 0.01% on the GC", yet since you either don't, can't, or won't know where that is, then it might as well be 100% if you would like to potentially get off the GC one day. It's like playing Russian roulette. It doesn't matter if only 1/6 times will kill you. It's totally different than 0/6.
If someone wants to avoid the GC entirely, then they need to use @nogc and -vgc to verify that they didn't miss marking something with @nogc somewhere, whether they're using Phobos or not. But regardless, it's still FUD to claim that Phobos uses the GC heavily.

And the reality of the matter is that the vast majority of programs will have _no_ problems with using the GC so long as they don't use it heavily. Programming like you're in Java and allocating everything on the heap will kill performance, but idiomatic D code doesn't do that, and Phobos doesn't do that. Far too many programmers freak out at the thought of D even having a GC and overreact, thinking that they have to root it out completely, when there really is no need to. Plenty of folks have written highly performant code in D using the GC. You just have to avoid doing a lot of allocating and make sure you track down unwanted allocations when you have a performance problem. @nogc will help folks avoid allocations that they don't intend to have, and other improvements to Phobos - like rangifying most of the array/string-based stuff so that very little functionality actually needs to operate on arrays and instead is able to operate on lazy ranges - will help as well.

But the idea that your average D program is going to run into problems with the GC while using Phobos is just plain wrong. The folks who need to care are the rare folks who need extreme enough performance that they can't afford for the GC to _ever_ stop the world. And anyone who cares that much is simply going to have to avoid using new anywhere in their code, and if they use any code that they don't write - Phobos included - they will need to verify whether that code uses the GC, either by marking their own code @nogc or by using -vgc. That's not going to change no matter what we do with Phobos. And it doesn't change the fact that it's just plain wrong to claim that Phobos requires the GC. _Some_ of its functionality does. The vast majority of it doesn't, and anyone who cares enough to make sure that they don't use the GC while using Phobos can mark their code with @nogc to make sure that they don't accidentally use something in Phobos which could allocate on the GC heap.

- Jonathan M Davis
Sep 13 2015
next sibling parent reply ponce <contact gam3sfrommars.fr> writes:
On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis 
wrote:
 But the idea that your average D program is going to run into 
 problems with the GC while using Phobos is just plain wrong. 
 The folks who need to care are the rare folks who need extreme 
 enough performance that they can't afford for the GC to _ever_ 
 stop the world.
Even in that case, not all threads need to be real-time, and you can do threads without GC.

Honestly, I think only people using microcontrollers or really constrained environments that don't have the memory have that problem. I suspect precious few of the GC haters actually have those requirements, or they misrepresent the ways to avoid GC-related problems. Same arguments, but there is a solution for everything:

"Don't want memory overhead" => minimize heap usage, use -vgc / -profile=gc
"Don't want pauses" => unregister thread + @nogc
"Want shorter pauses" => minimize heap usage, use -vgc / -profile=gc
"Want determinism" => ways to do that

GC is basically OK for anything soft-realtime, where you already spend a lot of time to go fast enough. And if you want hard-realtime, well, you wouldn't want malloc either.

It's a non-problem.
Sep 13 2015
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 13 September 2015 at 16:53:20 UTC, ponce wrote:
 GC is basically ok for anything soft-realtime, where you 
 already spend a lot of time to go fast enough. And if you want 
 hard-realtime, well you wouldn't want malloc either.

 It's a non-problem.
If this was true then Go would not have a concurrent collector.
Sep 13 2015
parent reply ponce <contact gam3sfrommars.fr> writes:
On Sunday, 13 September 2015 at 17:00:30 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 13 September 2015 at 16:53:20 UTC, ponce wrote:
 GC is basically ok for anything soft-realtime, where you 
 already spend a lot of time to go fast enough. And if you want 
 hard-realtime, well you wouldn't want malloc either.

 It's a non-problem.
If this was true then Go would not have a concurrent collector.
I was speaking of the D language.
Sep 13 2015
next sibling parent Prudence <Pursuit Happyness.All> writes:
On Sunday, 13 September 2015 at 17:16:02 UTC, ponce wrote:
 On Sunday, 13 September 2015 at 17:00:30 UTC, Ola Fosheim 
 Grøstad wrote:
 On Sunday, 13 September 2015 at 16:53:20 UTC, ponce wrote:
 GC is basically ok for anything soft-realtime, where you 
 already spend a lot of time to go fast enough. And if you 
 want hard-realtime, well you wouldn't want malloc either.

 It's a non-problem.
If this was true then Go would not have a concurrent collector.
I was speaking of the D language.
Of course, that makes it make sense!
Sep 13 2015
prev sibling parent reply Ola Fosheim Grostad <ola.fosheim.grostad+dlang gmail.com> writes:
On Sunday, 13 September 2015 at 17:16:02 UTC, ponce wrote:
 On Sunday, 13 September 2015 at 17:00:30 UTC, Ola Fosheim
 If this was true then Go would not have a concurrent collector.
I was speaking of the D language.
Go only added a concurrent GC now, at version 1.5, and keeps improving it to avoid blocking for more than 10ms. An efficient mark-sweep GC may also affect real-time threads by polluting the caches and reducing available memory bandwidth. The theoretical limit for a 10ms mark-sweep collection on current desktop CPUs is 60 megabytes at peak performance. That means you'll have to stay below 30 MiB in total memory use with pointers. Not a non-issue.
Sep 13 2015
parent ponce <contact gam3sfrommars.fr> writes:
On Sunday, 13 September 2015 at 19:39:20 UTC, Ola Fosheim Grostad 
wrote:
 The theoretical limit for 10ms mark sweep collection on current 
 desktop cpus is 60 megabytes at peak performance. That means 
 you'll have to stay below 30 MiB in total memory use with 
 pointers.
30 MiB of scannable heap. My point is that we now have the tools to reduce that amount of memory with -profile=gc
Sep 13 2015
prev sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Sunday, September 13, 2015 16:53:18 ponce via Digitalmars-d-learn wrote:
 On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis
 wrote:
 But the idea that your average D program is going to run into
 problems with the GC while using Phobos is just plain wrong.
 The folks who need to care are the rare folks who need extreme
 enough performance that they can't afford for the GC to _ever_
 stop the world.
Even in that case, not all threads need to be real-time, and you can run threads without the GC. Honestly, I think only people using microcontrollers or really constrained environments that don't have the memory have that problem. I suspect precious few of the GC haters actually have those requirements, or they misrepresent the ways to avoid GC-related problems. Same arguments, but there is a solution for everything:

"Don't want memory overhead" => minimize heap usage, use -vgc / -profile=gc
"Don't want pauses" => unregister the thread + @nogc
"Want shorter pauses" => minimize heap usage, use -vgc / -profile=gc
"Want determinism" => there are ways to do that

GC is basically OK for anything soft-realtime, where you already spend a lot of time to go fast enough. And if you want hard-realtime, well, you wouldn't want malloc either.

It's a non-problem.
There _are_ some programs that simply cannot afford a stop-the-world GC. For instance, this has come up in discussions on games where a certain framerate needs to be maintained. Even a 100 ms stop would be way too much for them. In fact, it came up with the concurrent GC that was presented at DConf 2013 that it would likely have to be guaranteed to stop the world for less than 10 ms (or something in that range anyway) to be acceptable for such environments. So, it _is_ a problem for some folks.

That being said, it is _not_ a problem for most folks, and the folks who have those sorts of performance requirements frequently can't even use malloc after the program has gotten past its startup phase. So, many of them would simply be allocating up front and then only reusing existing memory for the rest of the program's run, whether that memory was GC-allocated or malloced. For instance, as I understand it, Warp used the GC, but it allocated everything up front and didn't allocate once it got going, so the GC wasn't a performance problem for it at all, and it's _very_ fast. But there are other solutions, such as having the critical threads not use the GC (as you mentioned), which make it so that you can use the GC in parts of your program while still avoiding its performance costs in the critical portions.

Regardless, idiomatic D involves a lot more stack allocations than you often get even in C++, so GC usage tends to be low in programs that use idiomatic D, and there are ways to work around the cases where the GC actually turns out to be a bottleneck. For the most part, the folks who are freaking out about the GC and insisting that it's simply evil and shouldn't exist are losing out on some great stuff. And even with more lazy ranges in Phobos and more consistent @nogc usage, I suspect that many of them will continue to complain about Phobos using the GC even though it uses it pretty minimally.

- Jonathan M Davis
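The allocate-up-front pattern described above can be sketched like this (buffer counts, sizes, and names are illustrative, not taken from Warp):

```d
import core.memory : GC;

ubyte[][] pools;   // preallocated buffers, illustrative

void main()
{
    // Start-up phase: do all GC allocations now.
    foreach (i; 0 .. 16)
        pools ~= new ubyte[](1024 * 1024);

    // Steady state: nothing allocates, so no collection is triggered.
    // GC.disable() makes that explicit - allocation still works, but
    // automatic collections are suppressed (the runtime may still
    // collect as a last resort to avoid running out of memory).
    GC.disable();
    scope (exit) GC.enable();

    // ... main loop reuses the preallocated pools ...
}
```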
Sep 13 2015
parent reply Ola Fosheim Grostad <ola.fosheim.grostad+dlang gmail.com> writes:
On Monday, 14 September 2015 at 00:41:28 UTC, Jonathan M Davis 
wrote:
 stop-the-world GC. For instance, this has come up in 
 discussions on games where a certain framerate needs to be 
 maintained. Even a 100 ms stop would be way too much for them. 
 In fact, it came up with the concurrent GC that was presented 
 at dconf 2013 that it would likely have to be guaranteed to 
 stop the world for less than 10 ms (or something in that range 
 anyway) to be acceptable for such environments. So, it _is_ a 
 problem for some folks.
In the render loop you only have ~15 ms per frame, so it would be more like 2 ms (not realistic). The 10 ms target is for high-performance servers and regular interactive applications.
 That being said, it is _not_ a problem for most folks, and the 
 folks who have those sorts of performance requirements 
 frequently can't even use malloc after the program has gotten 
 past its startup phase.
I don't agree with this. You can use your own allocators that don't syscall in critical areas.
 GC-allocated or malloced. For instance, as I understand it, 
 Warp used the GC, but it allocated everything up front and 
 didn't allocate once it got going, so the GC wasn't a 
 performance problem for it at all, and it's _very_ fast.
A C preprocessor is a simple program that has to allocate for macro definitions, but the rest can just use static buffers... so it should not be a problem.
 Regardless, idiomatic D involves a lot more stack allocations 
 than you often get even in C++, so GC usage tends to be low in
Really? I use VLAs in my C++ (a C extension) and use very few mallocs after init. In C++ even exceptions can be put outside the heap. Just avoid STL after init and you're good.
Sep 13 2015
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Monday, September 14, 2015 01:12:02 Ola Fosheim Grostad via
Digitalmars-d-learn wrote:
 On Monday, 14 September 2015 at 00:41:28 UTC, Jonathan M Davis
 wrote:
 Regardless, idiomatic D involves a lot more stack allocations
 than you often get even in C++, so GC usage tends to be low in
Really? I use VLAs in my C++ (a C extension) and use very few mallocs after init. In C++ even exceptions can be put outside the heap. Just avoid STL after init and you're good.
From what I've seen of C++ and understand of typical use cases from other folks, that's not at all typical of C++ usage (though there are enough people using C++ across a wide enough spectrum of environments and situations that there's obviously going to be quite a wide spread of what folks do with it).

A lot of C++ folks use classes heavily, frequently allocating them on the heap. Major C++ libraries such as Qt certainly are designed with the idea that you're going to be allocating as the program runs. And C++ historically has been touted fairly heavily by many folks as an OO language, in which case inheritance (and therefore heap allocation) is used heavily by many programs. And with C++11/14, more mechanisms for safely handling memory have been added, thereby further encouraging certain types of heap allocations in your typical C++ program - e.g. make_shared has become the recommended way to allocate memory in most cases.

Aside from the folks who are trying to get the bare-metal performance that stuff like games requires, most folks are going to use the STL quite a bit. And if they aren't, they're probably using similar classes from a 3rd party library such as Qt. It's the folks who are in embedded environments or who have much more restrictive performance requirements who are more likely to avoid the STL or do stuff like avoid heap allocations after the program has been initialized.

So, a _lot_ of C++ code uses the heap quite heavily, and I expect that very little of it tries to allocate everything up front. I, for one, have never worked on an application where that even made sense aside from something very small. I know that applications like that definitely exist, but from everything I've seen, I'd expect them to be the exception to the rule rather than the norm.

Regardless, idiomatic D promotes ranges, which naturally help reduce heap allocation. It also means using structs heavily and classes sparingly (though there are plenty of cases where inheritance is required and thus classes get used).
And while arrays/strings get allocated on the heap, slicing seriously reduces how often they need to be copied in memory, which reduces heap allocations. So, idiomatic D encourages programs to be written in a way that keeps heap allocation to a minimum. The big place that it happens in most D programs is probably strings, but slicing helps considerably with that, and even some of the string stuff can be made to live on the stack rather than the heap (which is what a lot of Walter's recent work in Phobos has been for - making string-based stuff work as lazy ranges), reducing heap allocations for strings even further.

C++, on the other hand, does not have such idioms as the norm or promoted in any serious way. So, programmers are much more likely to use idioms that involve a lot of heap allocations, and the language and standard library don't really have much to promote idioms that avoid heap allocations (and std::string definitely isn't designed to avoid copying). You can certainly do it - and many do - but since it's not really what's promoted by the language or standard library, it's less likely to happen in your average program. It's much more likely to be done by folks who avoid the STL.

So, you _can_ have low heap allocation in a C++ program, and many people do, but from what I've seen, that really isn't the norm across the C++ community in general.

- Jonathan M Davis
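The slicing idiom described above can be sketched in a few lines: a slice is just a pointer/length pair into the existing array, so taking one copies nothing and allocates nothing.

```d
void main()
{
    import std.string : indexOf;

    string text = "key=value";
    auto eq = text.indexOf('=');

    // Both slices are views into `text`'s memory:
    // no copying, no heap allocation.
    string key = text[0 .. eq];
    string val = text[eq + 1 .. $];

    assert(key == "key" && val == "value");
    assert(key.ptr == text.ptr);   // same underlying memory
}
```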
Sep 14 2015
next sibling parent ponce <contact gam3sfrommars.fr> writes:
On Monday, 14 September 2015 at 20:54:55 UTC, Jonathan M Davis 
wrote:
 So, you _can_ have low heap allocation in a C++ program, and 
 many people do, but from what I've seen, that really isn't the 
 norm across the C++ community in general.

 - Jonathan M Davis
Fully agreed. C++ in the wild often makes lots of copies of data structures, sometimes by mistake (like a std::vector passed by value instead of by reference). When you copy an aggregate by mistake, every field itself gets copied, etc. Copies, copies, copies everywhere.
Sep 14 2015
prev sibling parent Ola Fosheim Grostad <ola.fosheim.grostad+dlang gmail.com> writes:
On Monday, 14 September 2015 at 20:54:55 UTC, Jonathan M Davis 
wrote:
 On Monday, September 14, 2015 01:12:02 Ola Fosheim Grostad via 
 Digitalmars-d-learn wrote:
 On Monday, 14 September 2015 at 00:41:28 UTC, Jonathan M Davis 
 wrote:
 Regardless, idiomatic D involves a lot more stack 
 allocations than you often get even in C++, so GC usage 
 tends to be low in
Really? I use VLAs in my C++ (a C extension) and use very few mallocs after init. In C++ even exceptions can be put outside the heap. Just avoid STL after init and you're good.
From what I've seen of C++ and understand of typical use cases 
from other
folks, that's not at all typical of C++ usage (though there's enough people using C++ across a wide enough spectrum of environments and situations that there's obviously going to be quite a wide spread of what folks do with it). A lot of C++ folks use classes heavily, frequently allocating them on the heap.
Dude, my C++ programs are all static ring buffers and stack allocations. :) It varies a lot. Some C++ programmers turn off everything runtime-related and use it as a better C. When targeting mobile you have to be careful about wasting memory...
 types of heap allocations in your typical C++ program - e.g. 
 make_shared has become the recommended way to allocate memory 
 in most cases
I use unique_ptr with a custom deallocator (a custom freelist), so it can be done outside the heap. :)
 And while folks who are trying to get the bare metal 
 performance that some stuff like games require, most folks are 
 going to use the STL quite a bit.
I use std::array, and my own array view type to reference it. array_view is coming in C++17, I think. Kinda like D slices. STL/string/iostream is for me primarily useful for init and testing...
 such as Qt. It's the folks who are in embedded environments or 
 who have much more restrictive performance requirements who are 
 more likely to avoid the STL or do stuff like avoid heap 
 allocations after the program has been initialized.
Mobile audio/graphics...
 So, you _can_ have low heap allocation in a C++ program, and 
 many people do, but from what I've seen, that really isn't the 
 norm across the C++ community in general.
I don't think there is a C++ community ;-) I think C++ programmers are quite different based on what they do and when they started using it. I only use it where performance/latency matters. C++ is too annoying (time consuming) for full-blown apps IMHO. Classes are easy to stack allocate, though; there is no need to heap allocate most of the time. Lambdas in C++ are often just stack-allocated objects, so not so different from D's "ranges" (iterators) anyhow. I don't see my own programs suffer from C++isms anyway...
Sep 14 2015
prev sibling parent reply Ola Fosheim Grøstad writes:
On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis 
wrote:
 the GC heavily. And the reality of the matter is that the vast 
 majority of programs will have _no_ problems with using the GC 
 so long as they don't use it heavily. Programming like you're 
 in Java and allocating everything on the heap will kill 
 performance, but idiomatic D code doesn't do that, and Phobos 
 doesn't do that. Far too many programmers freak out at the 
 thought of D even having a GC and overreact thinking that they 
 have to root it out completely, when there really is no need 
 to. Plenty of folks how written highly performant code in D 
 using the GC. You just have to avoid doing a lot of allocating 
 and make sure you track down unwanted allocations when you have 
 a performance problem.
I don't understand this argument. Even if the GC heap only contains a single live object, you still have to scan ALL memory that contains pointers. So how does programming like you do in Java affect anything related to the GC? Or are you saying that finalization is taking up most of the time?
Sep 13 2015
next sibling parent reply Prudence <Pursuit Happyness.All> writes:
On Sunday, 13 September 2015 at 16:58:22 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis 
 wrote:
 the GC heavily. And the reality of the matter is that the vast 
 majority of programs will have _no_ problems with using the GC 
 so long as they don't use it heavily. Programming like you're 
 in Java and allocating everything on the heap will kill 
 performance, but idiomatic D code doesn't do that, and Phobos 
 doesn't do that. Far too many programmers freak out at the 
 thought of D even having a GC and overreact thinking that they 
 have to root it out completely, when there really is no need 
 to. Plenty of folks how written highly performant code in D 
 using the GC. You just have to avoid doing a lot of allocating 
 and make sure you track down unwanted allocations when you 
 have a performance problem.
I don't understand this argument. Even if the GC heap only contains a single live object, you still have to scan ALL memory that contains pointers. So how does programming like you do in Java affect anything related to the GC? Or are you saying that finalization is taking up most of the time?
The problem is that these people arguing that the GC is the holy grail simply use statistics for their reasoning. They pigeonhole everyone into the same box they exist in, and you find out it's useless to argue with them because their logic is flawed.

What if I happen to write an RT app that happens to use a part of Phobos that happens to rely heavily on the GC? Am I supposed to use -vgc all the time to avoid that? Do I avoid Phobos because 3 functions in it use the GC? Am I supposed to memorize a table of all the places Phobos uses the GC and then roll my own to avoid them?

The fact is that the proponents of the GC such as JMD do not write RT apps and couldn't care less about that aspect. This is why they make such easy claims. For them, RT is just theoretical mumbo jumbo that doesn't exist in the real world. The GC is also, for them, a safety blanket so they can be lazy and not have to keep track of all the things they should be. This type of mentality seems to run rampant among the contributors of D. They simply cannot understand the perspective of the other side (or refuse to).

Statistics has nothing to do with facts. The fact is, for a hard real-time app, the GC and its stop-the-world behavior is a no-go. As long as the mentality exists that the GC is good enough because 99% of Phobos doesn't use it, or 99% of apps don't need RT, or whatever, D will never be as powerful as it can be.

Basically: I know you need your snotty safety blanket and it works for you, but I don't want to use it! Don't force me! I won't force you to give up your blanket, but don't force me to use it. The road goes both ways; stop trying to make it one way. (The argument is fundamentally different. They want to exclude; I want to include.)

Of course, the real issue is that it will take someone with the opposite point of view to actually do anything about it, because it's obvious they won't work in a direction they think is a waste. So, ultimately, it's people like me who have to step up and actually do the work. I am hesitant because it's always an uphill battle with such people. Instead of working together, they have to make it a struggle. (It's always "Why are you trying to take my safety blanket away!!! wa, wa wa" and tears follow.)
Sep 13 2015
parent Jonathan M Davis via Digitalmars-d-learn writes:
On Sunday, September 13, 2015 17:14:05 Prudence via Digitalmars-d-learn wrote:
On Sunday, 13 September 2015 at 16:58:22 UTC, Ola Fosheim Grøstad
 wrote:
 On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis
 wrote:
 the GC heavily. And the reality of the matter is that the vast
 majority of programs will have _no_ problems with using the GC
 so long as they don't use it heavily. Programming like you're
 in Java and allocating everything on the heap will kill
 performance, but idiomatic D code doesn't do that, and Phobos
 doesn't do that. Far too many programmers freak out at the
 thought of D even having a GC and overreact thinking that they
 have to root it out completely, when there really is no need
 to. Plenty of folks how written highly performant code in D
 using the GC. You just have to avoid doing a lot of allocating
 and make sure you track down unwanted allocations when you
 have a performance problem.
I don't understand this argument. Even if the GC heap only contains a single live object, you still have to scan ALL memory that contains pointers. So how does programming like you do in Java affect anything related to the GC? Or are you saying that finalization is taking up most of the time?
What if I happen to write a RT app that happens to use a part of phobo's that happens to heavily rely on the GC? Am I suppose to use -vgs all the time to avoid that? Do I avoid phobo's because 3 functions in it use the GC? Am I suppose to memorize a table of all the places phobo's uses the GC and then roll my own to avoid them?
@nogc was added specifically to support folks who want to guarantee that they aren't using the GC. If you want to guarantee that your function isn't using the GC, then mark it with @nogc, and if you try to call a function in it that isn't @nogc (be it because it's not marked with @nogc, or because it's a templated function that wasn't inferred to be @nogc), then you'll get a compilation error, and you'll know that you need to do something different. Whether the function you're calling is in Phobos or a 3rd party library doesn't really matter. If you want to be sure that your code is @nogc, you'll just have to mark it with @nogc, and you'll catch any accidental or otherwise unknown allocations. You can then use -vgc to figure out exactly what is allocating in a function that you thought should be @nogc but can't be. But you don't have to use it simply to find out whether your code is using the GC or not. @nogc does that.

Yes, unlike C++, with D, if you don't want to use a GC at all, then you're going to have to be careful about how you write your code, because some features use the GC (albeit not many), and some 3rd party code that you might want to use (be it the standard library or someone else's code) is likely going to end up using the GC, whereas in C++, that's not a concern. But having the GC makes a lot of programs easier to write, it solves some safety concerns with regard to memory, and it allows us to have a few features that C++ doesn't. If you're using the GC, D can guarantee memory safety in a way that C++ can't. And that can be a big gain. And yes, that can be a bit of a pain for those folks who can't use the GC, but that's not the majority of programs, and the language and compiler do have tools for supporting folks who insist on minimizing GC usage or even avoiding it completely. So, it's not like the anti-GC folks are being left out in the cold here.
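A minimal sketch of how that compile-time check plays out (function names are illustrative):

```d
// Compiles fine: no GC allocation anywhere.
@nogc int sum(const(int)[] xs)
{
    int total;
    foreach (x; xs)
        total += x;
    return total;
}

@nogc int[] makeArray()
{
    // Uncommenting the next line is a compile-time error, because
    // `new` allocates on the GC heap:
    // return new int[](4);   // Error: cannot use 'new' in @nogc function
    return null;
}
```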
And work is being done to improve the GC and to make sure that Phobos doesn't allocate using the GC except when it absolutely needs to. And ranges are really helping with that. Regardless, it's still the case that Phobos has never used the GC heavily. It just hasn't always avoided it everywhere it could or should, and that's being fixed. But it's always going to use the GC in some places, because some things simply require it, and the language does have a GC. Those few pieces of functionality will simply have to be avoided by anyone who insists on never using the GC, and @nogc will help them ensure that they don't use that functionality.

- Jonathan M Davis
Sep 13 2015
prev sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Sunday, September 13, 2015 16:58:21 Ola Fosheim Grøstad via
Digitalmars-d-learn wrote:
 On Sunday, 13 September 2015 at 15:35:07 UTC, Jonathan M Davis
 wrote:
 the GC heavily. And the reality of the matter is that the vast
 majority of programs will have _no_ problems with using the GC
 so long as they don't use it heavily. Programming like you're
 in Java and allocating everything on the heap will kill
 performance, but idiomatic D code doesn't do that, and Phobos
 doesn't do that. Far too many programmers freak out at the
 thought of D even having a GC and overreact thinking that they
 have to root it out completely, when there really is no need
 to. Plenty of folks how written highly performant code in D
 using the GC. You just have to avoid doing a lot of allocating
 and make sure you track down unwanted allocations when you have
 a performance problem.
I don't understand this argument. Even if the GC heap only contains a single live object, you still have to scan ALL memory that contains pointers.
 So how does programming like you do in Java affect anything
 related to the GC?

 Or are you saying that finalization is taking up most of the time?
Only the stack and the GC heap get scanned unless you tell the GC about memory that was allocated by malloc or some other mechanism. malloced memory won't be scanned by default. So, if you're using the GC minimally and coding in a way that doesn't involve needing to tell the GC about a bunch of malloced memory, then the GC won't have all that much to scan. And while the amount of memory that the GC has to scan does affect the speed of a collection, in general, the less memory that's been allocated by the GC, the faster a collection is.

Idiomatic D code uses the stack heavily and allocates very little on the GC heap. Classes are used only rarely, and ranges are generally preferred over arrays, so idiomatic D code ends up with structs on the stack rather than classes on the heap, and the number of allocations required for arrays goes down considerably. So, there simply isn't all that much garbage to collect, and memory isn't being constantly allocated, so it's a lot rarer that a collection needs to be run in order to get more memory.

So, the big win is simply not allocating much on the GC heap, whether it's because it's allocated on the malloced heap or because it's on the stack. The result is that even if a collection isn't super fast, collections are actually relatively rare. So, for a _lot_ of idiomatic D code, collections simply won't be happening often, and as long as you don't have realtime constraints, then having an occasional collection that's a bit longer than desirable isn't necessarily a problem - though we definitely do want to improve the GC so that collections are faster and thus less likely to cause performance problems (work has been done recently on that; Martin Nowak was supposed to give a talk on it at DConf this year, but he missed his flight).
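The "tell the GC about malloced memory" mechanism mentioned above looks like this with druntime's core.memory API (the size here is illustrative):

```d
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

void main()
{
    enum size = 64;
    void* block = malloc(size);   // invisible to the GC by default

    // Only needed if the block will hold pointers into the GC heap;
    // registered ranges are scanned during collections.
    GC.addRange(block, size);

    // ... use the block ...

    GC.removeRange(block);        // unregister before freeing
    free(block);
}
```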
So, while the fact that D's GC is less than stellar is certainly a problem, and we would definitely like to improve that, the idioms that D code typically uses seriously reduce the number of performance problems that we get. - Jonathan M Davis
Sep 13 2015
next sibling parent reply Ola Fosheim Grøstad writes:
On Monday, 14 September 2015 at 00:53:58 UTC, Jonathan M Davis 
wrote:
 So, while the fact that D's GC is less than stellar is 
 certainly a problem, and we would definitely like to improve 
 that, the idioms that D code typically uses seriously reduce 
 the number of performance problems that we get.
What D needs is some way for a static analyzer to be certain that a pointer does not point to a specific GC heap. And that means language changes... one way or the other. Without language changes it becomes very difficult to reduce the amount of memory scanned without sacrificing memory safety. And I don't think a concurrent GC is realistic given the complexity and performance penalties. The same people who complain about GC would not accept performance hits on pointer-writes. That would essentially make D and Go too similar IMO.
Sep 14 2015
parent reply Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
On Monday, 14 September 2015 at 08:57:07 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 14 September 2015 at 00:53:58 UTC, Jonathan M Davis 
 wrote:
 So, while the fact that D's GC is less than stellar is 
 certainly a problem, and we would definitely like to improve 
 that, the idioms that D code typically uses seriously reduce 
 the number of performance problems that we get.
What D needs is some way for a static analyzer to be certain that a pointer does not point to a specific GC heap. And that means language changes... one way or the other. Without language changes it becomes very difficult to reduce the amount of memory scanned without sacrificing memory safety.
Personally, when I make a strong claim about something and find that I am wrong (the claim that D needs to scan every pointer), I take a step back and consider my view rather than pressing harder. It's beautiful to be wrong because through recognition of error, growth. If recognition.
 And I don't think a concurrent GC is realistic given the 
 complexity and performance penalties. The same people who 
 complain about GC would not accept performance hits on 
 pointer-writes. That would essentially make D and Go too 
 similar IMO.
Given one was written by one (very smart) student for his PhD thesis, and that, as I understand it, that formed the basis of Sociomantic's concurrent garbage collector (correct me if I am wrong), and that this is being ported to D2 - and whether or not it is released, success will spur others to follow - it strikes me as a problematic claim that developing one isn't realistic, unless one is deeply embedded in the nitty gritty of the problem (because theory and practice are more different in practice than they are in theory!). There is etcimon's work too (at the research stage).

Don't underestimate, too, how future corporate support combined with an organically growing community may change what's possible. Andy Smith gave his talk based on his experience at one of the largest and well-run hedge funds. An associate who sold a decent sized marketing group got in contact to thank me for posting links on D, as it helped him implement a machine-learning problem better. And if I look at what's in front of me, I really am not aware of a better solution to the needs I have, which I am pretty sure are needs that are more generally shared - corporate inertia may be a nuisance, but it is also a source of opportunity for others.

In response to your message earlier where you suggested that Sociomantic was an edge case of little relevance for the rest of us: I made that point in response to the claim that D had no place for such purposes. It's true that being able to do something doesn't mean it is a good idea, but really, having seen them speak and looked at the people they hire, I would be surprised if they do not know what they are doing. (I would say the same if they had never been bought.) And they say that using D has significantly lowered their costs compared to their competitors. It's what I have been finding, too, dealing with data sets that are for now by no means 'big' but will be soon enough.
It's also a human group phenomenon that it's very difficult to do something for the first time, and the more people that follow, the easier it is for others. So the edge case of yesteryear shall be the best practice of the future. One sees this also with allocators, where Andrei's library is already beginning to be integrated into different projects. I had never even heard of D two years ago, and I had had approaching a twenty-year break from doing a lot of programming. But it wasn't difficult to pick up and use effectively.

Clearly, latency and performance hits are different things, and the category of people who care about performance only partially intersects with those who care about latency.

Part of what I do involves applying the principle of contrarian thinking, and I can say that it is very useful, and not just in the investment world: http://www.amazon.com/The-Contrary-Thinking-Humphrey-Neill/dp/087004110X

On the other hand, there is also the phenomenon of just being contrary. One sometimes has the impression that some people like to argue for the sake of it. Nothing wrong with that, provided one understands the situation. Poking holes at things without taking any positive steps to fix them is understandable for people who haven't a choice about their situation, but in my experience it is rarely effective in making the world better.
Sep 14 2015
next sibling parent Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
On Monday, 14 September 2015 at 13:56:16 UTC, Laeeth Isharc wrote:
 An associate who sold a decent sized marketing group
Should read marketmaking. Making prices in listed equity options.
Sep 14 2015
prev sibling parent reply Ola Fosheim Grøstad writes:
On Monday, 14 September 2015 at 13:56:16 UTC, Laeeth Isharc wrote:
 Personally, when I make a strong claim about something and find 
 that I am wrong (the claim that D needs to scan every pointer), 
 I take a step back and consider my view rather than pressing 
 harder.  It's beautiful to be wrong because through recognition 
 of error, growth.  If recognition.
The claim is correct: you need to follow every pointer that through some indirection may lead to a pointer that may point into the GC heap. Not doing so will lead to unverified memory unsafety.
 Given one was written by one (very smart) student for his PhD 
 thesis, and that as I understand it that formed the basis of 
 Sociomantic's concurrent garbage collector (correct me if I am 
 wrong), and that this is being ported to D2, and whether or not 
 it is released, success will spur others to follow - it strikes
As it has been described, it is fork() based and unsuitable for the typical use case.
 provided one understands the situation.  Poking holes at things 
 without taking any positive steps to fix them is understandable 
 for people that haven't a choice about their situation, but in 
 my experience is rarely effective in making the world better.
Glossing over issues that need attention is not a good idea. It wastes other people's time. I am building my own libraries, also for memory management with move semantics, etc.
Sep 14 2015
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Monday, September 14, 2015 14:19:30 Ola Fosheim Grøstad via
Digitalmars-d-learn wrote:
 On Monday, 14 September 2015 at 13:56:16 UTC, Laeeth Isharc wrote:
 The claim is correct: you need to follow every pointer that
 through some indirection may lead to a pointer that may point
 into the GC heap. Not doing so will lead to unverified memory
 unsafety.

 Given one was written by one (very smart) student for his PhD
 thesis, and that as I understand it that formed the basis of
 Sociomantic's concurrent garbage collector (correct me if I am
 wrong), and that this is being ported to D2, and whether or not
 it is released, success will spur others to follow - it strikes
As it has been described, it is fork() based and unsuitable for the typical use case.
I'm not sure why it wouldn't be suitable for the typical use case. It's quite performant. It would still not be suitable for many games and environments that can't afford to stop the world for more than a few milliseconds, but it brings the stop-the-world time down considerably, making the GC suitable for more environments than it is now, and I'm not aware of any serious downsides to it on a *nix system.

Its achilles heel is Windows. On *nix, forking is cheap, but on Windows, it definitely isn't. So, a different mechanism would be needed to make the concurrent GC work on Windows, and I don't know if Windows really provides the necessary tools to do that, though I know that some folks were looking into it at least at the time of Leandro's talk. So, we're either going to need to figure out how to get the concurrent GC working on Windows via some mechanism other than fork, or Windows is going to need a different solution to get that kind of improvement out of the GC.

- Jonathan M Davis
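The fork-based scheme under discussion can be sketched roughly like this (POSIX-only; `markSnapshot` and `publishResults` are hypothetical stand-ins for the collector's real work):

```d
// Rough sketch of fork-based concurrent collection (POSIX only).
import core.sys.posix.sys.wait : waitpid;
import core.sys.posix.unistd : _exit, fork;

void collect()
{
    immutable pid = fork();   // copy-on-write snapshot of the whole heap
    if (pid == 0)
    {
        // Child: marks a frozen image of the heap. The parent keeps
        // running; the kernel copies only the pages either side mutates.
        // markSnapshot();     // hypothetical
        // publishResults();   // hypothetical
        _exit(0);
    }
    // Parent: a real collector would defer this wait or handle it
    // asynchronously so mutator threads are barely paused.
    int status;
    waitpid(pid, &status, 0);
}
```

This is also why Windows is the hard case: there is no cheap copy-on-write fork equivalent there, so the snapshot would have to come from a different mechanism, such as write-watched pages.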
Sep 14 2015
parent Ola Fosheim Grostad <ola.fosheim.grostad+dlang gmail.com> writes:
On Monday, 14 September 2015 at 20:34:03 UTC, Jonathan M Davis 
wrote:
 I'm not sure why it wouldn't be suitable for the typical use 
 case. It's quite performant. It would still not be suitable for 
 many games and environments that can't afford to stop the world 
 for more than a few milliseconds, but it brings the stop the 
 world time down considerably, making the GC more suitable for 
 more environments than it would be now, and I'm not aware of 
 any serious downsides to it on a *nix system.
For me, a concurrent GC implies interactive applications or web services that are memory constrained/diskless. You cannot prevent triggering actions that write all over memory during collection without taking special care, like avoiding RC.

A fork can potentially double memory consumption. The GC by itself uses ~2x memory; with fork you have to plan for 3-4x. In the cloud you pay for extra RAM, so configuring the app to a fixed-size heap that matches the instance RAM capacity is useful. With fork you just have to play it safe and halve the heap size. So: more collections and less utilized RAM per dollar with fork. Only testing will show the effect, but it does not sound promising for my use cases.
Sep 14 2015
prev sibling parent Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
On Monday, 14 September 2015 at 00:53:58 UTC, Jonathan M Davis 
wrote:
 Only the stack and the GC heap get scanned unless you tell the 
 GC about memory that was allocated by malloc or some other 
 mechanism. malloced memory won't be scanned by default. So, if 
 you're using the GC minimally and coding in a way that doesn't 
 involve needing to tell the GC about a bunch of malloced 
 memory, then the GC won't have all that much to scan. And while 
 the amount of memory that the GC has to scan does affect the 
 speed of a collection, in general, the less memory that's been 
 allocated by the GC, the faster a collection is.

 Idiomatic D code uses the stack heavily and allocates very 
 little on the GC heap.
...
 So, while the fact that D's GC is less than stellar is 
 certainly a problem, and we would definitely like to improve 
 that, the idioms that D code typically uses seriously reduce 
 the number of performance problems that we get.

 - Jonathan M Davis
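The stack-and-ranges idiom described above can be sketched as follows (the function name is illustrative):

```d
// Lazy range pipeline: no GC allocation, everything lives on the stack.
import std.algorithm : filter, map, sum;
import std.range : iota;

int sumOfSquaredEvens(int n) @nogc nothrow pure @safe
{
    // No intermediate arrays are built; elements flow through lazily,
    // and @nogc has the compiler verify nothing touches the GC heap.
    return iota(n).filter!(x => x % 2 == 0).map!(x => x * x).sum;
}

unittest
{
    assert(sumOfSquaredEvens(5) == 0 + 4 + 16); // squares of 0, 2, 4
}
```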
Thank you for your posts on this (and the others), Jonathan. I appreciate your taking the time to write so carefully and thoroughly, and I learn a lot from reading your work.
Sep 14 2015
prev sibling parent Kagamin <spam here.lot> writes:
On Friday, 11 September 2015 at 17:29:47 UTC, Prudence wrote:
 I don't care about "maybe" working. Since the array is hidden 
 inside a class I can control who and how it is used and deal 
 with the race conditions.
Looks like destruction slipped out of your control. That is solved by making the array an instance member of a wrapper singleton; then you control its lifetime:
 class MySharedArrayWrapper
 {
     private Array!(int) a;

 }

 and instead I use

 static shared MySharedArrayWrapper;
You will have one static instance of wrapper and it will have one instance of the array.
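Fleshed out, such a wrapper might look like this (the names and the locking choice are illustrative; `__gshared` deliberately bypasses `shared` type-checking, so the class must enforce all synchronization itself):

```d
import std.container.array : Array;

final class SharedIntArray
{
    private Array!int a;                  // plain, unshared storage

    // One instance visible to all threads; we guard it ourselves.
    private __gshared SharedIntArray instance;

    static SharedIntArray get()
    {
        synchronized                      // guards lazy construction
        {
            if (instance is null)
                instance = new SharedIntArray;
            return instance;
        }
    }

    void add(int x)        { synchronized (this) a.insertBack(x); }
    int  opIndex(size_t i) { synchronized (this) return a[i]; }
    size_t length()        { synchronized (this) return a.length; }
}
```

Because the array itself is never typed `shared`, `Array`'s ordinary methods compile unchanged; the wrapper's mutex is what makes cross-thread use safe.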
Sep 13 2015
prev sibling parent "Kagamin" <spam here.lot> writes:
You can try to write a wrapper for the array that is aware of concurrency.
Sep 11 2015