
digitalmars.D - LRU cache for ~=

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
I just wrote this to Sean and Walter and subsequently discussed it with 
Walter. Walter thinks this should work. Does anyone have the time and 
inclination to test this out? It would involve hacking into druntime's 
implementation of ~= (I'm not sure what the function name is). I'd 
really appreciate this; I'm overloaded as it is.

==================

In wake of the recent demise of T[new], I was thinking of finding ways 
of making ~= efficient for T[].

Currently ~= is slow because accessing GC.sizeOf(void*) acquires a 
global lock and generally must figure out a lot of things about the 
pointer to make a decision.

Also, ~= is dangerous because it allows slices to stomp over other slices.

I was thinking of solving these issues by keeping an LRU (Least Recently 
Used) cache inside the implementation of ~=. The LRU would only have a 
few entries (4-8) and would store the parameters of the last ~= calls, 
and their cached capacities.

So whenever code calls arr ~= b, the LRU is searched first. If the 
system finds "arr" (both bounds) in the LRU, that means the cached 
capacity is correct and can solve the matter without an actual trip to 
the GC at all! Otherwise, do the deed and cache the new slice and the 
new capacity.

This also solves the lack of safety: if you request a growth on an array 
you just grew, it's impossible to have a valid slice beyond that array.

This LRU would allow us to keep the slice API as it currently is, and 
also at excellent efficiency.
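The lookup could be sketched roughly as follows. This is a hypothetical illustration in C rather than D, and every name in it is invented; the real implementation would live in druntime and keep the cache thread-local:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the proposed cache: each entry records BOTH
   bounds of a slice last grown by ~=, plus the capacity of its block. */
enum { CACHE_SIZE = 8 };

typedef struct {
    char  *begin;     /* slice.ptr                */
    char  *end;       /* slice.ptr + slice.length */
    size_t capacity;  /* bytes usable from begin  */
} CacheEntry;

static CacheEntry cache[CACHE_SIZE];  /* thread-local in the real thing */
static int cache_len = 0;

/* Linear search; both bounds must match for a hit. On a hit the entry
   moves to the front, so hot slices are found in a comparison or two. */
static CacheEntry *lookup(char *begin, char *end)
{
    for (int i = 0; i < cache_len; i++) {
        if (cache[i].begin == begin && cache[i].end == end) {
            CacheEntry hit = cache[i];
            memmove(&cache[1], &cache[0], (size_t)i * sizeof(CacheEntry));
            cache[0] = hit;
            return &cache[0];
        }
    }
    return NULL;  /* miss: take the slow GC path, then remember() */
}

/* After a miss (and the trip to the GC), cache the new slice, evicting
   the least recently used entry when full. */
static void remember(char *begin, char *end, size_t capacity)
{
    int n = cache_len < CACHE_SIZE ? cache_len : CACHE_SIZE - 1;
    memmove(&cache[1], &cache[0], (size_t)n * sizeof(CacheEntry));
    cache[0] = (CacheEntry){begin, end, capacity};
    if (cache_len < CACHE_SIZE) cache_len++;
}
```

On a hit, ~= can grow the slice in place up to the cached capacity without touching the GC lock; on a miss it does the deed and caches the result.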

What do you think?


Andrei
Oct 19 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 I just wrote this to Sean and Walter and subsequently discussed it with
 Walter. Walter thinks this should work. Does anyone have the time and
 inclination to test this out? It would involve hacking into druntime's
 implementation of ~= (I'm not sure what the function name is). I'd
 really appreciate this; I'm overloaded as it is.
 ==================
 In wake of the recent demise of T[new], I was thinking of finding ways
 of making ~= efficient for T[].
 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a
 global lock and generally must figure out a lot of things about the
 pointer to make a decision.
 Also, ~= is dangerous because it allows slices to stomp over other slices.
 I was thinking of solving these issues by keeping an LRU (Least Recently
 Used) cache inside the implementation of ~=. The LRU would only have a
 few entries (4-8) and would store the parameters of the last ~= calls,
 and their cached capacities.
 So whenever code calls arr ~= b, the LRU is searched first. If the
 system finds "arr" (both bounds) in the LRU, that means the cached
 capacity is correct and can solve the matter without an actual trip to
 the GC at all! Otherwise, do the deed and cache the new slice and the
 new capacity.
 This also solves the lack of safety: if you request a growth on an array
 you just grew, it's impossible  to have a valid slice beyond that array.
 This LRU would allow us to keep the slice API as it currently is, and
 also at excellent efficiency.
 What do you think?
 Andrei

1. I assume the LRU cache would be thread-local, so no lock would be necessary?

2. I don't understand how this solves the safety problem:

// foo lives on the heap b/c we've idup'd it.
string foo = "This is only a test.".idup;
string bar = foo[0..4];
bar ~= " is _not ";
writeln(foo);  // prints "This is _not a test."

Having access to the capacity in an LRU cache doesn't help, if I understand it correctly.

3. I'm pretty familiar with these parts of druntime and would probably be willing to do the implementation after I understand a few details of it a little better.
Oct 19 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s 
 article
 I just wrote this to Sean and Walter and subsequently discussed it 
 with Walter. Walter thinks this should work. Does anyone have the 
 time and inclination to test this out? It would involve hacking 
 into druntime's implementation of ~= (I'm not sure what the 
 function name is). I'd really appreciate this; I'm overloaded as it
  is. ================== In wake of the recent demise of T[new], I 
 was thinking of finding ways of making ~= efficient for T[]. 
 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a
  global lock and generally must figure out a lot of things about
 the pointer to make a decision. Also, ~= is dangerous because it 
 allows slices to stomp over other slices. I was thinking of solving
  these issues by keeping an LRU (Least Recently Used) cache inside 
 the implementation of ~=. The LRU would only have a few entries 
 (4-8) and would store the parameters of the last ~= calls, and 
 their cached capacities. So whenever code calls arr ~= b, the LRU 
 is searched first. If the system finds "arr" (both bounds) in the 
 LRU, that means the cached capacity is correct and can solve the 
 matter without an actual trip to the GC at all! Otherwise, do the 
 deed and cache the new slice and the new capacity. This also solves
  the lack of safety: if you request a growth on an array you just 
 grew, it's impossible  to have a valid slice beyond that array. 
 This LRU would allow us to keep the slice API as it currently is, 
 and also at excellent efficiency. What do you think? Andrei

1. I assume the LRU cache would be thread-local, so no lock would be necessary?

Yes.
 2.  I don't understand how this solves the safety problem:

It's rather subtle. I'd figured it out in the morning and forgot it by the time I explained it to Walter.
 // foo lives on the heap b/c we've idup'd it. string foo = "This is 
 only a test.".idup; string bar = foo[0..4]; bar ~= " is _not "; 
 writeln(foo);  // prints "This is _not a test."

Upon the call to ~=, bar is not in the LRU cache so ~= conservatively reallocates it. As a rule of thumb (which takes care of the subtler issues): if a growing slice is not found in the LRU cache, it will always be conservatively reallocated. This is exactly because you don't know whether that slice was reduced from a larger slice.
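That rule of thumb can be sketched like so (a hypothetical C illustration, not druntime code; all names are invented). It is the check ~= would make after looking a slice up in the cache:

```c
#include <stdbool.h>
#include <stddef.h>

/* A cache entry remembers BOTH bounds of a last-grown slice plus the
   block's capacity; `hit` is NULL when the lookup missed. */
typedef struct { char *begin; char *end; size_t capacity; } Entry;

/* A slice may grow in place only on an exact two-bound hit with room
   left; any miss means "conservatively reallocate". */
static bool may_grow_in_place(const Entry *hit, char *begin, char *end,
                              size_t extra_bytes)
{
    if (hit == NULL || hit->begin != begin || hit->end != end)
        return false;  /* e.g. bar = foo[0..4]: right bound won't match */
    return end + extra_bytes <= hit->begin + hit->capacity;
}
```

In dsimcha's example, bar's right bound matches no cache entry, so bar ~= reallocates and foo is never stomped.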
 Having access to the capacity in an LRU cache doesn't help if I 
 understand it correctly.
 
 3.  I'm pretty familiar with these parts of druntime and would 
 probably be willing to do the implementation after I understand a few
  details of it a little better.

That would be awesome!!! Andrei
Oct 19 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 2.  I don't understand how this solves the safety problem:
 
 // foo lives on the heap b/c we've idup'd it.
 string foo = "This is only a test.".idup;
 string bar = foo[0..4];
 bar ~= " is _not ";
 writeln(foo);  // prints "This is _not a test."
 
 Having access to the capacity in an LRU cache doesn't help if I understand it
 correctly.

Let me stress harder a point I think I expressed poorly: the LRU cache stores the capacity of a given slice given _BOTH_ the slice's left and right bounds. If you later come with a slice that has only one correct bound, the LRU doesn't care about it. That's the safety tidbit.

Andrei
Oct 19 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 2.  I don't understand how this solves the safety problem:

 // foo lives on the heap b/c we've idup'd it.
 string foo = "This is only a test.".idup;
 string bar = foo[0..4];
 bar ~= " is _not ";
 writeln(foo);  // prints "This is _not a test."

 Having access to the capacity in an LRU cache doesn't help if I understand it
 correctly.

The LRU cache stores the capacity of a given slice given _BOTH_ the slice's left and right bounds. If you later come with a slice that has only one correct bound, the LRU doesn't care about it. That's the safety tidbit. Andrei

I think I get it now, although it's very conservative. One more question: Is this going to take the place of ArrayBuilder or be in addition? The LRU is a good hack to preserve syntactic elegance and ease of use, but it's somewhat of a kludge nonetheless, and I'd ideally still like to see a real ArrayBuilder with full array-like semantics if T[new] is definitely out.
Oct 19 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 2.  I don't understand how this solves the safety problem:

 // foo lives on the heap b/c we've idup'd it.
 string foo = "This is only a test.".idup;
 string bar = foo[0..4];
 bar ~= " is _not ";
 writeln(foo);  // prints "This is _not a test."

 Having access to the capacity in an LRU cache doesn't help if I understand it
 correctly.

The LRU cache stores the capacity of a given slice given _BOTH_ the slice's left and right bounds. If you later come with a slice that has only one correct bound, the LRU doesn't care about it. That's the safety tidbit. Andrei

I think I get it now, although it's very conservative.

That's why we need experimentation. My gut tells me that linear search will scan through 8 elements real fast, and if there's something in the cache it will almost always be in a top position. Also, I think the LRU will solve all cases that matter.
 One more question: Is this going to take the place of ArrayBuilder or be in addition? The LRU is a good hack to preserve syntactic elegance and ease of use, but it's somewhat of a kludge nonetheless, and I'd ideally still like to see a real ArrayBuilder with full array-like semantics if T[new] is definitely out.

Ideally we'd be able to render ArrayBuilder/Appender unnecessary. I think there is a need for a UniqueArray, but that's only loosely related. Andrei
Oct 19 2009
parent Fawzi Mohamed <fmohamed mac.com> writes:
On 2009-10-19 21:53:53 +0200, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 2.  I don't understand how this solves the safety problem:
 
 // foo lives on the heap b/c we've idup'd it.
 string foo = "This is only a test.".idup;
 string bar = foo[0..4];
 bar ~= " is _not ";
 writeln(foo);  // prints "This is _not a test."
 
 Having access to the capacity in an LRU cache doesn't help if I understand it
 correctly.

The LRU cache stores the capacity of a given slice given _BOTH_ the slice's left and right bounds. If you later come with a slice that has only one correct bound, the LRU doesn't care about it. That's the safety tidbit. Andrei

I think I get it now, although it's very conservative.

That's why we need experimentation. My gut tells me that linear search will scan through 8 elements real fast, and if there's something in the cache it will almost always be in a top position. Also, I think the LRU will solve all cases that matter.

yes you are probably right
 
 One more question: Is this going to take the place of ArrayBuilder or be in addition? The LRU is a good hack to preserve syntactic elegance and ease of use, but it's somewhat of a kludge nonetheless, and I'd ideally still like to see a real ArrayBuilder with full array-like semantics if T[new] is definitely out.

Ideally we'd be able to render ArrayBuilder/Appender unnecessary. I think there is a need for a UniqueArray, but that's only loosely related.

A library array knowing its capacity is still useful, I think.

Fawzi
Oct 19 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el 19 de octubre a las 13:51 me escribiste:
 I just wrote this to Sean and Walter and subsequently discussed it
 with Walter. Walter thinks this should work. Does anyone have the
 time and inclination to test this out? It would involve hacking into
 druntime's implementation of ~= (I'm not sure what the function name
 is). I'd really appreciate this; I'm overloaded as it is.
 
 ==================
 
 In wake of the recent demise of T[new], I was thinking of finding
 ways of making ~= efficient for T[].
 
 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a
 global lock and generally must figure out a lot of things about the
 pointer to make a decision.
 
 Also, ~= is dangerous because it allows slices to stomp over other slices.
 
 I was thinking of solving these issues by keeping an LRU (Least
 Recently Used) cache inside the implementation of ~=. The LRU would
 only have a few entries (4-8) and would store the parameters of the
 last ~= calls, and their cached capacities.
 
 So whenever code calls arr ~= b, the LRU is searched first. If the
 system finds "arr" (both bounds) in the LRU, that means the cached
 capacity is correct and can solve the matter without an actual trip
 to the GC at all! Otherwise, do the deed and cache the new slice and
 the new capacity.
 
 This also solves the lack of safety: if you request a growth on an
 array you just grew, it's impossible  to have a valid slice beyond
 that array.
 
 This LRU would allow us to keep the slice API as it currently is,
 and also at excellent efficiency.
 
 What do you think?

I think this is moving in the wrong direction. It's a patch, not a solution. And this makes dynamic arrays more coupled with the way the GC works.

I think the GC shouldn't even provide a sizeOf(void*); we should move in that direction, removing restrictions from the GC instead of enforcing the current ones, at least if the goal is to eventually have a better GC (a copying GC doesn't even have to store the size of the cell, for example).

I hope you eventually realize that you are complicating things much more than necessary.

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
ACCIDENTE FATAL EN FLORES, MUEREN DOS PERSONAS Y UN BOLIVIANO
	-- Crónica TV
Oct 19 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el 19 de octubre a las 13:51 me escribiste:
 I just wrote this to Sean and Walter and subsequently discussed it
 with Walter. Walter thinks this should work. Does anyone have the
 time and inclination to test this out? It would involve hacking into
 druntime's implementation of ~= (I'm not sure what the function name
 is). I'd really appreciate this; I'm overloaded as it is.

 ==================

 In wake of the recent demise of T[new], I was thinking of finding
 ways of making ~= efficient for T[].

 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a
 global lock and generally must figure out a lot of things about the
 pointer to make a decision.

 Also, ~= is dangerous because it allows slices to stomp over other slices.

 I was thinking of solving these issues by keeping an LRU (Least
 Recently Used) cache inside the implementation of ~=. The LRU would
 only have a few entries (4-8) and would store the parameters of the
 last ~= calls, and their cached capacities.

 So whenever code calls arr ~= b, the LRU is searched first. If the
 system finds "arr" (both bounds) in the LRU, that means the cached
 capacity is correct and can solve the matter without an actual trip
 to the GC at all! Otherwise, do the deed and cache the new slice and
 the new capacity.

 This also solves the lack of safety: if you request a growth on an
 array you just grew, it's impossible  to have a valid slice beyond
 that array.

 This LRU would allow us to keep the slice API as it currently is,
 and also at excellent efficiency.

 What do you think?

I think this is moving in the wrong direction. It's a patch, not a solution. And this makes dynamic arrays more coupled with the way the GC works. I think the GC shouldn't even provide a sizeOf(void*); we should move in that direction, removing restrictions from the GC instead of enforcing the current ones, at least if the goal is to eventually have a better GC (a copying GC doesn't even have to store the size of the cell, for example).

The LRU doesn't need GC.sizeOf. It could always conservatively reallocate whenever in doubt.
 I hope you eventually realize that you are complicating things much more
 than necessary.

I actually did a couple of days ago :o). Andrei
Oct 19 2009
prev sibling next sibling parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Andrei Alexandrescu wrote:
 I just wrote this to Sean and Walter and subsequently discussed it with 
 Walter. Walter thinks this should work. Does anyone have the time and 
 inclination to test this out? It would involve hacking into druntime's 
 implementation of ~= (I'm not sure what the function name is). I'd 
 really appreciate this; I'm overloaded as it is.
 
 ==================
 
 In wake of the recent demise of T[new], I was thinking of finding ways 
 of making ~= efficient for T[].
 
 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a 
 global lock and generally must figure out a lot of things about the 
 pointer to make a decision.
 
 Also, ~= is dangerous because it allows slices to stomp over other slices.
 
 I was thinking of solving these issues by keeping an LRU (Least Recently 
 Used) cache inside the implementation of ~=. The LRU would only have a 
 few entries (4-8) and would store the parameters of the last ~= calls, 
 and their cached capacities.
 
 So whenever code calls arr ~= b, the LRU is searched first. If the 
 system finds "arr" (both bounds) in the LRU, that means the cached 
 capacity is correct and can solve the matter without an actual trip to 
 the GC at all! Otherwise, do the deed and cache the new slice and the 
 new capacity.
 
 This also solves the lack of safety: if you request a growth on an array 
 you just grew, it's impossible  to have a valid slice beyond that array.
 
 This LRU would allow us to keep the slice API as it currently is, and 
 also at excellent efficiency.
 
 What do you think?
 
 
 Andrei

It's an interesting idea, and if I have time tonight I'll take a crack at it. For those others who may, the function you care about is "_d_arrayappendcT" in compiler/dmd/lifetime.

-- Chris Nicholson-Sauls
Oct 19 2009
prev sibling next sibling parent reply grauzone <none example.net> writes:
Andrei Alexandrescu wrote:
 I was thinking of solving these issues by keeping an LRU (Least Recently 
 Used) cache inside the implementation of ~=. The LRU would only have a 
 few entries (4-8) and would store the parameters of the last ~= calls, 
 and their cached capacities.

Sounds like a bad hack, but at least it would solve the issue about overwritten slices.

But for some uses of array appending, you'd have to use a library-based "Appender" (or whatever you have in Phobos 2):

class Something {
    T[] data;
    void add(T x) {
        data ~= x; // chances that data is in the cache are minimal
    }
}

There's no "cache locality" or anything, but obviously you still would like to have efficient appender behavior.

Also look at this:

string[] t;
t ~= somefunction();
t ~= someotherfunction();

Chances are high that those functions will remove "t" from the cache, and it would all be inefficient again. Nothing solved!

Now you could try to make the cache function-local to solve this issue. Whenever a function contains the ~= operator, the compiler allocates a "cache" struct, and the ~= operator passes a hidden pointer to it to the runtime. But that wouldn't work if you want to pass slices to other functions and append to them (but it would still be safe, I guess?).

Looks like these performance hacks just don't work out. It'd be so much simpler to make "a ~= b;" equivalent to "a = a ~ b;". Then there's no "mystery" about the performance of the ~= operator. You can explain to everyone: "yes, this allocates; if you want performance, use ArrayAppender". Hey, you could alias this ArrayAppender type to something like T[new] to make it feel more natural. But then, we're at the beginning again.
Oct 19 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
grauzone wrote:
 Andrei Alexandrescu wrote:
 I was thinking of solving these issues by keeping an LRU (Least 
 Recently Used) cache inside the implementation of ~=. The LRU would 
 only have a few entries (4-8) and would store the parameters of the 
 last ~= calls, and their cached capacities.

Sounds like a bad hack, but at least it would solve the issue about overwritten slices. But for some uses of array appending, you had to use a library-based "Appender" (or whatever you have in Phobos 2):

class Something {
    T[] data;
    void add(T x) {
        data ~= x; // chances that data are in the cache are minimal
    }
}

There's no "cache locality" or anything, but obviously you still would like to have efficient appender behavior.

Why is there no cache locality?
 Also look at this:
 
 string[] t;
 t ~= somefunction();
 t ~= someotherfunction();
 
 Chances are high that those functions will remove "t" from the cache, 
 and it would all be inefficient again. Nothing solved!

Why are there chances those functions will remove "t" from the cache? For it to become a performance problem, there must be a loop with nine or more round-robin appends. When that does happen, yes, appends will become slower. (We may be able to make that better by using random eviction.)
 Now you could try to make the cache function local to solve this issue. 
 Whenever a function contains the ~= operator, the compiler allocates a 
 "cache" struct, and the ~= operator passes a hidden pointer to it to the 
 runtime.
 
 But that wouldn't work if you want pass slices to other functions and to 
 append to them (but it would still be safe, I guess?).
 
 Looks like these performance hacks just don't work out.

Caches have a long and solid history of working. You'd have a very hard time convincing me that a cache that directly addresses a performance problem is a hack.
 It'd be so much 
 simpler to make "a~=b;" equivalent to "a=a~b;". Then there's no 
 "mystery" about the performance of the ~= operator. You can explain 
 everyone: "yes, this allocates; if you want performance, use 
 ArrayAppender". Hey, you could alias this ArrayAppender type to 
 something like T[new] to make it feel more natural. But then, we're at 
 the beginning again.

I think making ~= faster is worth pursuing. Andrei
Oct 19 2009
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 I just wrote this to Sean and Walter and subsequently discussed it with  
 Walter. Walter thinks this should work. Does anyone have the time and  
 inclination to test this out? It would involve hacking into druntime's  
 implementation of ~= (I'm not sure what the function name is). I'd  
 really appreciate this; I'm overloaded as it is.

 ==================

 In wake of the recent demise of T[new], I was thinking of finding ways  
 of making ~= efficient for T[].

 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a  
 global lock and generally must figure out a lot of things about the  
 pointer to make a decision.

 Also, ~= is dangerous because it allows slices to stomp over other  
 slices.

 I was thinking of solving these issues by keeping an LRU (Least Recently  
 Used) cache inside the implementation of ~=. The LRU would only have a  
 few entries (4-8) and would store the parameters of the last ~= calls,  
 and their cached capacities.

 So whenever code calls arr ~= b, the LRU is searched first. If the  
 system finds "arr" (both bounds) in the LRU, that means the cached  
 capacity is correct and can solve the matter without an actual trip to  
 the GC at all! Otherwise, do the deed and cache the new slice and the  
 new capacity.

 This also solves the lack of safety: if you request a growth on an array  
 you just grew, it's impossible  to have a valid slice beyond that array.

 This LRU would allow us to keep the slice API as it currently is, and  
 also at excellent efficiency.

 What do you think?

This is a very good idea. Incidentally, you only need the upper bound location; the beginning location is irrelevant, since you don't grow down.

What do you do in the case where the memory was recycled? Does a GC collection cycle clean out the cache as well?

This is better than my two previous ideas. The only drawback I see is if you have many threads doing appending, or you are appending more than 8 arrays at once in a round-robin fashion: you would lose all the benefit (although it shouldn't affect correctness). At that point, however, you'd have to ask yourself why you aren't using a specialized appender type or function.

In response to others' queries about how many LRUs to use, you'd probably want one per heap, and you'd want to lock/not lock based on whether the heap is thread-local or not.

-Steve
Oct 20 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 I just wrote this to Sean and Walter and subsequently discussed it 
 with Walter. Walter thinks this should work. Does anyone have the time 
 and inclination to test this out? It would involve hacking into 
 druntime's implementation of ~= (I'm not sure what the function name 
 is). I'd really appreciate this; I'm overloaded as it is.

 ==================

 In wake of the recent demise of T[new], I was thinking of finding ways 
 of making ~= efficient for T[].

 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a 
 global lock and generally must figure out a lot of things about the 
 pointer to make a decision.

 Also, ~= is dangerous because it allows slices to stomp over other 
 slices.

 I was thinking of solving these issues by keeping an LRU (Least 
 Recently Used) cache inside the implementation of ~=. The LRU would 
 only have a few entries (4-8) and would store the parameters of the 
 last ~= calls, and their cached capacities.

 So whenever code calls arr ~= b, the LRU is searched first. If the 
 system finds "arr" (both bounds) in the LRU, that means the cached 
 capacity is correct and can solve the matter without an actual trip to 
 the GC at all! Otherwise, do the deed and cache the new slice and the 
 new capacity.

 This also solves the lack of safety: if you request a growth on an 
 array you just grew, it's impossible  to have a valid slice beyond 
 that array.

 This LRU would allow us to keep the slice API as it currently is, and 
 also at excellent efficiency.

 What do you think?

This is a very good idea. Incidentally, you only need the upper bound location, the beginning location is irrelevant, since you don't grow down.

Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work safely with yours.
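A rough sketch of that refinement (hypothetical C, invented names, not druntime code): key the cache on the end pointer alone, so any slice sharing the block's current end, like a[50 .. $] above, still hits:

```c
#include <stddef.h>

enum { N = 8 };
static void  *cached_end[N];   /* end pointer of recently grown slices */
static size_t cached_cap[N];   /* bytes still available past that end  */
static int    n_used = 0, next_slot = 0;

/* Returns bytes growable in place past `end`, or 0 on a miss
   (meaning: conservatively reallocate). */
static size_t tail_capacity(void *end)
{
    for (int i = 0; i < n_used; i++)
        if (cached_end[i] == end)
            return cached_cap[i];
    return 0;
}

static void remember_end(void *end, size_t cap)
{
    cached_end[next_slot] = end;
    cached_cap[next_slot] = cap;
    next_slot = (next_slot + 1) % N;   /* simple rotation when full */
    if (n_used < N) n_used++;
}
```

Only a slice that ends somewhere in the middle of the block, and so could have live data past its end, misses and gets conservatively reallocated.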
 What do you do in the case where the memory was recycled?  Does a 
 GC collection cycle clean out the cache as well?

As you saw, there was some discussion about that as well.
 This is better than my two previous ideas.  The only drawback I see is 
 if you have many threads doing appending, or you are appending more than 
 8 arrays at once in a round-robin fashion, you would lose all the 
 benefit (although it shouldn't affect correctness).  At that point 
 however, you'd have to ask yourself why you aren't using a specialized 
 appender type or function.

Yah. As I suspect a lot of code is actually doing round-robin naturally, I'm considering using a random eviction strategy. That way performance will degrade more smoothly. A more advanced algorithm would use introspection to choose dynamically between LRU and random.

Andrei
Oct 20 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 So you want to synchronize the ~= function? I thought the LRU would be 
 thread local and therefore independent of these issues, as well as being 
 faster. And if the LRU isn't thread-local, then why not make it part of 
 the GC? It would both be more general and much simpler/cleaner to 
 implement.

I think the cache should be thread-local. Andrei
Oct 20 2009
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 20 Oct 2009 10:14:52 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:
 Steven Schveighoffer wrote:
 On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 I just wrote this to Sean and Walter and subsequently discussed it  
 with Walter. Walter thinks this should work. Does anyone have the time  
 and inclination to test this out? It would involve hacking into  
 druntime's implementation of ~= (I'm not sure what the function name  
 is). I'd really appreciate this; I'm overloaded as it is.

 ==================

 In wake of the recent demise of T[new], I was thinking of finding ways  
 of making ~= efficient for T[].

 Currently ~= is slow because accessing GC.sizeOf(void*) acquires a  
 global lock and generally must figure out a lot of things about the  
 pointer to make a decision.

 Also, ~= is dangerous because it allows slices to stomp over other  
 slices.

 I was thinking of solving these issues by keeping an LRU (Least  
 Recently Used) cache inside the implementation of ~=. The LRU would  
 only have a few entries (4-8) and would store the parameters of the  
 last ~= calls, and their cached capacities.

 So whenever code calls arr ~= b, the LRU is searched first. If the  
 system finds "arr" (both bounds) in the LRU, that means the cached  
 capacity is correct and can solve the matter without an actual trip to  
 the GC at all! Otherwise, do the deed and cache the new slice and the  
 new capacity.

 This also solves the lack of safety: if you request a growth on an  
 array you just grew, it's impossible  to have a valid slice beyond  
 that array.

 This LRU would allow us to keep the slice API as it currently is, and  
 also at excellent efficiency.

 What do you think?

 This is a very good idea. Incidentally, you only need the upper bound location, the beginning location is irrelevant, since you don't grow down.

Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work safely with yours.
 What do you do in the case where the memory was recycled?  Does a GC  
 collection cycle clean out the cache as well?

As you saw, there was some discussion about that as well.
 This is better than my two previous ideas.  The only drawback I see is  
 if you have many threads doing appending, or you are appending more  
 than 8 arrays at once in a round-robin fashion, you would lose all the  
 benefit (although it shouldn't affect correctness).  At that point  
 however, you'd have to ask yourself why you aren't using a specialized  
 appender type or function.

Yah. As I suspect a lot of code is actually doing round-robin naturally, I'm considering using a random eviction strategy. That way performance will degrade more smoothly. A more advanced algorithm would use introspection to choose dynamically between LRU and random.

Andrei

So you want to synchronize the ~= function? I thought the LRU would be thread local and therefore independent of these issues, as well as being faster. And if the LRU isn't thread-local, then why not make it part of the GC? It would both be more general and much simpler/cleaner to implement.
Oct 20 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 20 Oct 2009 10:37:40 -0400, Robert Jacques <sandford jhu.edu>  
wrote:

 So you want to synchronize the ~= function? I thought the LRU would be  
 thread local and therefore independent of these issues, as well as being  
 faster. And if the LRU isn't thread-local, then why not make it part of  
 the GC? It would both be more general and much simpler/cleaner to  
 implement.

quoting myself earlier: On Tue, 20 Oct 2009 09:58:01 -0400, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 In response to other's queries about how many LRUs to use, you'd  
 probably want one per heap, and you'd want to lock/not lock based on  
 whether the heap is thread local or not.

You need a locked operation in the case where the heap is shared; otherwise, you lose safety. At the moment all we *have* is a shared heap. So ~= is a synchronized operation until thread-local heaps are available.

I think the only logical place for the LRU is the GC; it makes no sense to have a shared LRU for an unshared GC or vice versa.

-Steve
Oct 20 2009
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 20 Oct 2009 10:14:52 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Steven Schveighoffer wrote:
  This is a very good idea.  Incidentally, you only need the upper bound  
 location, the beginning location is irrelevant, since you don't grow  
 down.

Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work safely with yours.

It was one of the coolest parts of my original proposal :) http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=63146 But using a cache solves a lot of the problems I didn't.
 What do you do in the case where the memory was recycled?  Does a GC  
 collection cycle clean out the cache as well?

As you saw, there was some discussion about that as well.

Yeah, I'm reading in thread order :) Still got 91 unread messages, so maybe I'll read all before replying again... -Steve
Oct 20 2009