
digitalmars.D.announce - LRUCache - a simple least recently used cache

reply Charles D Hixson <charleshixsn earthlink.net> writes:
In the hopes that someone finds this useful:
http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

This particular version presumes D2.x code, but I think the 
adjustments are trivial for D1.x.
Nov 03 2007
next sibling parent reply renoX <renosky free.fr> writes:
Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson
 
 This particular version presumes D2.x code, but I think the adjustments 
 are trivial for D1.x.

I think there is a flaw in your code: the access counter is a ushort (a 16-bit value) and is incremented each time you access an element in the cache, so it can overflow quickly and will wrap around. This means that recently used elements could end up with a smaller _nodeId than elements used less recently, in which case the overflow-handling code will drop the wrong elements. This may not happen in your particular usage, but it is either a bug or at least a significant usage restriction that should be clearly documented.

Regards,
renoX
Nov 04 2007
parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around.. This means that recently used elements could have a _nodeId smaller than nodes used less recently, which means that the overflow handling code will drop the wrong elements.. This may not be the case in your usage, but this is either a bug or at least a pretty big restriction in the usage which should be clearly indicated. Regards, renoX

Well, two things:

1) This code isn't really adapted for large sets of data. If you want that, you'd better hand-optimize it for your intended application.

2) Whenever the cache is cleared, all the cache nodeIds are decreased by the minimum nodeId (so the new minimum is zero...or one, I'd need to check again). This should decrease the rate at which the maximum nodeId grows.

However, I *did* take your suggestion about documenting it.

OTOH, it would be better if I could renumber the nodes sequentially, but doing that without a sort is difficult (i.e., I haven't figured out how).

A different organization would be to keep nodeIds in a queue, with pointers to the hash table, but that has other costs. Still, I SHOULD add a routine to renumber everything whenever the maxT value starts to get within maxSize of ushort.max. This routine will probably be introduced in some later version.

Any suggestions for a quick and simple way to do this? Would it be reasonable to arbitrarily renumber everything in the cache serially (i.e., without considering the current nodeIds) whenever the maximum nodeId started to approach ushort.max? (That's the direction I'm leaning in...but I haven't decided whether or not it's reasonable.)
Nov 04 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Charles D Hixson wrote:
 renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around..

1) This code isn't really adapted for large sets of data. If you want that, you'd better hand optimize it for your intended application. 2) Whenever the cache is cleared, all the cache nodeId's are decreased by the minimum nodeId (so the new minimum is zero...or one, I'd need to check again. This should decrease the rate at which the maximum nodeId grows.

I don't know why you chose to go with a short, but I have had situations in the past where using a short instead of an int had significantly worse performance. In my case it was a big array of shorts vs. a big array of ints, so probably there was extra alignment mojo that had to happen to fetch data from the short array into a register and then put it back. But anyway, CPUs seem to like working on 32-bit chunks better than 16-bit ones. You might try making it an int just to see how it performs.

--bb
Nov 04 2007
parent Charles D Hixson <charleshixsn earthlink.net> writes:
Bill Baxter wrote:
 Charles D Hixson wrote:
 renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around..

1) This code isn't really adapted for large sets of data. If you want that, you'd better hand optimize it for your intended application. 2) Whenever the cache is cleared, all the cache nodeId's are decreased by the minimum nodeId (so the new minimum is zero...or one, I'd need to check again. This should decrease the rate at which the maximum nodeId grows.

I don't know why you chose to go with a short, but I have had situations in the past where using a short instead of an int had significantly worse performance. In my case it was a big array of shorts vs a big array of ints, so probably there was extra alignment mojo that had to happen to fetch data from the short array into a register and then put it back. But anyway CPUs seem to like working on chunks of 32-bits better than 16. You might try making it an int just to see how it performs. --bb

The alternative I'd considered was a ubyte.... but that seemed like cutting things too close. (It's certainly easy enough to change it in the code, but that wouldn't be my preferred solution.)
Nov 04 2007
prev sibling next sibling parent reply renoX <renosky free.fr> writes:
Charles D Hixson a écrit :
 renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around.. This means that recently used elements could have a _nodeId smaller than nodes used less recently, which means that the overflow handling code will drop the wrong elements.. This may not be the case in your usage, but this is either a bug or at least a pretty big restriction in the usage which should be clearly indicated. Regards, renoX

Well, two things: 1) This code isn't really adapted for large sets of data. If you want that, you'd better hand optimize it for your intended application.

I'm not talking about a large set of data, just about a program which runs for a long time and/or makes heavy use of the data in the cache: each call to 'contains' or 'opIndex' increases maxNode, so it could overflow quite quickly (a ushort isn't that big).
 Any suggestions for a quick and simple way to do this?

Just make the access counter 32-bit; that should be enough for a 'normal' program (though not for an Apache-style one).
 Would it be reasonable to arbitrarily renumber everything in the cache 
 serially (i.e., without considering the current nodeIds) whenever the 
 maximum nodeId started to approach ushort.max?  (That's the direction 
 that I'm leaning in...but I haven't decided whether or not it's 
 reasonable.)

I'm not an expert in D, but I think that an array of struct Node { Key key; Body bdy; ushort _nodeId; } and an array of struct Node { Key key; Body bdy; uint _nodeId; } have exactly the same size due to padding. And maybe on x86-64 you could even use ulong without increasing the size of the array. Could someone confirm or deny? As I understand it, key and bdy are references, so they have the same size as pointers; am I right?

Regards,
renoX
Nov 04 2007
parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
renoX wrote:
 Charles D Hixson a écrit :
 renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around.. This means that recently used elements could have a _nodeId smaller than nodes used less recently, which means that the overflow handling code will drop the wrong elements.. This may not be the case in your usage, but this is either a bug or at least a pretty big restriction in the usage which should be clearly indicated. Regards, renoX

Well, two things: 1) This code isn't really adapted for large sets of data. If you want that, you'd better hand optimize it for your intended application.

I'm not talking about a large set of data, just about a program which runs for a long time and/or which make heavy usage of the data in the cache: each call to 'contains' or 'opIndex' increase maxNode so it could overflow quite quickly (an ushort isn't that big).
 Any suggestions for a quick and simple way to do this?

Just make the access counter 32bit, it should be enough for a 'normal' program (not an Apache style program though).
 Would it be reasonable to arbitrarily renumber everything in the cache 
 serially (i.e., without considering the current nodeIds) whenever the 
 maximum nodeId started to approach ushort.max?  (That's the direction 
 that I'm leaning in...but I haven't decided whether or not it's 
 reasonable.)

I'm not an expert in D's but I think that an array of struct Node{ Key key; Body bdy; ushort _nodeId; } or an array of struct Node{ Key key; Body bdy; uint _nodeId; } have exactly the same size due to the padding.. And maybe on x86-64, you could use ulong without increasing the size of the array.. Could someone confirm/infirm? For me, key and bdy are reference, so they have the same size as pointers, am I right? Regards, renoX

Key and Body aren't references in the test examples. OTOH, they easily COULD be. I've done a slight redesign (not yet posted) that should answer your objections (I decided to renumber when necessary, but I made NodeId a type, so it's easy for anyone to change if they want to). OTOH, Body MIGHT be rather large, so I feel that I need to split things up so that foreach won't necessarily copy the entire structure when only, e.g., the nodeId needs to be accessed.
Nov 04 2007
parent Charles D Hixson <charleshixsn earthlink.net> writes:
Charles D Hixson wrote:
 renoX wrote:
 Charles D Hixson a écrit :
 renoX wrote:
 Charles D Hixson a écrit :
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson

 This particular version presumes D2.x code, but I think the 
 adjustments are trivial for D1.x.

I think that there is a flaw in your code: the access counter is a ushort (16bit value) and is incremented each time you access an element in the cache: it can overflow quickly and will wrap around.. This means that recently used elements could have a _nodeId smaller than nodes used less recently, which means that the overflow handling code will drop the wrong elements.. This may not be the case in your usage, but this is either a bug or at least a pretty big restriction in the usage which should be clearly indicated. Regards, renoX

Well, two things: 1) This code isn't really adapted for large sets of data. If you want that, you'd better hand optimize it for your intended application.

I'm not talking about a large set of data, just about a program which runs for a long time and/or which make heavy usage of the data in the cache: each call to 'contains' or 'opIndex' increase maxNode so it could overflow quite quickly (an ushort isn't that big).
 Any suggestions for a quick and simple way to do this?

Just make the access counter 32bit, it should be enough for a 'normal' program (not an Apache style program though).
 Would it be reasonable to arbitrarily renumber everything in the 
 cache serially (i.e., without considering the current nodeIds) 
 whenever the maximum nodeId started to approach ushort.max?  (That's 
 the direction that I'm leaning in...but I haven't decided whether or 
 not it's reasonable.)

I'm not an expert in D's but I think that an array of struct Node{ Key key; Body bdy; ushort _nodeId; } or an array of struct Node{ Key key; Body bdy; uint _nodeId; } have exactly the same size due to the padding.. And maybe on x86-64, you could use ulong without increasing the size of the array.. Could someone confirm/infirm? For me, key and bdy are reference, so they have the same size as pointers, am I right? Regards, renoX

Key and Body aren't references in the examples. OTOH, they easily COULD be. I've done a slight redesign (not yet posted) that should answer your objections (I decided to renumber when necessary, but I made NodeId a type, so it's easy for anyone to change if they want to). OTOH, Body MIGHT be rather large, so I feel that I need to split things up so that foreach won't necessarily copy the entire structure when only, e.g., the nodeId needs to be accessed.

Version 0.2 has now been posted. I switched from using structures to using arrays. It's now on a page linked off my main page, because I wasn't able to get changes to the main page to stick. http://www.prowiki.org/wiki4d/wiki.cgi?LruCache I still haven't tested the renumbering...that will have to wait for something that uses it for an extended period of time, but the code is in there, if you need it. (It's also easy to change the NodeId type from ushort to ulong if that's your preference.)
Nov 04 2007
prev sibling parent reply janderson <askme me.com> writes:
Charles D Hixson wrote:
 
 However I *did* take your suggestion about documenting it.
 
 OTOH, it would be better if I could renumber the nodes sequentially, but 
 doing that without a sort is difficult (i.e., I haven't figured out how).
 
 A different organization would be to keep nodeIds in a queue, with 
 pointers to the hash table, but that has other costs. Still, I SHOULD 
 add a routine to renumber everything whenever the maxT value starts to 
 get withing maxSize of ushort.max. This routine will probably be 
 introduced at some later version.
 
 Any suggestions for a quick and simple way to do this?

A good form of documentation is to use contracts for this sort of thing. -Joel
Nov 07 2007
parent Charles D Hixson <charleshixsn earthlink.net> writes:
janderson wrote:
 Charles D Hixson wrote:
 However I *did* take your suggestion about documenting it.

 OTOH, it would be better if I could renumber the nodes sequentially, 
 but doing that without a sort is difficult (i.e., I haven't figured 
 out how).

 A different organization would be to keep nodeIds in a queue, with 
 pointers to the hash table, but that has other costs. Still, I SHOULD 
 add a routine to renumber everything whenever the maxT value starts to 
 get withing maxSize of ushort.max. This routine will probably be 
 introduced at some later version.

 Any suggestions for a quick and simple way to do this?

A good form of documentation is to use contracts for this sort of thing. -Joel

I don't see any application of contracts to this problem. Note that the maximum length is specified as a ushort, so it's guaranteed that there will always be enough allocatable nodeIds to hold all the entries in the list. (This *doesn't* mean that this would be a good idea. A cache that large would usually be a very bad idea.) Also, an LRUCache is allowed to drop entries on the grounds that they "haven't been used recently enough", so you can't depend on the thing that you stuck in there awhile ago to still be there.

The problem was finding a quick way of renumbering the entries that retained the time ordering. The current version gives up on retaining the time ordering: after a cache clear that leaves the highest node near the maximum nodeId value, it just renumbers everything by hashcode order. This should work well enough in most cases, but it would be better if I didn't need to lose the order of last access. I just haven't thought of a way to do that that's even close to efficient.

N.B.: While thinking about this I've just noticed a potential flaw. With frequent access and no additions, the maximum nodeId could increase to too large a number without invoking a cache compression, so I'd better move the check for renumbering to where the nodeId is assigned.
Nov 08 2007
prev sibling parent reply BCS <ao pathlink.com> writes:
Reply to Charles,

 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson
 This particular version presumes D2.x code, but I think the
 adjustments are trivial for D1.x.
 

Would you like to add this to scrapple? I will get you access if you would.
Nov 04 2007
next sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
BCS wrote:
 Reply to Charles,
 
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson
 This particular version presumes D2.x code, but I think the
 adjustments are trivial for D1.x.

Would you like to add this to scrapple? I will get you access if you would.

have any in mind right now.
Nov 04 2007
prev sibling parent reply Charles D Hixson <charleshixsn earthlink.net> writes:
BCS wrote:
 Reply to Charles,
 
 In the hopes that someone find this useful:
 http://www.prowiki.org/wiki4d/wiki.cgi?CharlesHixson
 This particular version presumes D2.x code, but I think the
 adjustments are trivial for D1.x.

Would you like to add this to scrapple? I will get you access if you would.

I seem to have made a few errors that I can't figure out how to undo.

1) I've attached a copy of the program to the main scrapple page. This should be deleted...but I don't know how.

2) I've attached a copy of the program named lrucache2.d to the lrucache page. This one should also be deleted. (I also attached the file with its correct name, but this didn't override the prior attachment with an incorrect name.)

FWIW, there is no significant difference between lrucache.d and lrucache2.d, except that in lrucache2.d the tabs have been expanded into spaces (expand -t3 lrucache.d > lrucache2.d). Even then, I believe that for the version that I entered, the tabs had already been converted to spaces. (I much prefer to work with tabs, but apparently many people either prefer spaces, or have a toolchain that doesn't deal nicely with tabs...so I intend to attach a version with spaces, even though that's not "the preferred form of the work for making modifications to it".)
Nov 10 2007
parent BCS <ao pathlink.com> writes:
Reply to Charles,

 I seem to have made a few errors that I can't figure out how
 to undo.
 1) I've attached a copy of the program to the main scrapple
 page.  This should be deleted...but I don't know how.
 2) I've attached a copy of the program named lrucache2.d to
 the lrucache page,  This one should also be deleted.  (I also
 attached the file with it's correct name, but this didn't
 override the prior attachment with an incorrect name.)
 FWIW, there is no significant difference between lrucache.d
 and lrucache2.d, except that in lrucache2.d the tabs have been
 expanded into spaces (expand -t3 lrucache.d > lrucache2.d).
 Even then, I believe that for the version that I entered the
 tabs had already been converted to spaces.
 (I much prefer to work with tabs, but apparently many people
 either prefer spaces, or have a toolchain that doesn't deal
 nicely with tabs...so I intend to be attaching a version with
 spaces, even though that's not "the preferred form of the work
 for making modifications to it".

With regards to the wiki, I don't know much about how things work. Ask around; I'm sure someone who knows more than I do can help you. As for the code, that should be put into the SVN repository rather than the wiki. I have added you to the user list for SVN, so you can now commit to it.
Nov 10 2007