
digitalmars.D - memory-mapped files

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Indeed, time and again, "testing is believing".

I tried a simple line splitting program in D with and without memory 
mapping against a 140MB file. The program just reads the entire file and 
does some simple string processing on it.

The loop pattern looks like this:

     foreach (line; byLineDirect(stdin))
     {
         auto r = splitter(line, "|||");
         write(r.head, ":");
         r.next;
         writeln(r.head);
     }

The byLineDirect returns a range that uses memory mapped files when 
possible, or simple fread calls otherwise.

The memory-mapped version takes 2.15 seconds on average. I was fighting 
against Perl's equivalent 2.45. At some point I decided to try without 
memory mapping and I consistently got 1.75 seconds. What the heck is 
going on? When does memory mapping actually help?


Andrei
Feb 17 2009
grauzone <none example.net> writes:
Could you post compilable versions of both approaches, so that we can 
test them ourselves?
I guess one would also need some input data.
Feb 17 2009
bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Indeed, time and again, "testing is believing".

Yep. Some time ago I read that the only science in "computer science" is in things like timing benchmarks and the like :-)
      foreach (line; byLineDirect(stdin))

I don't like that byLineDirect() name too much; it will become one of the most used functions in scripting-like programs, so it deserves to be short & easy.
          write(r.head, ":");

Something tells me that such .head will become so common in D programs that my fingers will learn to write it in my sleep :-)
          r.next;

.next is clear, nice, and short. Its only fault is that it doesn't sound much like something that has side effects... I presume it's not possible to improve this situation.
What the heck is going on? When does memory mapping actually help?<

You are scanning the file linearly, and the memory window you use is probably very small. In such a situation, memory mapping is probably not the best thing. Memory mapping is useful when, for example, you operate with random access on a wider sliding window over the file.

Bye,
bearophile
Feb 17 2009
Brad Roberts <braddr puremagic.com> writes:
bearophile wrote:
 What the heck is going on? When does memory mapping actually help?<

You are scanning the file linearly, and the memory window you use is probably very small. In such situation a memory mapping is probably not the best thing. A memory mapping is useful when you for example operate with random access on a wider sliding window on the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

Later,
Brad
Feb 17 2009
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Brad Roberts wrote:
 bearophile wrote:
  >> What the heck is going on? When does memory mapping actually help?<
 You are scanning the file linearly, and the memory window you use is
 probably very small. In such situation a memory mapping is probably
 not the best thing. A memory mapping is useful when you for example
 operate with random access on a wider sliding window on the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

Andrei
Feb 17 2009
Sean Kelly <sean invisibleduck.org> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Brad Roberts wrote:
 bearophile wrote:
  >> What the heck is going on? When does memory mapping actually help?<
 You are scanning the file linearly, and the memory window you use is
 probably very small. In such situation a memory mapping is probably
 not the best thing. A memory mapping is useful when you for example
 operate with random access on a wider sliding window on the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

If I had to guess, I'd say that the OS assumes every file will be read in a linear manner from front to back, and optimizes accordingly. There's no way of knowing how a memory-mapped file will be accessed, however, so no such optimization occurs.

Sean
Feb 18 2009
Benji Smith <dlanguage benjismith.net> writes:
Andrei Alexandrescu wrote:
 This all would make perfect sense if the performance was about the same 
 in the two cases. But in fact memory mapping introduced a large 
 *pessimization*. Why? I am supposedly copying less data and doing less work.

Pessimization? What a great word! I've never heard that before!

--benji
Feb 18 2009
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Benji Smith wrote:
 Andrei Alexandrescu wrote:
 This all would make perfect sense if the performance was about the 
 same in the two cases. But in fact memory mapping introduced a large 
 *pessimization*. Why? I am supposedly copying less data and doing less work.

Pessimization? What a great word! I've never heard that before! --benji

I first heard it from Scott Meyers.

Andrei
Feb 18 2009
Sergey Gromov <snake.scaly gmail.com> writes:
Wed, 18 Feb 2009 20:56:16 -0800, Andrei Alexandrescu wrote:

 Benji Smith wrote:
 Andrei Alexandrescu wrote:
 This all would make perfect sense if the performance was about the 
 same in the two cases. But in fact memory mapping introduced a large 
 *pessimization*. Why? I am supposedly copying less data and doing less work.

Pessimization? What a great word! I've never heard that before! --benji

I first heard it from Scott Meyers.

I've heard this term in connection with premature-optimization discussions. Premature optimization is investing time in improving something that doesn't really need to be improved; pessimization, on the other hand, is doing something that is easy to avoid and is almost guaranteed to slow you down, like using post-increment in C++.
Feb 19 2009
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Brad Roberts wrote:
 bearophile wrote:
  >> What the heck is going on? When does memory mapping actually help?<
 You are scanning the file linearly, and the memory window you use is
 probably very small. In such situation a memory mapping is probably
 not the best thing. A memory mapping is useful when you for example
 operate with random access on a wider sliding window on the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

Hey Brad,

Nice advice on madvise, didn't know about it. Just in case it might be useful to someone: trying madvise with any of the four possible policies did not yield any noticeable change in timing for my particular test.

Andrei
Feb 18 2009
Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Nice advice on madvise, didn't know about it. Just in case it might be 
 useful to someone, trying madvise with any of the four possible policies 
 did not yield any noticeable change in timing for my particular test.

If you can build 4 Windows executables, I can time them on my machine, and we can see if Windows behaves differently.
Feb 18 2009
Kagamin <spam here.lot> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Nice advice on madvise, didn't know about it. Just in case it might be 
 useful to someone, trying madvise with any of the four possible policies 
 did not yield any noticeable change in timing for my particular test.

If you can build 4 windows executables, I can time them on my machine, and we can see if windows behaves differently.

By default, Windows does random-access optimisation, simply sucking the file into the cache, which is faster (on XP) than sequential-access optimisation. It will behave quite well if all 400MB fit in your file cache.
Feb 19 2009
"Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Wed, 18 Feb 2009 06:22:17 +0200, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Brad Roberts wrote:
 bearophile wrote:
  >> What the heck is going on? When does memory mapping actually help?<
 You are scanning the file linearly, and the memory window you use is
 probably very small. In such situation a memory mapping is probably
 not the best thing. A memory mapping is useful when you for example
 operate with random access on a wider sliding window on the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

Perhaps this may help: http://en.wikipedia.org/wiki/Memory-mapped_file#Drawbacks

-- 
Best regards,
 Vladimir                          mailto:thecybershadow gmail.com
Feb 17 2009
"Lionello Lunesu" <lionello lunesu.remove.com> writes:
 The memory-mapped version takes 2.15 seconds on average. I was fighting 
 against Perl's equivalent 2.45. At some point I decided to try without 
 memory mapping and I consistently got 1.75 seconds. What the heck is going 
 on? When does memory mapping actually help?

Random seeking in large files :)

Sequential reads can't possibly gain anything by using MM, because that's what the OS will end up doing anyway; but MM goes through the paging system, which has some overhead (a page fault has quite a penalty, or so I've heard).

I use std.mmfile for a simple DB implementation, where the DB file is just a large (>1GB) array of structs, conveniently accessible as a struct[] in D. (The primary key is the index, of course.)

L.
Feb 17 2009
BCS <none anon.com> writes:
Hello Lionello,

 The memory-mapped version takes 2.15 seconds on average. I was
 fighting against Perl's equivalent 2.45. At some point I decided to
 try without memory mapping and I consistently got 1.75 seconds. What
 the heck is going on? When does memory mapping actually help?
 

Sequential read can't possibly gain anything by using MM because that's what the OS will end up doing, but MM is using the paging system, which has some overhead (a page fault has quite a penalty, or so I've heard.)

Paging is going to be built to move data in the fastest possible way, so it would be expected that using MM would be fast. The only things I see getting in the way would be 1) it uses up lots of address space, and 2) you might be able to lump reads, or hint to the OS to pre-load the file, when you load it other ways.

It would be neat to see what happens if you MM a file and force page faults on the whole thing right up front (IIRC there is an asm op that forces a page fault but doesn't wait for it). Even better might be to force a page fault N pages ahead of where you are processing.
Feb 17 2009
Kagamin <spam here.lot> writes:
Maybe the mm scheme results in more calls to the HDD?
Feb 18 2009