Is there better performance copying an entire file to memory than using file mapping? I know file mapping is good for being memory efficient, but what about speed?
Posted on 2002-02-06 23:47:23 by Quantum
I asked myself the same thing.
I ran a quick test using the "ReadFile" (free) utility by Gilles Vollant (the WinImage author) with a "big" (16 MB) file and, if I remember correctly, a 64 KB memory window.
The mapped version came out slower than plain ReadFile, by about 200 KB/s I think, but it was on an old machine and I didn't try many switches, so I recommend you run the test yourself using this utility ( ).
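A minimal timing sketch of that comparison, for anyone who wants to rerun it. Assumptions: a POSIX system, with `mmap()` standing in for the Win32 `CreateFileMapping()`/`MapViewOfFile()` pair and `read()` for `ReadFile()`; the checksum is just a way to force every byte to be touched.

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* checksum of the whole file, read()ing it into one big buffer
 * (the "copy the entire file to memory" approach) */
unsigned long read_all(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    unsigned long sum = 0;
    if (fd < 0 || fstat(fd, &st) != 0) { if (fd >= 0) close(fd); return 0; }
    unsigned char *buf = malloc((size_t)st.st_size);
    ssize_t got = 0, n;
    while (got < st.st_size && (n = read(fd, buf + got, (size_t)(st.st_size - got))) > 0)
        got += n;
    for (ssize_t i = 0; i < got; i++)
        sum += buf[i];
    free(buf);
    close(fd);
    return sum;
}

/* the same checksum, but through a file mapping */
unsigned long map_all(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    unsigned long sum = 0;
    if (fd < 0 || fstat(fd, &st) != 0) { if (fd >= 0) close(fd); return 0; }
    unsigned char *view = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (view != MAP_FAILED) {
        for (off_t i = 0; i < st.st_size; i++)
            sum += view[i];
        munmap(view, (size_t)st.st_size);
    }
    close(fd);
    return sum;
}

/* CPU time of one run, in milliseconds */
double time_ms(unsigned long (*fn)(const char *), const char *path)
{
    clock_t t0 = clock();
    fn(path);
    return 1000.0 * (clock() - t0) / CLOCKS_PER_SEC;
}
```

Call `time_ms(read_all, path)` and `time_ms(map_all, path)` on the same big file and compare; as noted above, results vary a lot with machine, file size, and cache state, so run it more than once.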

To me, allocating a memory block the size of the whole file sucks... (what if your file is 600 MB?)

I prefer to read chunks and analyse them separately (not that slow), but sometimes you can't (searching for a string, for example), and file mapping is very convenient in those cases.
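A sketch of that chunked approach, including the awkward case mentioned: searching for a string that may straddle a chunk boundary. The trick is to carry the last `pattern length - 1` bytes of each block over to the front of the next one. The 64 KB chunk size and function names are illustrative, and `strstr()` limits this sketch to text data (it stops at a NUL byte).

```c
#include <stdio.h>
#include <string.h>

/* returns 1 if `pattern` occurs anywhere in the file, 0 otherwise */
int chunked_search(const char *path, const char *pattern)
{
    enum { CHUNK = 64 * 1024 };
    size_t plen = strlen(pattern);
    char buf[CHUNK + 256];            /* room for the carried-over tail */
    size_t carry = 0;
    FILE *f = fopen(path, "rb");
    if (!f || plen == 0 || plen > 256) { if (f) fclose(f); return 0; }

    for (;;) {
        size_t n = fread(buf + carry, 1, CHUNK, f);
        if (n == 0) break;
        size_t total = carry + n;
        buf[total] = '\0';            /* NB: only valid for text data */
        if (strstr(buf, pattern)) { fclose(f); return 1; }
        /* keep the tail so a match spanning two chunks is not lost */
        carry = plen - 1;
        if (carry > total) carry = total;
        memmove(buf, buf + total - carry, carry);
    }
    fclose(f);
    return 0;
}
```

With a mapping you would just `strstr()` (or scan) the whole view in one go, which is why mapping is so convenient for this kind of job.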
Posted on 2002-02-07 01:04:44 by JCP
>To me, the way of allocating a memory block corresponding to the file size sucks... (what if your file is 600 MB big ?)

No problem with ReadFile + an ISO image, 634 MB! (1.5 GB RAM) :grin:
Posted on 2002-02-07 01:21:29 by bazik
There is one trap with file mappings: files larger than about 600 MB (e.g. 1 GB .vob files) cannot be mapped.

Maybe the limit is the configured swap file size.
Posted on 2002-02-07 03:35:50 by beaster

The speed a mapping offers is generally good. I recommend using mappings for medium-sized files (e.g. 100 KB - xx MB). Using ReadFile() instead of a mapping does not bring a performance boost.
But there are restrictions, as Beaster stated. Mapping an entire large file (a 1 GB VOB) does not work. Maybe no contiguous block of address space that size is available, because of the funny Win memory management. You can map a file partially, but if you want to do that, ReadFile() seems to be more efficient.
BTW, if your access to the file is not sequential, because of many jumps in it, the mapping works really well.

Bye Miracle
Posted on 2002-02-07 03:47:09 by miracle
I personally had not heard of the limit in file mapping because I have never worked on disk files that large, but there is a simple way around it: just open the file, use a file pointer, and read in whatever buffer size you like.

For local disk access I see no point in using a buffer smaller than 1 MB; it should run even on old machines with less memory, and it will certainly be faster.
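A sketch of that suggestion: open the file normally, position a file pointer, and pull the data through a buffer of your choosing (1 MB here, per the post). `fseek()`/`fread()` stand in for the Win32 `SetFilePointer()`/`ReadFile()` pair; the function name is made up for illustration.

```c
#include <stdio.h>

/* reads `count` bytes starting at `offset` into `dest`;
 * returns the number of bytes actually copied */
size_t read_region(const char *path, long offset, char *dest, size_t count)
{
    enum { BUFSIZE = 1024 * 1024 };   /* 1 MB buffer, regardless of file size */
    FILE *f = fopen(path, "rb");
    size_t done = 0;
    if (!f) return 0;
    if (fseek(f, offset, SEEK_SET) == 0) {
        while (done < count) {
            size_t want = count - done;
            if (want > BUFSIZE) want = BUFSIZE;
            size_t n = fread(dest + done, 1, want, f);
            if (n == 0) break;        /* EOF or error */
            done += n;
        }
    }
    fclose(f);
    return done;
}
```

Because each call only ever holds one buffer's worth of data, this works on files far bigger than the address space, which is exactly the way around the mapping limit.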

Posted on 2002-02-07 04:44:25 by hutch--
According to a quick glance, the max size is 2^64-1:
HANDLE CreateFileMapping(
  HANDLE hFile,                                  // handle of file to map
  LPSECURITY_ATTRIBUTES lpFileMappingAttributes, // optional security attributes
  DWORD flProtect,                               // protection for mapping object
  DWORD dwMaximumSizeHigh,                       // high-order 32 bits of object size
  DWORD dwMaximumSizeLow,                        // low-order 32 bits of object size
  LPCTSTR lpName                                 // name of file-mapping object
);
Posted on 2002-02-09 03:08:41 by eet_1024
The max size is 2^64-1!

That would be fine, but it can't be true in practice, because there are only 4 GB of address space the file can be mapped into, and the upper 2 GB of that are reserved for system use on Win 9x. In addition, a lot of this address space is used for DLL mappings etc., and that's why you can't map extra-large files.

Try to map an entire VOB and it will fail, at least under NT/95/98/ME.

Bye Miracle

BTW, of course you can create a mapping that large, but MapViewOfFile() will only work with smaller pieces.
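A sketch of those "smaller pieces": keep the mapping object for the whole file, but map a window, process it, unmap, and slide on. POSIX `mmap()`/`munmap()` stand in here for `MapViewOfFile()`/`UnmapViewOfFile()`; on Windows the view offset must be a multiple of the allocation granularity (usually 64 KB), while here it must be page-aligned, which the 1 MB step satisfies in both cases. The function name is made up for illustration.

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* sums every byte of the file through a sliding 1 MB window,
 * so only one window's worth of address space is used at a time */
unsigned long sum_via_windows(const char *path)
{
    const size_t WINDOW = 1024 * 1024;
    unsigned long sum = 0;
    int fd = open(path, O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0) { if (fd >= 0) close(fd); return 0; }

    for (off_t off = 0; off < st.st_size; off += (off_t)WINDOW) {
        size_t len = (size_t)(st.st_size - off);
        if (len > WINDOW) len = WINDOW;
        unsigned char *view = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (view == MAP_FAILED) break;
        for (size_t i = 0; i < len; i++)
            sum += view[i];
        munmap(view, len);            /* release before the next window */
    }
    close(fd);
    return sum;
}
```

This is how a multi-gigabyte VOB can still be processed through mappings even though a single view of the whole file would never fit in a 4 GB (really ~2 GB usable) address space.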
Posted on 2002-02-11 03:36:40 by miracle