I'm currently mapping a large 40 MB file into memory using the file-mapping API. I thought it would be faster and more memory-efficient than just reading from the file. However, as I scan through the memory map, I notice that memory consumption keeps increasing, and by the time I reach the end of the scan, the entire size of the file has been committed to memory. Does anyone have links explaining how memory mapping works and how Windows releases the resources afterwards? I'd also like to know whether mapping one large block at a time would be better than mapping the whole file.

Posted on 2004-08-17 09:36:30 by rorra
moocow, Windows releases the memory "when it needs to". Even if you used normal ReadFile/WriteFile, Windows would still cache the file, and release the cache memory "when it needs to".

If you're processing the file sequentially, you might as well use normal ReadFile (and specify FILE_FLAG_SEQUENTIAL_SCAN on CreateFile, just for the heck of it). The main advantage of file mapping is that it makes algorithm design easier, and for random access (i.e., seeking back and forth in a file and doing reads/writes) it should be faster.
Posted on 2004-08-17 10:06:46 by f0dder