Hi,

Up until recently I always used the GlobalAlloc/GlobalFree etc. APIs, but lately I've started trying the Heap APIs. I wrote some code that I began testing by trying to get it to crash (maybe not the best testing method, but I've found that it works). With the Globalxxxx APIs, if you allocate some memory, free it, and then try to write to it, you should, and do, get an error:


.386
.MODEL FLAT, STDCALL
OPTION casemap:none

include \masm32\include\windows.inc
include \masm32\include\kernel32.inc

includelib \masm32\lib\kernel32.lib

.data?

pMem DWORD ?

.code
start:

invoke GlobalAlloc, GMEM_FIXED, 1000h
mov pMem, eax
invoke GlobalFree, eax
mov eax, pMem
mov [eax], edx ; <- This causes the following error (or something like it)
;
; The instruction at "0x77f58469" referenced memory at "0xfffffff8".
; The memory could not be "read".

invoke ExitProcess, 0

end start

But when I tried what I can only assume is the Heap equivalent (if not the same thing), it does not produce any error:


.386
.MODEL FLAT, STDCALL
OPTION casemap:none

include \masm32\include\windows.inc
include \masm32\include\kernel32.inc

includelib \masm32\lib\kernel32.lib

.data?

pMem DWORD ?
hHeap DWORD ?

.code
start:

invoke GetProcessHeap
mov hHeap, eax
invoke HeapAlloc, hHeap, NULL, 1000h
mov pMem, eax
invoke HeapFree, hHeap, NULL, eax
mov eax, pMem
mov [eax], edx ; <- But this does NOT cause any error

invoke ExitProcess, 0

end start

Is there any reason that I've missed as to why this doesn't cause any error?

(I'm running XP SP1)

Thanks,
Ossa
Posted on 2004-02-15 05:27:34 by Ossa
Windows has a very complex memory management structure, largely because of the pagefile. Of course, few of us have a full 4 GB of physical memory, yet we all address and use 4 GB of virtual memory. How long a page stays in actual physical memory is decided by the memory manager based on several variables, such as when it was last used, how frequently it is used, and the system timer itself.

I can tell you that when GlobalAlloc memory is freed, it is not erased or written over right away. I have personally tracked the memory and closed the file, and with a device driver I could still read and write to it. If I re-open the file I can see the changes that were made with the driver. That means Windows maintains a pointer to the memory and, I guess for the sake of speed, does not re-load the file from disk but keeps a cache telling it to load from the same memory location.
Posted on 2004-02-15 06:45:18 by mrgone
Ossa, I use this trick :) when an object needs to be locked/unlocked (in multithreaded apps), I place two status bytes at the very beginning of the object structure:


MyStruct struct
IsDead db ? ; means "always fail at locking"
IsLocked db ? ; means "ok, wait max 4 seconds"
;... more
MyStruct ends

I don't remember my research results on this, but after some time you couldn't access more than 20 bytes of the freed memory (or was it 8 bytes?). I guess I'll have to redo the experiments with this.
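For illustration, a rough sketch of what the locking side might look like (hypothetical code, not my exact implementation; it assumes a 10 ms Sleep-based retry loop for the "wait max 4 seconds" part):


TryLock proc pObj:DWORD
mov edx, pObj
mov ecx, 400 ; ~4 seconds at 10 ms per attempt
@@:
cmp [edx].MyStruct.IsDead, 0
jne fail ; dead object: always fail at locking
mov al, 1
xchg al, [edx].MyStruct.IsLocked ; atomic test-and-set (xchg locks the bus)
test al, al
jz got_it ; it was 0, so the lock is now ours
push ecx
push edx
invoke Sleep, 10
pop edx
pop ecx
dec ecx
jnz @B
fail:
xor eax, eax ; FALSE - could not lock
ret
got_it:
mov eax, 1 ; TRUE - locked
ret
TryLock endp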
Posted on 2004-02-15 15:47:44 by Ultrano
Hi :)
AFAIK, the heap allocation and freeing functions do not operate on memory directly. A heap is actually a block of allocated memory that is divided into small fragments. When you call HeapAlloc or HeapFree, you are simply marking these fragments as used or unused. That's why you can still write to heap memory after freeing it. I'm not too familiar with the details of the heap's implementation, but I guess you can end up messing up its internal structures by writing where you should not, so try to avoid it.
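A quick sketch of how you might watch this happen, using the same skeleton as Ossa's code above (HeapValidate is documented; whether this particular write actually trashes a heap structure is just my guess):


invoke GetProcessHeap
mov hHeap, eax
invoke HeapAlloc, hHeap, 0, 1000h
mov pMem, eax
invoke HeapFree, hHeap, 0, eax
invoke HeapValidate, hHeap, 0, NULL ; eax = TRUE, heap still consistent
mov eax, pMem
mov dword ptr [eax], 0DEADBEEFh ; write after free: no fault, but...
invoke HeapValidate, hHeap, 0, NULL ; ...eax may now be FALSE if we hit the free-list links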
Posted on 2004-02-17 11:29:10 by QvasiModo
allocated... reserved or reserved+committed though? ^_^
Global/LocalAlloc ends up calling HeapAlloc on NT anyway - with a couple of undocumented flags, though. This might result in allocating from a different part of the heap, and perhaps flagging that the memory pages should be de-committed sometime after free?

This is speculation, though, about internal behaviour that you shouldn't depend on, as it could change overnight with a service pack. It's a valuable debugging tool, but you might as well implement a fully-fledged memory tracking system, perhaps relying on VirtualAlloc so you can be 100% sure your memory blocks will be decommitted and will flag an exception on further referencing.
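A sketch of the VirtualAlloc approach (same skeleton as Ossa's code above; with MEM_RELEASE the whole region is unmapped, so any later access should raise an access violation):


invoke VirtualAlloc, NULL, 1000h, MEM_RESERVE or MEM_COMMIT, PAGE_READWRITE
mov pMem, eax
invoke VirtualFree, pMem, 0, MEM_RELEASE ; size must be 0 when releasing
mov eax, pMem
mov dword ptr [eax], 0 ; access violation, guaranteed - the pages are gone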
Posted on 2004-02-17 11:43:56 by f0dder
It could also be that you're writing to a page that contains allocated memory, even though the part you're writing to is unallocated.
Memory protection works on a per-page level (a page is 4k of memory in the regular case), so if some of the page is in use, all of the page has to be readable and writable.
Therefore not all unallocated memory will cause a page fault when touched. In fact, I had a buffer overflow in one of my programs, and I never found out until I ported it to another OS.
Remember: working code != bug-free code!
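You can see the page granularity directly with VirtualAlloc (a sketch, same skeleton as Ossa's code above; it assumes the usual 4k page size):


invoke VirtualAlloc, NULL, 1, MEM_RESERVE or MEM_COMMIT, PAGE_READWRITE
mov ebx, eax ; we asked for 1 byte but got a whole page
mov byte ptr [ebx+0FFFh], 0 ; last byte of the page: no fault
mov byte ptr [ebx+1000h], 0 ; one byte past the page: access violation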
Posted on 2004-02-17 11:50:09 by Henk-Jan

allocated... reserved or reserved+committed though? ^_^

Reserved+committed, I guess :)

Global/LocalAlloc ends up calling HeapAlloc on NT anyway - with a couple of undocumented flags, though. This might result in allocating from a different part of the heap, and perhaps flagging that the memory pages should be de-committed sometime after free?

This is speculation, though, about internal behaviour that you shouldn't depend on, as it could change overnight with a service pack. It's a valuable debugging tool, but you might as well implement a fully-fledged memory tracking system, perhaps relying on VirtualAlloc so you can be 100% sure your memory blocks will be decommitted and will flag an exception on further referencing.

From my experience in Win98 (though I never did any serious research on it :( ), the heap reserves and commits pages, but they never get decommitted and released until the heap is destroyed. I also recall reading somewhere in win32.hlp that growable heaps never decrease in available size. This could mean that memory is never released (though it could be decommitted).
I guess the best way to find out is disassembling some system libraries, which I'm not going to do anyway :rolleyes: so who cares? :grin:
Posted on 2004-02-17 13:18:48 by QvasiModo

It could also be that you're writing to a page that contains allocated memory, even though the part you're writing to is unallocated.
Memory protection works on a per-page level (a page is 4k of memory in the regular case), so if some of the page is in use, all of the page has to be readable and writable.
Therefore not all unallocated memory will cause a page fault when touched. In fact, I had a buffer overflow in one of my programs, and I never found out until I ported it to another OS.
Remember: working code != bug-free code!

That makes a lot of sense.

An unrelated question: is the page size always 4k on Intel platforms? In other words, do I really have to get the page size using the APIs, or can I simply assume it's 4k to speed things up?
Posted on 2004-02-17 13:20:59 by QvasiModo
Could be that the heap isn't decreased automatically - but what if you call HeapCompact()? Also, if your program is going to enter an idle state for a longer period of time, calling SetProcessWorkingSetSize(GetCurrentProcess(), -1, -1); might be a nice idea, as it trims the process working set size. I wouldn't do this too often, though, as it might discard stuff that would then need to be reloaded later, or (even worse) page out stuff to the pagefile. But calling it after application initialization and when your application is going to idle for a while - that might be a nice move.
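In MASM that would be something like this (a sketch; HeapCompact returns the size of the largest committed free block, and -1 for both working set sizes means "trim"):


invoke GetProcessHeap
invoke HeapCompact, eax, 0 ; coalesce free blocks, possibly decommit pages
invoke GetCurrentProcess
invoke SetProcessWorkingSetSize, eax, -1, -1 ; trim the process working set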

The page size can be 4k, 2M or 4M on IA32. Dunno when you'd need to know the page size explicitly to speed things up, though?
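If you'd rather query it than assume 4k, it's a single call (a sketch; SYSTEM_INFO is defined in windows.inc):


.data?
sysinfo SYSTEM_INFO <>

.code
invoke GetSystemInfo, addr sysinfo
mov eax, sysinfo.dwPageSize ; 1000h (4k) on a typical IA32 box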
Posted on 2004-02-17 13:48:49 by f0dder

The page size can be 4k, 2M or 4M on IA32. Dunno when you'd need to know the page size explicitly to speed things up, though?

I'm writing a heap replacement with FunkyMeister. It's going to be focused on listview controls, and we're using VirtualAlloc. It might be nice to hardcode the page size rather than having to obtain it and place it in a variable (we could use shifts instead of multiplies, and so on), but it's not a significant optimization anyway. :notsure:
Posted on 2004-02-17 13:53:11 by QvasiModo

Could be that the heap isn't decreased automatically - but what if you call HeapCompact

I have no idea man, I'm just guessing as always, you should know me by now :grin:
Posted on 2004-02-17 13:56:11 by QvasiModo
Hmm, well, optimizing the memory usage patterns of the memory manager should be a lot more important than replacing mul with shift - so I guess it's nice to play it safe? I'd probably play around with both, though, testing speed and code simplicity and such. I think it's unlikely that VirtualAlloc will operate on anything but 4k pages on IA32, but then again... per-page execute permission bits are included in AMD64 in PAE mode (also for 32-bit code, right?) - so things *do* change.
Posted on 2004-02-17 13:58:22 by f0dder
I think it's unlikely that VirtualAlloc will operate on anything but 4k pages on IA32, but then again...


Some Windows versions support PAE; I wonder what VirtualAlloc will do when that is enabled...
Posted on 2004-02-17 14:20:25 by Henk-Jan
You need to flag your executable as large-address aware with the linker, and you need to use a special API subset (AWE, I think?) to use the large memory - well, at least to access a lot of memory at once in *one* application. VirtualAlloc might be churning out 4 MB chunks in the page table, but I guess this should be "mostly" transparent to user code?
Posted on 2004-02-17 15:17:48 by f0dder