If you allocate heap memory you can ask for the actual size of the
allocated memory block, as the programmer's manual says:

The HeapSize function returns the size, in bytes, of a memory block 
allocated from a heap by the HeapAlloc or HeapReAlloc function.

DWORD HeapSize(
    HANDLE hHeap,    // handle to the heap
    DWORD dwFlags,   // heap size control flags
    LPCVOID lpMem    // pointer to memory to return size for
);

Is there a way to check the biggest size that could POSSIBLY be allocated?
That is: before you have actually allocated the memory.

Friendly regards,
Posted on 2006-07-20 13:35:43 by mdevries
Not that I know of.

And what do you mean, exactly, anyway? The largest block of memory that you can grab with HeapAlloc(), or the largest rounding-factor that might be applied to an allocation?
Posted on 2006-07-20 14:45:28 by f0dder
Hi F0dder,

This is what my documentation says about the absolute limitation on size of the allocatable memory block:

In addition, if dwMaximumSize is nonzero, the heap cannot grow,
and an absolute limitation arises: the maximum size of a memory block
in the heap is a bit less than 0x7FFF8 bytes.
Requests to allocate larger blocks will fail, even if the maximum size
of the heap is large enough to contain the block.
If dwMaximumSize is zero, it specifies that the heap is growable.

The heap's size is limited only by available memory. Requests to allocate
blocks larger than 0x7FFF8 bytes do not automatically fail; the system
calls VirtualAlloc to obtain the memory needed for such large blocks.
Applications that need to allocate large memory blocks should set
dwMaximumSize to zero.

So there is an absolute maximum you can ask for.
If you ask for more you have a problem.
But what if there is less memory left than this maximum?
I want to know the largest allocatable memory block in that situation.

Friendly regards,
Posted on 2006-07-20 15:13:11 by mdevries
That's pretty hard to tell, actually, if not impossible.

Even if there were an API to query the current maximum you could allocate, some other thread or application could allocate that memory before you get the chance to. The largest allocatable block also depends on how fragmented your heap is; HeapCompact() might help, but don't depend on it.

Basically, I tend to use the heap for "don't care" allocations: allocations that should never fail on a normal system under normal circumstances. If I need big allocations I tend to use VirtualAlloc instead.

If you want to "grab the largest block of memory you can" (if you're, for instance, writing your own cache manager), I would suggest VirtualAlloc, and grabbing some percentage of physical memory (possibly user tweakable).
Posted on 2006-07-20 15:23:36 by f0dder
Strangely, I've had situations where a larger allocation succeeds but a smaller allocation fails.

I developed a 'three strikes and you're out' approach to HeapAlloc.
Try to allocate X bytes.
If that fails, defragment the heap (HeapCompact) and try again.
If that fails, try to allocate a LARGER block.
If that fails, GIVE UP.

It seems VERY strange to me that a request for a small block fails but a subsequent request for MORE succeeds, but that's what happens (most of the time).
I can only conclude that the Win memory manager must have some kind of mechanism to AVOID heap fragmentation in the first place, otherwise the smaller initial request would have been served.

Interesting, no?
Posted on 2006-07-21 06:34:36 by Homer
Homer, remember SmAlloc (SmallAlloc) o_O ? (although I myself don't have enough experience using it).

mdevries, get the 3-4 dwords before the memory you just allocated - you'll see the inherent chunk-size limitation. I think the highest 3 bits of one of the dwords were used for sys-info, and the lower 29 bits for the size of this chunk. Thus, one heap tops out around 512MB.

Do some experiments, and always look at those dwords. (I'd run them myself if I could right now - it's interesting.)
Posted on 2006-07-22 14:51:50 by Ultrano