I wrote a test program whose function is to search all the files in a chosen directory and its sub-directories and to fill the files with zeros. It works well, but it is inefficient when filling large files (100 MB or larger). I suppose the search snippet (see below) is OK, so... how can I set the NumberOfBytesToWrite in WriteFile to make it efficient, or how can I call WriteFile differently (with different parameters) according to the file size?
szSize    equ  1048576     ; this gives a compile error, but 1024 is OK
szZero    BYTE    szSize  dup  (?)
invoke  WriteFile,hFile,addr szZero,szSize,addr BytesWritten,0


        invoke  FindFirstFile,addr @szSearch,addr @stFindFile
        .if eax != INVALID_HANDLE_VALUE
            mov     @hFindFile,eax
            .repeat
                .if @stFindFile.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY
                    ; skip "." and ".." entries, recurse into real sub-directories
                    .if @stFindFile.cFileName != '.'
                        invoke  _FindFile,addr @szFindFile
                    .endif
                .else
                    invoke  _ProcessFile,addr @szFindFile
                .endif
                invoke  FindNextFile,@hFindFile,addr @stFindFile
            .until (eax == FALSE) || (dwOption & F_STOP)
            invoke  FindClose,@hFindFile
        .endif


And another question: how can I rename the file and then delete it after I have filled it with zeros?

THX.
Posted on 2006-03-31 02:54:45 by Eric4ever
You should do your writing in "chunks" - you'll have to experiment to find the optimal size; personally I'd probably use a 64 KB block allocated with VirtualAlloc, which seems a decent trade-off between not using too much memory and not making too many WriteFile calls. Do experiment with values of your own, though. You'll need a chunk/block writing loop instead of just a single WriteFile call.
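
To make that concrete, here is a minimal sketch of such a chunk loop, assuming hFile is already open for writing, dwFileSize holds the number of bytes to fill, pBuffer points to a 64 KB block from VirtualAlloc, and dwWritten is a DWORD in .data? (all names are illustrative):

CHUNK   equ 65536                       ; 64 KB per WriteFile call

        mov     esi, dwFileSize         ; bytes still to write
        .while esi != 0
            mov     ecx, CHUNK
            .if esi < ecx
                mov     ecx, esi        ; final, partial chunk
            .endif
            invoke  WriteFile, hFile, pBuffer, ecx, addr dwWritten, 0
            .break .if eax == 0         ; WriteFile failed
            sub     esi, dwWritten
        .endw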

Further enhancements would be to open the file unbuffered (FILE_FLAG_NO_BUFFERING) so you don't go through the cache subsystem (a bit faster, and it doesn't waste memory on cache); you might want to use somewhat larger blocks in that case. Also, for file wiping, filling with 0 is a bad idea because some filesystems can optimize all-zero writes away, and thus won't actually overwrite your data on disk.
Posted on 2006-03-31 06:48:10 by f0dder
You might also consider using CreateFileMapping() and MapViewOfFile()
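
A rough sketch of that route, assuming hFile was opened with both GENERIC_READ and GENERIC_WRITE (a writable mapping needs both) and the whole file fits in one view; hMap, pView and dwFileSize are illustrative names, and error checks are omitted:

        invoke  CreateFileMapping, hFile, 0, PAGE_READWRITE, 0, 0, 0
        mov     hMap, eax
        invoke  MapViewOfFile, hMap, FILE_MAP_WRITE, 0, 0, 0
        mov     pView, eax
        invoke  RtlFillMemory, pView, dwFileSize, 0     ; overwrite the whole view
        invoke  FlushViewOfFile, pView, 0
        invoke  UnmapViewOfFile, pView
        invoke  CloseHandle, hMap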
Posted on 2006-03-31 12:49:53 by XCHG

> You might also consider using CreateFileMapping() and MapViewOfFile()

It's a thing to consider and test for performance - but you still need to process in "chunks" if you want to support large files, since even on NT there's a limit to how large a file you can map... unless you only want to support 64-bit editions of Windows :)

Also, while filemapping gets the job done in the same time as WriteFile (because you're I/O bound), filemapping puts a higher strain on your CPU because you effectively get a pagefault for each 4 KB of your write. This can be noticeable even on relatively decent hardware. Furthermore, filemapping doesn't respect flags like NO_BUFFERING, which means you *will* use filesystem cache, which isn't really what you want for a write-through operation like this.
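
To illustrate the chunked mapping, here is a minimal sketch for files under 4 GB, assuming hMap and dwFileSize as in the earlier mapping sketch, with dwViewSize and pView as scratch DWORDs (names illustrative, error checks omitted):

        xor     esi, esi                        ; current file offset
        .while esi < dwFileSize
            mov     ecx, dwFileSize
            sub     ecx, esi                    ; bytes left
            .if ecx > 100000h
                mov     ecx, 100000h            ; map at most 1 MB at a time
            .endif
            mov     dwViewSize, ecx
            ; the offset must be a multiple of the allocation granularity (64 KB)
            invoke  MapViewOfFile, hMap, FILE_MAP_WRITE, 0, esi, dwViewSize
            mov     pView, eax
            invoke  RtlFillMemory, pView, dwViewSize, '@'
            invoke  UnmapViewOfFile, pView
            add     esi, dwViewSize
        .endw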
Posted on 2006-03-31 12:54:33 by f0dder
The process I use is like this:

Fill PROC

        push    edi                     ; preserve registers (matches the pops below)
        push    esi

        mov     edi, Fillnum
label0:
        CreateFile
        GetFileSize
        .if eax >= 400H                 ; large file
            mov     esi, eax
            shr     eax, SB             ; SB == 20
label1:
            .if eax != 0
                push    eax
                WriteFile               ; 1024*1024
                pop     eax
                dec     eax
                jnz     label1
            .endif

            and     esi, OB             ; OB == 1024*1024-1
            WriteFile
            CloseHandle

            dec     edi
            jnz     label0

        .else                           ; small file
            mov     esi, eax
            shr     eax, SS             ; SS == 10
label2:
            .if eax != 0
                push    eax
                WriteFile               ; 1024
                pop     eax
                dec     eax
                jnz     label2
            .endif

            and     esi, OS             ; OS == 1023
            WriteFile
            CloseHandle
            dec     edi
            jnz     label0
        .endif

        pop     esi
        pop     edi

        ret

Fill ENDP


WriteFile with a size of 1024 works fine, but for the large files WriteFile with a size of 1024*1024 gives an error - why?
Posted on 2006-04-02 21:07:42 by Eric4ever
Try this on for size...

PUBLIC WipeFile
WipeFile PROC STDCALL uses ebx, fn:DWORD
LOCAL _fsize$:DWORD, _bwrite$:DWORD, _buf$:DWORD, _file$:DWORD
WIPESIZE = (1024*256)

mov [_file$], 0
mov [_buf$], 0
xor ebx, ebx ; ebx is used for success/failure

; allocate buffer memory
invoke VirtualAlloc, 0, WIPESIZE, MEM_COMMIT, PAGE_READWRITE
test eax, eax
jz @@cleanup
mov DWORD PTR [_buf$], eax

; open file
invoke CreateFile, [fn], GENERIC_WRITE, 0, 0, OPEN_EXISTING,\
FILE_FLAG_WRITE_THROUGH or FILE_FLAG_NO_BUFFERING, 0
cmp eax, INVALID_HANDLE_VALUE
je @@cleanup
mov DWORD PTR [_file$], eax

; get filesize and round to WIPESIZE boundary
invoke GetFileSize, [_file$], 0
add eax, WIPESIZE-1
and eax, NOT (WIPESIZE-1)
mov [_fsize$], eax

; fill buffer with junk. Don't zero because certain filesystems might do
; zero-optimizations. The best would probably be to fill with Really Random Data (TM).
invoke RtlFillMemory, [_buf$], WIPESIZE, '@'

; do the actual file wiping (skip the loop for zero-length files)
cmp DWORD PTR [_fsize$], 0
je @@flush
@@FillLoop:
invoke WriteFile, [_file$], [_buf$], WIPESIZE, addr [_bwrite$], 0

test eax, eax
jz @@cleanup
cmp [_bwrite$], WIPESIZE
jne @@cleanup

sub DWORD PTR [_fsize$], WIPESIZE
jnz @@FillLoop

@@flush:
; flush file buffers. Probably doesn't do much here afaik, but included for completeness
invoke FlushFileBuffers, [_file$]
inc ebx ; indicate success

@@cleanup:
; free buffer memory, unless the pointer is 0
mov eax, [_buf$]
test eax, eax
jz @@skipfree
invoke VirtualFree, eax, 0, MEM_RELEASE
@@skipfree:

; close file, unless the handle is 0
mov eax, [_file$]
test eax, eax
jz @@skipclose
invoke CloseHandle, [_file$]
@@skipclose:

mov eax, ebx ; return value = success/failure indicator

ret
WipeFile ENDP
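
For reference, a hypothetical call site, assuming szTarget is a zero-terminated path; the MoveFile/DeleteFile calls cover the rename-then-delete part of the original question, with szNewName as an illustrative new name:

        invoke  WipeFile, addr szTarget
        .if eax != 0                            ; wipe succeeded
            invoke  MoveFile, addr szTarget, addr szNewName
            invoke  DeleteFile, addr szNewName
        .endif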

Posted on 2006-04-03 09:26:50 by f0dder
I found a way to "fill large files with zero" when I was working on the OpenSlather p2p project some time back. Search this forum.
From memory, I would do the following:
-DeleteFile
-CreateFile with OPEN_ALWAYS
-SetFilePointer to the desired filesize, minus one byte (yes, even though this is beyond the end of the file)
-(SetEndOfFile at the current FilePointer position? can't remember if I did...)
-Write one zero byte; the filesize will NOT be changed unless we do this
-Close file handle

This method can create huge files, e.g. 10 gigs, and do it in about half a second.
If we don't delete/recreate the file first, the extended file data is "undefined" and could contain random junk - but since we remade the file, in my experience it's always totally full of NULL, with the exception of the final byte, which in our case is ALSO NULL.
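
A rough sketch of that sequence, using a 32-bit size for brevity; szPath, dwWantedSize, bZero (a single BYTE initialized to 0) and dwWritten are illustrative names, and error checks are omitted:

        invoke  DeleteFile, addr szPath
        invoke  CreateFile, addr szPath, GENERIC_WRITE, 0, 0, OPEN_ALWAYS,\
                FILE_ATTRIBUTE_NORMAL, 0
        mov     hFile, eax
        mov     eax, dwWantedSize
        dec     eax                             ; desired size minus one byte
        invoke  SetFilePointer, hFile, eax, 0, FILE_BEGIN
        invoke  SetEndOfFile, hFile             ; optional - see above
        ; write a single zero byte so the new size actually takes effect
        invoke  WriteFile, hFile, addr bZero, 1, addr dwWritten, 0
        invoke  CloseHandle, hFile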




Posted on 2006-04-20 01:37:57 by Homer
That won't zero out the on-disk data though - that's why it's fast. And it only works on NTFS, not FAT (or rather, on FAT it's slow and *does* zero-fill the on-disk data). There's also no guarantee that the original on-disk data will be overwritten; Windows *could* choose to write to some other location (although in practice, it shouldn't).

Posted on 2006-04-20 04:32:07 by f0dder