I have received no response to a message in another thread, so perhaps an example will show the problem I am experiencing trying to get a Win95/98/ME assembly program running under NT/2000/XP. Before the INCLUDE W32.INC statement in my .asm file I have the statement UNICODE = 1. (I had 0 for strictly ANSI, and the program assembled and has worked fine for several years on 95/98/ME.) The UNICODE statement is there because it is used at the beginning of the W32.INC file. I used the following code just to see if it would work (assemble):

DeleteFileA PROCDESC WINAPI :HANDLE
DeleteFileW PROCDESC WINAPI :HANDLE

if UNICODE
DeleteFile TEXTEQU <DeleteFileW>
else
DeleteFile TEXTEQU <DeleteFileA>
endif

While assembling, TASM32 gives me the following warning (not an error): 'Global type doesn't match symbol type', referring to DeleteFileW.
Thanks for any help as to what I am doing wrong.
Posted on 2001-11-06 09:49:56 by DaveTX47
Dave,

Very few people work with TASM now, as it is no longer supported by Borland/Inprise and has not been updated for years. Most of the support around at the moment is for MASM, because of its far larger user base and its reasonably current status with late-model M$ operating systems.

Mixing UNICODE and ANSI code is not without its difficulties, as the data format is different. At the moment you can run ANSI code on later versions of Windows with no problems, whereas Unicode will not run as native code on earlier versions like Win95 or 98.

Regards,

hutch@movsd.com
Posted on 2001-11-06 15:30:28 by hutch--
I use the DOS DEBUG program on Win2k, so TASM should run just fine.

As Hutch hinted, use the ANSI version. If you have troubles with the ANSI version, you need to check for common errors such as not properly preserving registers. Many of the functions were forgiving in Win9x because they invoked 16-bit code. (Because of what it takes to switch to 16-bit mode, registers were automatically preserved.)

NT/2k/XP is also a bit strict about alignment of stack data.

When switching to Unicode, the major problem will be string data, as all Unicode strings have 16-bit characters. This will cause problems if you're passing the character count as the byte count, or vice versa.
Posted on 2001-11-06 17:00:13 by tank
Writing for unicode... ho humm. I'm afraid it's the way to go, but
I'm a bit reluctant to do it myself, other than where it's necessary.
I also think it's a bit unnecessary; all the people with funky alphabets
should just grow up and use something intelligible. Ok, that didn't
work, so we're all stuck with unicode. Bleh. On nt/2k it might be
good to supply a unicode version of your application, as the ansi
routines go through an ansi->unicode conversion, and the result is then
passed to the unicode version of the API... meaning that if you write for
unicode, your app will run faster. I'm not really convinced that you'll
be able to feel this, as the stuff around API calls is hardly ever speed
critical :).
Posted on 2001-11-07 03:03:11 by f0dder