I have been a hobby programmer for about 5 years. The last 3 years I devoted to assembly language on Windows XP (32-bit). I worked through Iczelion's tutorials (excellent stuff) and any others I could get hold of. The assembler I favor at this time is Nasm.
So far I have been coding small apps, special routines parsing/byte scanning etc. I have also played around a lot with GUI components and feel like trying a larger app.
Over the last year I started to monitor the time spent to code an app. This does not include time to plan/brainstorm/research, just coding.
75%  GUI
20% developing Program logic and code/data integration
5% coding specialized routines
I purposely left out debugging from this list, since the time spent debugging a small app would be split roughly in proportion to the above categories. I estimate the total time to build an app as 60% programming and 40% DEBUGGING with OllyDbg. For larger apps this ratio shifts sharply towards debugging.
At present I'm planning my first real proggie. It will be expanded/modified quite a bit. I have coded the first-level functionality, and I'm looking into ways to integrate additional functions into my existing program with minimal changes to the existing code. I'd be willing to sacrifice some speed and code size for a robust interface. I have a few ideas, but they all tend towards bloat. Maybe the best approach is to code all additional functionality as add-ins?
All suggestions are welcome
thx Klod
Posted on 2009-09-19 00:15:07 by Klod
Real time savers for big asm projects for me are:
- VKDebug. I spend <1% time debugging thanks to it and the following things
- Consistent rules for preservation of registers. It can add a bit of bloat in places, but it generally saves my skin and my time. Put exclamation marks everywhere you define/declare/use a proc that breaks the convention.
- macros
- modularity (of course). Create parts in a sandbox, test them. Prepare modules to handle null pointers or incorrect args where it matters.
- an IDE with intellisense.
- often, planning too much for extensibility can turn out to be a huge time-waste, and in the wrong direction.


N.B. it's almost impossible to get everything (i.e. the software's design) right on the first try when extensibility is involved.
Posted on 2009-09-19 01:16:44 by Ultrano
Sadly, GUI takes way too long :( which is why I moved all my little projects to dialogs. (Who am I kidding... all my projects are little)
Posted on 2009-09-19 07:29:38 by JimmyClif
I'm the same way! :shock:
Let's Go Assembly! :D
Posted on 2009-09-19 09:47:22 by nathanpc
Thanks Ultrano for your reply
I'm using RadAsm with Nasm. I had problems getting VKDebug to work, so I coded my own debug routine. Actually, I translated parts of it from other sources, so I can test variables at run time and do reg dumps etc. via message box.

- modularity (of course). Create parts in a sandbox, test them.

So far I've only managed to make small routines modular.

- often, planning for extensibility a bit too much can turn-out to be a huge time-waste and be in the wrong direction.


This is the reason for this post. I've realized that many of my projects fail already at this stage, before a single line has been coded.

Thanks JimmyClif for your suggestion.
At this time I favor coding an empty window, implementing each part of the program in a dialog box, and then displaying these dialog boxes in the client area.

I had a look at OOP, COM, and table-oriented paradigms. I like some aspects of OOP, but not all of it. I've come to think it's kind of like a religion to its proponents.

Pointers do cause me grief. Actually, it's DWORD parameters, since they can be pointers, values, strings, etc. I've been thinking of implementing type checking via macros (Nasm does not type-check), but what about at run time?
Posted on 2009-09-19 21:10:17 by Klod
Masm and Nasm are both capable of performing build-time type checking, but ONLY if the function params / data variables themselves have been more strongly typed (in their type declarations) than the usual :dword, :dword, :dword stuff.
And so the problem usually is that we're all using a bunch of old headers (includes) which were carelessly transcribed (usually from C headers) in the first place, and this legacy has persisted.
That being said, if two data types are the same physical size, masm and nasm generally accept this substitution as 'late typecasting', since at the machine-code level there is no fundamental difference.
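For what it's worth, a rough sketch of one way to push Nasm beyond the ":dword, :dword" situation. This is illustrative only, not code from any standard package; the PROTO/INVOKE names and the whole scheme are invented, and here it only checks the argument count rather than full types:

```nasm
; Illustrative sketch: record each proc's argument count at declaration
; time, and let the invoke macro reject mismatched calls at build time.
%imacro PROTO 2                    ; PROTO ProcName, argcount
    %xdefine %{1}_nargs %2
%endmacro

%imacro INVOKE 1-*
    %ifdef %{1}_nargs              ; only check protos we know about
        %if (%0 - 1) != %{1}_nargs
            %error wrong argument count in call to %1
        %endif
    %endif
    %rep %0 - 1
        %rotate -1                 ; walk the args right-to-left
        push dword %1
    %endrep
    %rotate -1                     ; rotate back to the proc name
    call %1
%endmacro

PROTO StrLen, 1
; INVOKE StrLen, szText           ; assembles fine
; INVOKE StrLen, szText, 5        ; stops the build with %error
```

The same trick extends to sizes/types by %xdefine-ing per-argument type tokens at PROTO time and comparing them in the loop.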
Posted on 2009-09-19 23:10:54 by Homer
I also modified VKDebug a bit, and changed half of the code for debug.lib/inc. I even use it in my C++ projects in exactly the same way. Initially I also thought message boxes would suffice, but really that's not the case. Remaking something like VKD should be a priority.

Here's an (imho good) example of a module, where independence from other modules was possible: http://dl.getdropbox.com/u/1969613/UndoRedoMgr.7z
Notice how it contains a sandbox test-proc in the .asm file, I can assemble+link it to exe to test it extensively; or to .obj/.lib to use it in a/the project.

OOP is extremely useful in some places, completely meaningless in others. Maybe with experience you can guess when it's nice for your needs. But truth be told, we constantly do OOP anyway - just that it's in procedural-like form. A struct in memory is also an object. A handle of a file/bitmap is also an object, etc.
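That "a struct in memory is also an object" point, sketched in NASM terms (Point and Point_MoveBy are invented names for illustration):

```nasm
; A "class" in procedural clothing: a STRUC plus procs taking pThis first.
STRUC Point
    .x RESD 1
    .y RESD 1
ENDSTRUC

; Point_MoveBy(pThis, dx, dy): a "method" operating on a Point object.
Point_MoveBy:
    push ebp
    mov  ebp, esp
    mov  ecx, [ebp + 8]        ; pThis, the object we operate on
    mov  eax, [ebp + 12]       ; dx
    add  [ecx + Point.x], eax
    mov  eax, [ebp + 16]       ; dy
    add  [ecx + Point.y], eax
    leave
    ret  12                    ; stdcall-style cleanup of the 3 dword args
```

Swap the STRUC for a file handle or a window handle and the pattern is the same: state plus functions that take that state first.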

Pointers won't be any problem for you soonish; just write comments in your code noting which pointer points to what. The only assemble-time checks you really need are for calling function pointers (see line 17 in UndoRedoMgr.inc from the 7z file above). You could use Hungarian naming conventions, or any convention you like. E.g. names of pointers to zero-terminated strings start with "lpsz", general pointers start with "p", pointers-to-pointers start with "ppv". At runtime, a simple zero-check is all that's necessary.
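That runtime zero-check can be as small as this (pActionFunc and pArg are assumed names in the spirit of the UndoRedoMgr example, not copied from it):

```nasm
    ; Call through a stored function pointer only if it is non-null.
    mov  eax, [pActionFunc]    ; function pointer kept in the object
    test eax, eax
    jz   .no_action            ; null = "nothing plugged in", fail soft
    push dword [pArg]          ; whatever argument the callback expects
    call eax                   ; known-nonzero at this point
.no_action:
```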

I kinda forgot to mention another huge time-saver: resource-wasting. When an action/proc can take either 1us or 10ms per mouse-click, give yourself a break from thinking "which way is most optimal for the cpu", and think "which way is easiest for me to type and later manage, regardless of RAM/cycles". It's sometimes hard to switch to that thinking, so I kickstart myself with "soo, how do I waste as much RAM and cycles as possible here" XD .

Have your basic lib-stuff ready, especially a vector object where you can insert/remove dwords (pointers) and enumerate with a custom "foreach" macro; plus memory funcs to resize an array (incrementally and log2-based) and shuffle elements of arbitrary uniform size around.
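A minimal sketch of such a vector object and foreach macro, under the assumption of a simple count-prefixed dword array (the layout and all names are invented here):

```nasm
; Dword-vector sketch: a count followed by the stored dwords.
STRUC Vector
    .count RESD 1              ; number of stored dwords
    .items RESD 0              ; the dwords (usually pointers) follow
ENDSTRUC

; Usage:   foreach myVec ... endforeach
; Inside the body, eax holds the current element. The body must
; preserve esi and ecx, which drive the enumeration.
%macro foreach 1
    mov  esi, %1
    mov  ecx, [esi + Vector.count]
    lea  esi, [esi + Vector.items]
%push foreach
    jecxz %$done               ; empty vector: skip the body entirely
%$top:
    lodsd                      ; eax = current dword
%endmacro

%macro endforeach 0
    loop %$top
%$done:
%pop
%endmacro

; e.g.:
;   foreach myVec
;       push eax
;       call _ProcessItem@4
;   endforeach
```

A real version would add insert/remove procs that grow the block (incrementally or by doubling) via your memory funcs.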

About extensibility, I don't think I can give many hints - maybe just plan for 2 optional future features, and leave the thinking about how to fit further features for the future (it can require rewriting some chunks of a project). I might have quite some experience with that, as at my job the specs/designs for our products often change dramatically. Just know the current state of the code (and write lots of comments inside, too) and even a dramatic change won't be too much pain. Even in the worst cases you can usually scavenge roughly 90% of your existing code, imho/ime.
Posted on 2009-09-19 23:20:21 by Ultrano
Extensibility is one place where a nice clean function interface is useful - here is a simple example.

Imagine that you enshrine the Logic of your program in, say, a DLL, whose Functions are called from the main executable program.
In order to update and extend this program, you only need to update the DLL, no changes are required to be made to the main application.

Even more useful if the main program has a 'mating interface' which is known to the code in the DLL.

OOPs :P


Posted on 2009-09-20 00:10:53 by Homer
Thanks Homer for your replies

Yes, I know type checking is possible with Nasm, but I believe it would have to be implemented as macros. I was actually seriously thinking of making types: %define Int dword and %define dword_size 4. I could then modify my invoke macro to throw a compile-time error if anything other than an Int were passed as an argument. I was even thinking of encoding the first 4 letters of a proc name as an int and then adding the number of parameters * their individual sizes. This would then throw a compile-time error if the arguments/combination were not identical. I would need some insight on the usability of such a scheme.

You mention include files; a year ago I switched to GoLink because I thought it was cool, since GoLink finds the imports without incs.
I had spent countless hours debugging a seemingly simple program only to find out there was an error in the inc file. I had to change my invoke macros to handle the new situation. Later on I wrote a catstr proc and tested it with all the different possibilities I could think of in a console program. Then I added it to my DLL, and it crashed every time I called it. After three days of pulling my hair out, I found out that GoLink had linked to a catstr proc with the same name and the same number of arguments in a different DLL. The only difference was that the order of the arguments was different. So I'm back to the old include files, but I use the API docs to verify when I have unexpected errors.


Imagine that you enshrine the Logic of your program in, say, a DLL, whose Functions are called from the main executable program.
In order to update and extend this program, you only need to update the DLL, no changes are required to be made to the main application.


Would you have a simple code example?


Thanks Ultrano for your suggestions

I just remembered that a few months ago I translated an interesting debug program that used a console window running in a separate thread; runtime data is fired to this console. I believe this is the same methodology VKD uses?


I think I get what you mean about the sandbox approach. I have tried it with the toolbar in my windows template: all functionality resides within Toolbar.inc, and it is activated with a call to CreateToolbar from the initialization routine in my window proc. At my experience level, this usually fails when the proc depends on other procs.

I'm not sure I understand line 17 in UndoRedoMgr.inc. PactionFunc is defined as a DWORD, and IFUNC seems to be a macro?

I have been using Hungarian naming conventions, at least partially. I don't have a problem with the concept of pointers as such; it's their use, especially in conjunction with the stack.
Invoke func1, p2Somedata. In func1: invoke func2, addr p2Somedata. When do I dereference? I have to admit that when using pointers I still manage to confuse the crap out of myself. Whenever I GP-fault a proc/proggie, pointers are usually my first suspect.
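The func1/func2 situation can be sketched like this (stack frames written out by hand, names taken from the question):

```nasm
; Caller:  invoke func1, p2Somedata   ; pushes the pointer VALUE
func1:
    push ebp
    mov  ebp, esp
    push dword [ebp + 8]   ; forward the same pointer value: no addr needed
    call func2             ; func2 sees exactly the pointer func1 received
    leave
    ret  4
; "invoke func2, addr p2Somedata" inside func1 would instead pass the
; address of func1's own parameter slot (a pointer-to-pointer), forcing
; func2 to dereference twice. Rule of thumb: dereference only at the
; point where the actual data, not its address, is needed.
```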


I'm not sure I understand resource-wasting. So far I have spent little time optimizing, with the exception of simple things like xor eax,eax and the like. For the time being I concentrate more on writing code that works. So I must be a natural at resource-wasting techniques.


I do not understand vector objects. I have read some discussions about them but was unable to follow through because I lack the basics. Some explanation would be needed.


About extensibility, I think a better explanation of what I'm trying to accomplish is in order. In my daily work, I deal a lot with invoices and repair orders. These are created via the internet on a corporate server, through a console app of very limited capability. The repair order migrates through our company, from hand to hand, desk to desk. Each person keeps a log sheet where they record the progress of their particular invoice. You can imagine the legwork it takes to track down a particular repair order if a customer inquires by phone. So the idea is to create a repair order tracking system. Over the last few months I have created scripts that enable me to get information about repair orders from the server. For each invoice I create a file, and the information about repairs and parts gets appended. The name of the file is auto-generated by combining the name and the repair order number. These scripts are then called via batch files. Crude, but helpful. A basic file search program that allows searching for an invoice in several ways and displays the data in a richedit box is in place and working.

Extensions:

  1. Include quotes generated in Excel in the file search
  2. automate updating information from the server (about ½ hr to 1hr intervals)
  3. integrate customer database from server (partially completed)
  4. Filter out unwanted data from the server
  5. Integrate route sheet from dispatcher (start time, technician, status etc)
  6. Integrate appointments
  7. Create a database and do away with the files

As you can see, extensibility is a critical element of the design. I believe each step is within the grasp of my coding ability, and there is no time constraint. I'm afraid of running into the "white elephant" syndrome, however: starting with a template app, adding features, controls, and functionality, and suddenly the whole thing blows up in your face and nothing works.

All your suggestions are welcome



Posted on 2009-09-21 00:25:28 by Klod
A vector object is an array of DWORDs which might be pointers to objects.
It's a container; it doesn't know what it contains.
Posted on 2009-09-21 02:37:00 by Homer
Yes, I know type checking is possible with Nasm, but I believe it would have to be implemented as macros. I was actually seriously thinking of making types: %define Int dword and %define dword_size 4. I could then modify my invoke macro to throw a compile-time error if anything other than an Int were passed as an argument. I was even thinking of encoding the first 4 letters of a proc name as an int and then adding the number of parameters * their individual sizes. This would then throw a compile-time error if the arguments/combination were not identical. I would need some insight on the usability of such a scheme.


I've started work on my old site again; it's currently short on content, but there is an article in the Snippets section which demonstrates the idea of supporting types in NASM. The link is at the bottom of this post if you need it.

Most standard Windows import libraries use the _ProcName@XXX convention, where ProcName is the name of your procedure and XXX is the total size in bytes of all arguments passed to it. You could modify your INVOKE to add up the sizes of the arguments passed to it and generate the number after the @, which would call the associated procedure. For example:

_Hello@4:
STRUC _HELLO4ARGS
  .msg RESD 1
ENDSTRUC
  PUSH EBP
  MOV EBP, ESP
     PUSH DWORD 0
     PUSH DWORD defaultCaption
     PUSH DWORD [EBP + 8 + _HELLO4ARGS.msg]   ; bracketed arg, likely eaten by the forum
     PUSH DWORD 0
     CALL _MessageBoxA@16
  LEAVE
  RET 4                                       ; stdcall: callee cleans its 4 bytes of args

_Hello@8:
STRUC _HELLO8ARGS
  .msg RESD 1
  .caption RESD 1
ENDSTRUC
  PUSH EBP
  MOV EBP, ESP
     PUSH DWORD 0
     PUSH DWORD [EBP + 8 + _HELLO8ARGS.caption]
     PUSH DWORD [EBP + 8 + _HELLO8ARGS.msg]
     PUSH DWORD 0
     CALL _MessageBoxA@16
  LEAVE
  RET 8

...

%imacro INVOKE 1-*
%push _INVOKE_
  %define %$proc %1
  %assign %$size 0
  %rep %0-1
     %rotate -1
     PUSH DWORD %1
     %assign %$size %{$size} + 4
  %endrep
  CALL _%{$proc}@%{$size}
%pop
%endm

...

INVOKE Hello, strMessage ; Calls _Hello@4
INVOKE Hello, strMessage, strCaption ; Calls _Hello@8


Imagine that you enshrine the Logic of your program in, say, a DLL, whose Functions are called from the main executable program.
In order to update and extend this program, you only need to update the DLL, no changes are required to be made to the main application.


Would you have a simple code example?


Say, for example, you divide your program into 30 procedures. If you distribute this program as a single EXE file, whenever you update/fix something in one of the procedures, your users will need to download a new copy of the fairly large EXE. Instead, say you group the procedures of similar functionality into 4 or 5 DLL files which are imported into your main EXE. If you ever have to update/fix one of the procedures, the user only needs to download the DLL that the procedure is located in, making for a much faster upgrade of your software. This is the preferred method for very large scale application development.
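A hedged sketch of the DLL split in NASM terms. Module, symbol, and variable names (logic.dll, AddInvoice, pAddInvoice) are invented for illustration, and error checks are omitted:

```nasm
; --- logic.dll side: the program logic lives here ---
global _AddInvoice@8
export _AddInvoice@8              ; shipped in logic.dll; an update = a new DLL
_AddInvoice@8:                    ; AddInvoice(pCustomer, pOrder)
    ; ... the actual logic goes here ...
    ret 8

; --- exe side: bind to the DLL at run time, so the exe never changes ---
extern _LoadLibraryA@4
extern _GetProcAddress@8

section .data
dllName     db 'logic.dll', 0
fnName      db '_AddInvoice@8', 0
section .bss
pAddInvoice resd 1
section .text
    push dllName
    call _LoadLibraryA@4          ; eax = module handle
    push fnName
    push eax
    call _GetProcAddress@8        ; eax = address of the exported proc
    mov  [pAddInvoice], eax       ; later: call dword [pAddInvoice]
```

Linking against an import library at build time works just as well; the runtime variant shown here is what makes "drop in a new logic.dll" upgrades possible without touching the EXE.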

About extensibility, I think a better explanation of what I'm trying to accomplish is in order. In my daily work, I deal a lot with invoices and repair orders. These are created via the internet on a corporate server, through a console app of very limited capability. The repair order migrates through our company, from hand to hand, desk to desk. Each person keeps a log sheet where they record the progress of their particular invoice. You can imagine the legwork it takes to track down a particular repair order if a customer inquires by phone. So the idea is to create a repair order tracking system. Over the last few months I have created scripts that enable me to get information about repair orders from the server. For each invoice I create a file, and the information about repairs and parts gets appended. The name of the file is auto-generated by combining the name and the repair order number. These scripts are then called via batch files. Crude, but helpful. A basic file search program that allows searching for an invoice in several ways and displays the data in a richedit box is in place and working.

Extensions:

  1. Include quotes generated in Excel in the file search
  2. automate updating information from the server (about ½ hr to 1hr intervals)
  3. integrate customer database from server (partially completed)
  4. Filter out unwanted data from the server
  5. Integrate route sheet from dispatcher (start time, technician, status etc)
  6. Integrate appointments
  7. Create a database and do away with the files

As you can see, extensibility is a critical element of the design. I believe each step is within the grasp of my coding ability, and there is no time constraint. I'm afraid of running into the "white elephant" syndrome, however: starting with a template app, adding features, controls, and functionality, and suddenly the whole thing blows up in your face and nothing works.

All your suggestions are welcome


Before I retired from my programming career, I developed quite a lot of VLS (very large scale) projects for various clients, and if I may, I'd like to pass on my general development process here.

1. Obtain an MRD from your client.
2. Create a use-case document (or have your client fill one out.)
3. Create a requirements document from the use-case document.
4. Acquire/Develop your inputs.
5. Create your processing functions/objects.
6. Create your persistence functions/objects.
7. Bind the project.
8. Create a distribution/update system.

Okay, to explain the above process:

1) By obtaining a market research document (MRD) from your client or marketing group, you find out whether there is actually a need for your project. From your explanation of your project, I would say no. There are already products out there that can do what you need, and from a productivity point of view it would probably be better to use them. Instead of designing a custom product, rethink the invoice strategy. My opinion is that your client/company should create a relational database of "clients", "invoices", and "work". Using a central clients table and primary keys, you can link the invoices and work tables to each client. The work table would contain each person's log of the work performed on the particular job, with a relational key to the invoices table. With a properly designed relational database you could use something like Microsoft Access or any other database management software to add, remove, and query client data. In the MSA scenario you would create a simple Access form which uses pre-built queries to act as a graphical interface for the support specialists and management who get calls from clients. A second form for updating the database could be created and placed on kiosks/terminals in full-screen mode around the company, so that users can log in to a computer and be presented with the MSA form for updating their logs. This removes any need for paper invoices or logs and centralizes everything in the database for an optimal backup strategy.

2) A use-case document basically details what a user will/can do from the time they start the application to the time they finish working on it. This information is vital for creating applications which are user-friendly, much more so than some flashy GUI with custom-controls out the wazoo.

3) I consider this probably the most important step in preventing the "hack" code that is so prevalent on the net today. This step is really what separates small scale from large scale. Your basic requirements document is a table that looks something like this:

Event/Procedure | Description                | Process/Algorithm
----------------|----------------------------|------------------------------
Start           | Entry point procedure      | Initialize variables
                |                            | Call Main Loop
                |                            | Clean up Memory
                |                            | Exit Program
InitializeApp   | Pre-initializes variables  | lpszCmdLn = GetCommandLine()
                |                            | bRunning = 1
Run             | Processes events/messages  | While bRunning
                |                            |   GetEvent
                |                            |   HandleEvent
                |                            | End While
ShutdownApp     | Clean up variables         | Free pMem
                |                            | Free pOtherMem


When you create the requirements document, you are basically planning out what procedures you need to create and what events you need to handle; in the process/algorithm column you write out the general procedure for accomplishing the task. By doing this, before you even write a single line of code, you already know what you need to do and how you are going to do it. It can also serve as a reference later when doing updates; just make sure you keep the document current and add any later changes to it. If you get into the habit of creating a requirements document, you will spend a lot less time in your debugger.

4) When I develop a VLS project, I always start by creating the user interface first. This may sound odd and takes a while to get used to, but by creating the interface from the beginning you will be able to define what arguments your procedures need later on and minimize the number of events you process. Generally (unless you are creating custom/subclassed/superclassed controls) each input needs only one event handler: the action handler. Some controls don't have any events you need to process. So creating your interface first defines which events are needed and which aren't, and reduces the code to one event per control. Notice I said "acquire": on VLS projects you generally want to outsource UI development to a third party. There are people who do nothing but develop user interfaces for a living; they are highly skilled at creating very user-friendly interfaces, and for VLS projects you really do want to make use of such services.

5) This should be obvious by now, but your processing functions/objects are any procedures that handle the manipulation/calculation of user or file input. They act as a buffer between the user input/output and the file/db/net/whatever input/output. This is where the real guts of your program are and where you SHOULD be spending the bulk of your time.

6) The "persistence" objects/procedures are simply functions which encapsulate file/db/net/whatever access so that your processing functions don't need to be bothered with where the data is coming from. The reason you create this persistence buffer is in the event that later on, with changes in file formats, database access methods, network protocols, etc. you can easily adapt your application by changing these persistence routines rather than having to make modifications throughout your entire project. This is part of the modularization that everyone keeps talking about.

7) When you bind a project, you are effectively connecting the dots between your interface, processing routines, and persistence routines. I can't remember ever designing a commercial application as one big project. Normally I develop each tier of the project and create small test-bed apps to ensure the routines are working at their best. Once all the routines are functioning as expected, I bind them into a full distributable project. The main reason for doing this is that it makes debugging the program a LOT easier: you are working with much smaller code segments at a time, and runaway errors (errors which may survive through two or three routines before crashing the application) are much easier to track down.

8) Creating the distribution/update system is normally considered an afterthought, but it is really something that is done throughout the entire project. When Homer was talking about DLLs, that is a large part of the distribution/updating strategy you should get used to working with. With VLS projects you'll have to design a way to maintain the project long after it has been created, and that's where creating installers, update servers, and organizing routines into shared objects (like DLLs) makes things much easier on you as the developer.

If you follow this process, the chances of running into the "white elephant" syndrome (as you called it) are very unlikely. Also keep in mind that this is a VLS strategy for development; for small-scale projects it would be a waste of time to go through all this trouble. Anyway, I hope all this helps...

Regards,
Bryant Keller

EDIT: Oops, forgot the link > http://assembly.ath.cx/snippets/
Posted on 2009-09-21 03:00:15 by Synfire
I had been away for a few days and just got back. Thanks for your replies.
Synfire, I have to admit your views are dead on the money; your experience in these areas shows. There is no "need" for this project; a short time into the MRD revealed that. There are several commercial products that are specialized and customized exactly to the needs of our company, from the point of view of both business management and POS. The company has already committed to purchasing business management software. I have spent considerable time on steps 2 and 3 of your list. I spent 2 years monitoring the way transactions are handled. Admittedly, business software should eliminate all the bottlenecks that relate to POS. I have especially tried to find the "necks" in bottleneck. The findings were very interesting, to say the least. What I have done so far definitely falls into the category of hack software, as you termed it, and I agree with you. What I'm trying to do is a "beyond tutorial" with a real-case scenario, as opposed to the numerous tutorials and test programs I have worked with to date. From a programmer's standpoint this is an "ideal" situation: no money involved, no timelines, no design specs that must be implemented, etc. Hey, and it keeps an enthusiast from turning towards security hacking for lack of hands-on programming practice.
This thread was going nowhere until a couple of days ago, when the light came on. I decided to take the time and "scrub" through my code samples, DLLs, and procedures: over a year's worth of learning. I checked out Synfire's code example and wondered why you would give me an example of function overloading. But after debugging a couple more programs that once worked, I saw that you had presented a solution to my problem. It turns out it's my DLLs that are the problem. I have "refined" my functions over the months and then modified the calls in my programs, which causes serious backwards-compatibility issues. So from now on I will write the new version of a function under the same name and then use function overloading to call it. I compiled a couple of test proggies; I really like that code snippet.

I also downloaded your macro implementation of types from your website. I think I understand how it works, but I cannot compile it.

%idefine BYTE_size 1
%idefine WORD_size 2
%idefine DWORD_size 4
%idefine QWORD_size 8
%idefine TWORD_size 10
%idefine BYTE_define DB
%idefine WORD_define DW
%idefine DWORD_define DD
%idefine QWORD_define DQ
%idefine TWORD_define DT

On the line that follows I get the error "label or instruction expected at start of line", so I haven't been able to test it.
Klod
Posted on 2009-09-24 09:52:13 by Klod
%idefine BYTE_size 1
%idefine WORD_size 2
%idefine DWORD_size 4
%idefine QWORD_size 8
%idefine TWORD_size 10
%idefine BYTE_define DB
%idefine WORD_define DW
%idefine DWORD_define DD
%idefine QWORD_define DQ
%idefine TWORD_define DT

On the line that follows I get the error "label or instruction expected at start of line", so I haven't been able to test it.


Check and make sure you are using the latest version of NASM; if you are out of date, it might not recognize the DT, in which case you could create a manual type for it:


%idefine REST RESB TWORD_size
%idefine DT TIMES TWORD_size DB


That's more or less what the macro itself does, but it associates a new type with an old type. I'm assuming this is your issue, as I've used this system of types quite a bit and haven't run into that error. However, I tend to stay very up to date (I keep a local repository of nightly and release builds). I didn't include support for the DO/OWORD built-ins because they're rather new, but I might have presumed too much when I figured everyone would have support for DT/TWORD.

If neither upgrading nor my "quick fix" prevents the error, try re-copying the original by hand. I've seen that error in instances where an invalid character turns up in the source file and NASM chokes on it. Other than that, it usually means you don't have any token on that line that the assembler can understand (which shouldn't happen, since %idefine has long been a part of NASM, and the error shouldn't occur until usage in the event DT didn't work... but I can't really say that for certain). Try it out and let me know if any of my suggestions work; I'd really like to know why it's doing that, as I can't recreate the error here.
Posted on 2009-09-27 21:09:02 by Synfire
Thanks for your reply.
It was as you suggested, possibly a formatting character.
This code snippet works:
%include "\Nasm\inc\nmacros.asm"


%idefine BYTE_size 1
%idefine WORD_size 2
%idefine DWORD_size 4
%idefine QWORD_size 8
%idefine TWORD_size 10
%idefine BYTE_define DB
%idefine WORD_define DW
%idefine DWORD_define DD
%idefine QWORD_define DQ
%idefine TWORD_define DT

%imacro typedef 2
%push _typedef_
       %idefine %{2} %{1}
       %idefine %{2}_size %{1}_size
       %idefine res%{2} resb %{1}_size *
       %idefine d%{2} %{1}_define
       %idefine %{2}_define %{1}_define
%pop
%endm

typedef byte, char
typedef char, string

.DATA
strMessage dstring 'hello, world!', 10, 0
   msg db 'Press any key to exit',13,10,0
.len equ     $ - msg
   conTitle db "Testing xstruc macros",0
NewLine db '',13,10,0
.BSS
strBuffer reschar 256
hBuffer resb   1
.CODE
Start:
invk StrLen,conTitle
invk StdOut, conTitle,eax
invk StdOut,NewLine
invk StdOut, msg,msg.len
invk StdOut, NewLine
invk StdOut, NewLine
;invk message,ChAR strBuffer, DWORD strMessage      
push DWORD strMessage
push DWORD strBuffer
call message
invk StdOut,strBuffer
invk MsgBox,DWORD strBuffer, DWORD strMessage
invk StdOut, NewLine
invk InKey,hBuffer
invk ExitProcess, 0

PROC message,DWORD pstrBuffer, DWORD pstrMessage
  invk lstrcpy, [pstrBuffer], [pstrMessage]  ; bracketed args, likely eaten by the forum
ENDP

EXTERN lstrcpy
EXTERN ExitProcess
EXTERN StrLen
EXTERN StdOut
EXTERN InKey
EXTERN MsgBox


This works, though I'm not quite sure I completely understand how it is supposed to work. I have to mention that my invoke macro has a bug I need to fix. It generates the following code:
push dword DWORD strBuffer

It also will not complain if I invk message, ChAR strBuffer, byte strMessage. This shouldn't be too hard to fix.
If I code a manual push/call, it gives me a compile-time error: COFF format does not support non-32-bit relocations

I assume this is what I should expect?

Thanks for your help
Klod


Posted on 2009-09-28 22:03:33 by Klod

Thanks for your reply.
It was as you suggested, possibly a formatting character ....


Yea, be careful of those. I get them a lot and for assemblers they can be a pain.


This works. I'm not quite sure I completely understand how it is supposed to work. I should mention that my invoke macro has a bug I need to fix. It generates the following code:
push dword DWORD strBuffer

It will not complain if I use invk message, CHAR strBuffer, BYTE strMessage. This shouldn't be too hard to fix.
If I code a manual push/call, it gives me a compile-time error: COFF format does not support non-32-bit relocations


I'm not exactly sure what your complaint about the code is here. The "PUSH dword DWORD" stuff is fine; NASM doesn't care how many times you specify the size, it's flexible like that. However, take the following code snippet.

;invk message,ChAR strBuffer, DWORD strMessage


I seriously doubt this is what you meant. Keep in mind that you've created the 'char' type as a byte value, so what you are actually doing is this:

PUSH DWORD strMessage
PUSH BYTE strBuffer
CALL message


And this isn't correct at all. NASM's complaint comes from trying to push 8 bits of the address onto the stack, which COFF does not support. You could use lea bx, ; push bl to accomplish an 8-bit address push, but I'm almost certain you want to push a 32-bit address, and the assembler is right to warn you about it. It might help you to understand a little better if you make use of a custom OFFSET and PTR convention that I've used in the past. This will make your code read a little closer to MASM:

...
%idefine OFFSET dword
%idefine PTR &
...
invoke message, offset strBuffer, offset strMessage
...
mov AL, BYTE PTR strBuffer


Using those two definitions lets you state immediately whether you are accessing a pointer (the value) or the offset (the address).

Overall, if you are passing a string's address then you HAVE to specify it as a 32-bit value, and you didn't in the invk statement.
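Put differently, the linker has to patch every address through a full 32-bit relocation slot, which is why the byte-sized push gets rejected. A quick sketch of the distinction (plain NASM syntax, untested):

```nasm
push dword strMessage      ; OK: the address is patched via a 32-bit relocation
push byte  strMessage      ; error: COFF format does not support non-32-bit relocations
mov  al, byte [strBuffer]  ; loading the first byte (the value) is fine, of course
```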

Posted on 2009-09-29 01:21:57 by Synfire
I seriously doubt this is what you meant. Keep in mind that you've created the 'char' type to be a byte value 

My apologies, no, that's not what I was trying to do. I should have been more specific about my intentions (it was late that night). I had never worked with types before and did not know what to expect when using them, so I purposely called with the wrong type to see how NASM would deal with it.
These calls all produced a working program:
;invk message, CHAR strBuffer, DWORD strMessage 	;wrong calling
;push CHAR DWORD strBuffer ;Nasm assembles to this
;invk message,strBuffer, DT strMessage  ;This also worked

This is what I meant by my invoke macro having a bug: I can call message any way I can think of and it works, because my macro appends dword after the type, as shown above.

I played around a bit today and it seems to work as I hoped.
mov ax,BYTE PTR strBuffer      ; error: mismatch in operand sizes
This is what I was expecting

%include "\Nasm\inc\nmacros.asm"


%idefine BYTE_size 1
%idefine WORD_size 2
%idefine DWORD_size 4
%idefine QWORD_size 8
%idefine TWORD_size 10
%idefine BYTE_define DB
%idefine WORD_define DW
%idefine DWORD_define DD
%idefine QWORD_define DQ
%idefine TWORD_define DT

%idefine OFFSET dword
%idefine PTR &

%imacro typedef 2
%push _typedef_
        %idefine %{2} %{1}
        %idefine %{2}_size %{1}_size
        %idefine res%{2} resb %{1}_size *
        %idefine d%{2} %{1}_define
        %idefine %{2}_define %{1}_define
%pop
%endm

typedef byte, char
typedef char, string

.DATA
strMessage dstring 'hello, world!', 10, 0
    msg db 'Press any key to exit',13,10,0
.len equ    $ - msg
    conTitle db "Testing Synfires Type Macros",0
NewLine db '',13,10,0
.BSS
strBuffer reschar 256
hBuffer resb  1
.CODE
Start:
invk StrLen,conTitle
invk StdOut, conTitle,eax
invk StdOut,NewLine
invk StdOut, msg,msg.len
invk StdOut, NewLine
push DWORD strMessage
push DWORD strBuffer
call message
invk StdOut,strBuffer
invk MsgBox,DWORD strBuffer, DWORD strMessage
invk StdOut, NewLine
invk HexPrint, PTR strBuffer
xor eax,eax
mov al,BYTE PTR strBuffer +5 ;cool
push eax
call HexPrint
mov ecx,5
movzx eax,BYTE PTR strBuffer +ecx ;cool
push eax
call HexPrint
invk InKey,hBuffer
invk ExitProcess, 0

PROC message,DWORD pstrBuffer, DWORD pstrMessage
  invk lstrcpy,,       
ENDP

EXTERN lstrcpy
EXTERN ExitProcess
EXTERN StrLen
EXTERN StdOut
EXTERN InKey
EXTERN MsgBox
EXTERN HexPrint


This is good stuff, it makes things more readable.
How about defining a structure as a type? Is this possible with NASM?
I basically don't mind using NASM's structure notation
mov .eax

But when dealing with nested structures, this becomes painful. Any suggestions?
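For example (hypothetical structure names), a nested access written out in plain NASM notation ends up as:

```nasm
; INNER embedded in OUTER at field .pt (hypothetical structs)
mov eax, [ebx + OUTER.pt + INNER.y]   ; base + outer offset + inner offset
```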
Thanks Klod
Posted on 2009-09-30 15:06:45 by Klod
This is good stuff, it makes things more readable.
How about defining a structure as a type? Is this possible with NASM?
I basically don't mind using NASM's structure notation
mov .eax

But when dealing with nested structures, this becomes painful. Any suggestions?
Thanks, Klod


For this very reason, two or three years ago, I created an "ASSUME" macro for structures.

%imacro ASSUME 2
%ifidni %2, NOTHING
%undef %{1}.
%else
%define %{1}.(_x_) %{1} + %{2}. %+ _x_
%endif
%endm


Using this macro you can enable and disable a wrapper for pretty much any structure type you create (similar to MASM's ASSUME directive).

STRUC 3dPoint
.x RESD 1
.y RESD 1
.z RESD 1
ENDSTRUC
...
ASSUME EDI, 3dPoint ; Associate Edi with 3dPoint
again:
Mov Edi, _myPoints ; Uses the original Edi because it doesn't have a '.' following it
Mov Eax, DWORD [Edi.(x)] ; 'Edi.()' takes the name of the structure's sub-label
Mov Ecx, DWORD [Edi.(y)]
Mov Ebx, DWORD [Edi.(z)]
Add Edi, SIZEOF(3dPoint) ; move to the next point in our _myPoints array
; do something, then loop to 'again'
ASSUME EDI, NOTHING ; Remove the association between Edi and 3dPoint


This can really clear things up and reduce typing. You might be tempted to add the brackets inside the macro, but that makes the code much less consistent and can create confusion when you want to use the PTR replacement; you can't use PTR and brackets together, as they mean basically the same thing and create a conflict in NASM's interpretation. On a similar note, there is a defect: you can't use OFFSET to obtain the address with these (even when ASSUME'ing labels), so it's best to use LEA in that scenario. But it does work for its intended purpose and should help you in your search to simplify development.
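To make the substitution concrete, here is the expansion I'm describing (a sketch of the intent, untested):

```nasm
ASSUME EDI, 3dPoint
; 'Edi.(y)' is now a single-line macro call expanding to 'Edi + 3dPoint.y', so
mov eax, DWORD [Edi.(y)]   ; is meant to assemble as: mov eax, [edi + 3dPoint.y]
ASSUME EDI, NOTHING        ; %undef's 'Edi.'; a plain EDI is untouched throughout
```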
Posted on 2009-09-30 17:15:32 by Synfire
Thanks Synfire, that's exactly what I was looking for. Coincidentally, I had seen this macro in one of the NASM-related threads and copied it, but I did not know how to use it properly and it did not give me the results I was looking for at the time, so I discarded it. Your code example has clarified its use; clearly, I was not using it correctly.

Thanks Klod
Posted on 2009-09-30 17:52:26 by Klod
No worries mate. Glad I could help.
Posted on 2009-09-30 19:42:19 by Synfire
Hi Synfire,
I was trying out your ASSUME macro and I have a problem:
Mov DWORD [Edi.(x)], eax ;Asm:49: error: parser: expecting ]
I assembled with the -e option to check, and it appears that all brackets match.
STRUC 3dPoint causes the following: error: label or instruction expected at start of line
It appears NASM does not like labels that start with a number; STRUC Point3d worked OK.
Any suggestions?


%include "\Nasm\inc\nmacros.asm"
%include "\Nasm\inc\win32\user32.inc"
%include "\Nasm\inc\win32\kernel32.inc"
%include "\Nasm\inc\windows.inc"
%include "\Nasm\inc\win32\dll.inc"

%imacro ASSUME 2
%ifidni %2, NOTHING
%undef %{1}.
%else
%define %{1}.(_x_) %{1} + %{2}. %+ _x_
%warning %{1}.(_x_) %{1} + %{2}. %+ _x_
%endif
%endm

STRUC Point3d
.x RESD 1
.y RESD 1
.z RESD 1
ENDSTRUC
.DATA
    msg db 'Press any key to exit',13,10,0
.len equ    $ - msg
    conTitle db "Testing Synfires Assume Macro",0
    NewLine db '',13,10,0
    Tab db '  ',0

.BSS

hBuffer resb  1
_myPoints resb sizeof(Point3d)

Start:
      mov DWORD[_myPoints+Point3d.x],23        ;initialize  for testing     
invk StrLen,conTitle
invk StdOut, conTitle,eax
invk StdOut,NewLine
ASSUME EDI, Point3d ; Associate Edi with 3dPointer
Mov Edi, _myPoints ; Uses the original Edi because it doesn't have a '.' following it
Mov DWORD [Edi.(x)], eax ;Asm:49: error: parser: expecting ]
ASSUME EDI, NOTHING
invk HexPrint,DWORD[_myPoints+Point3d.x]  ;Testing mem
invk locate,,
invk StdOut, msg,msg.len
invk InKey,hBuffer
invk ExitProcess, 0


Thanks Klod
Posted on 2009-09-30 23:36:19 by Klod