Like everyone else, I want to assemble my code and get working output. However, unlike most others, I have some restrictions on the project I am writing. The build technique I present below does work for me, but I don't think it's optimal. My question is not about why things are the way they are, but rather about how to optimize the technique I have used until now.

First, my goal is to assemble one object file which I use as an array in another program. This means the compiled object will have one data section (I remap the code section into a data section using some directives) and one export, and when it is linked into this other program, the export (the entry point of the code) is an array.

To do this I have one file I assemble (main.asm) which includes all the other files in my project.
From my observations, include works the following way:

<... code ...>
include coolfunction.asm
<... more code ...>

The content of coolfunction.asm then gets inserted between the surrounding code, making one virtual file. The advantage of this is that I can specify exactly where in my code the external code should be placed.

This means I can write something like:
include entryofcode.asm
include nextpartofthecode.asm
include endofcode.asm

and I am guaranteed that the code is placed in the order I want, so that the program will work afterwards.

Now to the problem:

The problem is that the project keeps growing, constantly gaining more and more files, which makes the build process slower and slower. Since I use include for every file (instead of building separate object files for later linking), every file gets processed over and over again on each assembly, including the files which were not changed.
I have been thinking about an improved way, but so far I haven't come to any conclusions. Essentially what I need is some pre-object-file step, so that I can still build everything as one object file without needing to reprocess every file which wasn't changed. In the past I used another approach which made this possible, but it was even slower: I assembled every file, linked the result into an exe file, and then wrote a post-processor which ripped the code out of the exe. This worked too, but it was really slow to build and rip every time, even though it was done automatically.

I hope someone has some comments on this. I apologize if this is a bit unclear, but it's not easy to describe precisely what goes on.

// CyberHeg
Posted on 2002-09-07 03:01:20 by CyberHeg
There are a few ways you can approach the problem. You can get the old version of NMAKE from Microsoft and use its date/time based build technique; this means you must learn its syntax to be able to write make files.
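To illustrate the idea: NMAKE rebuilds a target only when its timestamp is older than one of its dependents, so unchanged modules are skipped. A minimal sketch of a makefile, assuming MASM's ml.exe and Microsoft LINK; the module names (entry.asm, cool.asm) are made up for the example:

```makefile
# NMAKE rule format:  target : dependents
#                     <TAB> command
# A command runs only when the target is older than a dependent.

all: project.exe

entry.obj: entry.asm
	ml /c /coff entry.asm

cool.obj: cool.asm
	ml /c /coff cool.asm

project.exe: entry.obj cool.obj
	link /subsystem:console entry.obj cool.obj
```

Touch only cool.asm and NMAKE re-assembles cool.obj and relinks, leaving entry.obj alone.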

You can build libraries from the completed modules you are satisfied with, assemble only the newer ones, then link them all together to make the project.
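A rough sketch of that workflow, assuming the MASM32 tools (ml.exe, lib.exe, link.exe); the file names are hypothetical:

```shell
rem Assemble the finished modules once and archive them in a library
ml /c /coff stable1.asm stable2.asm
lib /out:stable.lib stable1.obj stable2.obj

rem During development, only the module still changing is re-assembled
ml /c /coff newcode.asm
link /subsystem:console newcode.obj stable.lib
```

The library only needs rebuilding when one of the stable modules actually changes.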

You can use response files to handle the increasing command line length, and during development, assemble only the changed files, then link them together to make the finished product.
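A response file is just a text file of command-line arguments, passed to the tool with an @ prefix; Microsoft LINK accepts this form. A sketch with made-up file names:

```shell
rem Contents of link.rsp (one argument per line):
rem   /subsystem:console
rem   /out:project.exe
rem   entry.obj
rem   cool.obj

link @link.rsp
```

This keeps the growing object list out of the command line entirely; you append new .obj names to link.rsp as the project grows.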

Posted on 2002-09-07 03:26:52 by hutch--