Greetings. I have a program written with parallel modules optimized for the Pentium MMX, Pentium III, Pentium 4, and Athlon XP. It tests the processor and launches the appropriate main routine: if SSE2, use the Pentium 4 routine (which will also run on the Pentium M and Athlon 64); else if SSE, then if AMD use the Athlon XP routine, else the Pentium III routine; else if MMX, use the Pentium MMX routine (which will also run on the AMD K6-2, etc.).

The other subroutines and handlers in the program are identical. So, instead of having to maintain and compile 4 different programs when making a change, for example to the save routine, I only need to make one change. Instead of four HLA programs of about 30KB each, each compiling to a 25KB exe, there is now just one 68KB program compiling to 33KB. Good so far: that saves half the disk space.

Now, this program is used in distributed computing, the manual way: I compile different versions to tackle different parts of the set, 15 in fact. But as you can imagine, each version differs in only about 10 instructions (multiplied by four for the four processor-optimized modules). I have thus written each program as a 1K file containing the 10 statements as a #macro, plus a single #include of the main program, which expands the macro at the four points. This brings the total source size down to 68+15 KB instead of 15*68 KB. Executable sizes do not change. Disk space is reduced by two thirds.
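To make the setup concrete, here is a rough sketch of what one of my fifteen 1K stub files looks like (file and macro names are made up for illustration; the real part-specific instructions are of course different):

```
// part07.hla -- stub for part 7 of the search space

#macro partSpecific;

    // The ~10 instructions unique to this part,
    // e.g. loading this part's starting value:
    mov( 7, eax );

#endmacro

// Pull in the shared main program, which invokes
// partSpecific at the four processor-specific points:
#include( "mainprog.hla" )
```

Each of the 15 stubs differs only in the body of the macro; mainprog.hla never changes.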

I compile the 15 different HLA files from a batch file. Is there any way to write instead a single HLA file with all the macros that results in the creation of all 15 files?

Posted on 2005-10-03 11:23:32 by V Coder
Look up the #while and #for compile-time language statements and see if they will do the job for you.
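For example, something along these lines might get you started (a minimal sketch from memory of the compile-time language syntax; check the HLA reference manual for the exact forms, and note that one compilation still produces one executable, so you would combine this with a symbol defined on the command line, e.g. via -d, to select the part):

```
// PART would be defined externally, e.g. "hla -dPART:=7 ..."

?i := 0;                    // compile-time variable
#while( i < 4 )

    // Body is expanded once per iteration at compile time;
    // emit the part-specific statements for module i here.
    mov( PART, eax );

    ?i := i + 1;

#endwhile
```

The batch file then shrinks to 15 one-line invocations of the compiler against a single source file, instead of 15 separate stub files.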
Randy Hyde
Posted on 2005-10-06 08:56:23 by rhyde