hi,

how do I find out how long it takes to execute some code in a program? I mean, GetTickCount just counts milliseconds... but what if some code doesn't even take a millisecond? What do I do then? What's the best method?

and: how do I find out how "expensive" different instructions are? e.g. I read somewhere that adding two values is faster than multiplying them... where can I check that?

thanks in advance..
-NOP-
Posted on 2002-06-17 13:10:24 by NOP-erator
Use a profiler (Intel VTune or AMD CodeAnalyst), or use Maverick's Profiler.

As for add being faster than mul: it's because with add you can restructure the code and break dependencies... of course, an add chain as long as the one reported in the other thread isn't exactly optimization... 234 add instructions??? mul is much better than that... replacing mul with add only shines when multiplying by small values like 3*3, 4*2, 1000*4... :)
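
Just to illustrate, here's a rough sketch of that kind of replacement for x*3 (the proc names are made up for illustration; it should assemble standalone with ml /c):

.586
.model flat, stdcall

.code

; eax = eax*3 with a multiply
times3_mul proc
    imul eax, eax, 3
    ret
times3_mul endp

; eax = eax*3 with adds (or the classic lea form below)
times3_add proc
    mov  ecx, eax            ; keep a copy of x
    add  eax, eax            ; eax = 2*x
    add  eax, ecx            ; eax = 3*x
    ; equivalent:  lea eax, [ecx+ecx*2]
    ret
times3_add endp

end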
Posted on 2002-06-17 13:20:29 by stryker
hi,

thanks... I think the profiler is exactly what I was looking for... I'll try it out later... thank you again...

-NOP-
Posted on 2002-06-17 13:42:05 by NOP-erator
You can also get a basic idea of how long each instruction takes with the /Sc MASM switch. This adds the instruction timing to the listing, depending on the processor directive in effect, i.e. .386, .486, .586, etc.
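
Here's a minimal sketch of that workflow, assuming ML 6.x style switches (the file name is made up):

; assemble with:   ml /c /Fl /Sc timing.asm
;   /Fl - write a listing file (timing.lst)
;   /Sc - add instruction timings to that listing, based on the
;         processor directive in effect

.586                         ; timings in the listing will be for this CPU
.model flat, stdcall

.code
    add  eax, ebx            ; the listing shows a cycle count next to this line
    imul eax, ebx            ; ...and a (larger) one next to this
end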

But this is only a guide. Pipelined CPUs can often execute 2 or more instructions at a time. :)
Posted on 2002-06-17 14:13:40 by S/390
QueryPerformanceCounter provides higher resolution than GetTickCount
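
Here's a minimal sketch, assuming the MASM32 package is installed at the usual \masm32 paths; the loop in the middle is just a placeholder for whatever code you want to time:

.386
.model flat, stdcall
option casemap:none

include \masm32\include\windows.inc
include \masm32\include\kernel32.inc
includelib \masm32\lib\kernel32.lib

.data?
    freq    dq ?             ; counter frequency (ticks per second)
    t0      dq ?             ; counter value before the timed code
    t1      dq ?             ; counter value after the timed code

.code
start:
    invoke QueryPerformanceFrequency, addr freq
    invoke QueryPerformanceCounter, addr t0

    ; --- the code you want to time goes here ---
    xor  eax, eax
    mov  ecx, 1000000
@@: add  eax, 1
    dec  ecx
    jnz  @B
    ; -------------------------------------------

    invoke QueryPerformanceCounter, addr t1

    ; elapsed seconds = (t1 - t0) / freq, all 64-bit values

    invoke ExitProcess, 0
end start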
Posted on 2002-06-18 01:13:24 by grv575