Hi, all
I want to delay 0.5 ms. How do I calculate the value of "DELAY500US"?
My CPU is an AMD 2500+, FSB 166 MHz.
Thanks in advance,
RGS!
(PS: On my WinXP machine, how many machine cycles does each dec instruction cost, and how much time does each machine cycle take?)
;MACROS
DO_DELAY500US MACRO
    mov eax, DELAY500US     ; loop count for ~500 microseconds
    .while eax
        dec eax
    .endw
ENDM
.const
DELAY500US EQU ?            ; value to be calculated
I would propose that you use Sleep instead.
Hi, roticv,
Thanks for your advice.
But the Sleep function takes its interval in whole milliseconds, so it can't delay for less than 1 ms.
RGS!
What about using QueryPerformanceCounter + QueryPerformanceFrequency?
Hi, Japheth,
Thanks for the advice.
I have tested this. My machine does not support the high-performance timer: QueryPerformanceCounter returns zero. Some older PCs may have the high-performance timer.
RGS!
OK, you could use CreateWaitableTimer/SetWaitableTimer then. Theoretically, if used as a one-shot, this timer has a resolution of 100 nanoseconds. If this doesn't work, one could:
- get a timer with CreateWaitableTimer/SetWaitableTimer, set it to 4 ms
- run your increment-variable loop until the timer signals
- then divide the resulting variable by 8
Of course, if you are in a multi-threaded environment, you have to ensure that no thread switch occurs during your loop.
Hi, Japheth,
Thanks.
You are referring to the multimedia timers, which I have never programmed. It sounds like a complex task involving creating events and threads, but I think I could deal with it in one or two days.
BTW: if I want to use a delay loop in a MACRO, as I stated in my first post (which I think is a simpler and more elegant way), can't that be achieved?
With BST RGS!
Hello Luckrock,
AFAIK the term "multimedia timers" refers to the timer functions implemented in winmm.dll (the function names beginning with timeXXX). OTOH, CreateWaitableTimer/SetWaitableTimer are kernel functions, and to use them you don't need to create threads or events.
Of course you could just run a loop without any Win32 function, but then the duration of the loop will depend on the machine speed, which is commonly regarded as "poor" programming.
Regards
And not only that: context switching makes it inaccurate, and you are making your processor run at 100% for no reason.
Hi, Japheth,
Well, that was my misunderstanding.
Yes, it's a poor programming method.
But anyway, assembly language is, fundamentally, a low-level language, dependent on the hardware; its instructions differ from one platform to another.
The reason I dislike MFC and other high-level encapsulations is that they hide all the inner workings, and their syntax is long and complex.
In comparison, win32asm is clearer, franker, and more direct to the hardware and the system; if it weren't so, I would rather go back to my earlier Visual C++ (Win32 API) programming.
OK, back to the question:
For example, if the CPU speed is 100 MHz and a dec instruction costs 2 machine cycles, one iteration takes 2/100 MHz = 0.02 µs, so for my 0.5 ms delay I need 0.5 ms / 0.02 µs = 25,000 iterations. Subtracting a little for the loop entry and exit instructions, about 24,990 iterations should be suitable.
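The arithmetic above, as a worked sketch (the function name and parameters are illustrative, not from the thread):

```c
/* Iterations needed for a given delay: total cycles required divided
 * by cycles per loop iteration. At 100 MHz with a 2-cycle dec,
 * 0.5 ms needs 0.0005 s * 100e6 Hz / 2 = 25,000 iterations, before
 * subtracting a few for loop entry/exit overhead. */
unsigned long delay_iterations(double cpu_hz, double cycles_per_iter,
                               double delay_seconds)
{
    double total_cycles = delay_seconds * cpu_hz;            /* cycles needed */
    return (unsigned long)(total_cycles / cycles_per_iter + 0.5); /* round */
}
```

On the AMD 2500+ the same formula applies with the actual core clock (not the 166 MHz FSB) and the actual per-iteration cycle cost, which is exactly the open question of this thread.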
We all know that Windows NT/XP is not a real-time system; it is a multitasking one.
My purpose is to output a signal pulse through a port using the delay loop. The electrical pulse has a minimum stable-time requirement (apart from the rising and falling edges), and I will check this pulse from an external device. So in my case it doesn't matter if the delay gets longer, as long as it doesn't get shorter. Context switching is not an obstacle, and the CPU speed could be obtained from some Win32 API calls and then used as a parameter for my timing loop.
And the question remains: how much time does each dec instruction cost on my AMD Barton 2500+ processor with a 166 MHz FSB, and how do I set the DelayTime_500us constant?
Thanks Very Much.
With BST RGS!