Hello,

Is it possible for a _real_ randomizer to quickly generate many (millions+) random numbers without repeating the sequence, while the whole variation range is exploited? I had a quick look at C's _rand but it's just a formula. Mixing results from a variety of time functions also came to mind, but their problem is the very slow change of the result (one second), thus generating lots of the same 'random' numbers in one chunk. I'd be very grateful if there's a real random algo.

Perhaps I should attach an analog component unit to my Intel, sigh ;o)

You can't get a real random algo.

I wonder though, if you could get access to the raw binary feed that comes in through the modem and at the same time pick up your phone and blow into it, would that create a random binary string? Hmmmmm, maybe not :(


**Stephen Wolfram**, the guy who wrote Mathematica and a book I'm reading (A New Kind of Science), uses cellular automata for random numbers/data. Basically, put some pseudo-random data in a buffer as large as needed - I like to use:

```
    RDTSC
_2: rcr eax, 1
    jnc _4
    RDTSC
_4: adc eax, edx
    mov [edi + esi*4], eax
    dec esi
    jns _2
```

...sensitive dependence on initial conditions is the only reason I use this code. After the buffer is initialized, use some automaton rule that produces pseudo-randomness. Here is the code I created based on the description in the book:

```
    ; load rule lookup register
    movzx eax, BYTE PTR [ebx].rule.rule
    ; all four bytes are the same!
    mov edx, 01010101h
    mul edx
    mov ebx, eax

    mov ebp, [esi + ecx*4]
    bswap ebp
    mov edx, ebp    ; ?
    rol edx, 2
    ; xor edx, edx  ; ?
    inc ecx
_3:
    ; edx = ...| -2 | -1 | 0
    shr edx, 2      ; get bit -1
    rcl ebp, 3      ; 3 |...| 31 | -1 | 0 | 1
    ; save bit 2
    adc edx, edx    ; ? |...| ? | 2
    ; rule bit 0
    bt ebx, ebp
    rcl eax, 1
    ; put bit 2 back
    rcr edx, 1      ; ? |...| ?
    rcr ebp, 3      ; 0 | 1 | 2 | ... | 31
    rol ebp, 3      ; 3 |...| 31 | 0 | 1 | 2

    REPEAT 32-3
    ; rule bits 1 through 29
    bt ebx, ebp
    rcl eax, 1
    rol ebp, 1
    ENDM

    bt ebx, ebp
    rcl eax, 1
    mov edx, [esi + ecx*4]
    bswap edx
    shld ebp, edx, 1
    ; rule bit 31
    bt ebx, ebp
    rcl eax, 1
    xchg edx, ebp
    bswap eax
    mov [edi + ecx*4 - 4], eax
    inc ecx
    jne _3
```

It is optimized a little. The resulting bit is based on the previous bit and the bits on either side of that bit. Attached is a program that produces lots of random data. Note how local features can be seen, but no long-range features are predictable (Rule 105). There are a variety of rules that produce different-sized features, depending on what scale of randomness is needed.
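For readers who want the idea without the assembly, here is a minimal C sketch of one elementary cellular automaton step over a 32-bit word with wraparound. This is my own illustration, not bitRAKE's code; `ca_step` and the neighbor ordering are assumptions. Each output bit is looked up in the 8-bit rule number, indexed by its 3-bit (left, center, right) neighborhood - the same job the `bt ebx, ebp` / `rcl eax, 1` pairs do above:

```c
#include <assert.h>
#include <stdint.h>

/* One step of an elementary cellular automaton (Wolfram rule numbering)
   over a 32-bit word with wraparound. Each output bit is looked up in
   the 8-bit rule table, indexed by its 3-bit neighborhood. */
static uint32_t ca_step(uint8_t rule, uint32_t s)
{
    uint32_t out = 0;
    for (int i = 0; i < 32; i++) {
        unsigned l = (s >> ((i + 1) & 31)) & 1u;   /* left neighbor (wraps)  */
        unsigned c = (s >> i) & 1u;                /* the cell itself        */
        unsigned r = (s >> ((i + 31) & 31)) & 1u;  /* right neighbor (wraps) */
        out |= (uint32_t)((rule >> ((l << 2) | (c << 1) | r)) & 1u) << i;
    }
    return out;
}
```

Iterating `ca_step` with a "random-looking" rule such as 30 or 105 and harvesting, say, one bit of each generation gives a pseudo-random bit stream at whatever scale of features the chosen rule produces.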

I don't quite understand the concept of randomness..

Something may only seem random..?

I mean, like chaotic systems - fractals and such - they seem to produce random results, but there is a system behind them.

So it's not really random, is it? It's not random if there's a system behind it?

If a 'real random algo' could exist theoretically, how would it work?

:stupid:

Good question, **david**. Personally, I say there is no randomness - only complexity. The imponderable. :)

By quantum mechanics and the uncertainty principle there is true randomness....

But string theory has a more convincing explanation, which is much different from the quantum one. ;)

And let's not forget the second law of thermodynamics: entropy.

Regards,

Vortex

When a child comes into the world all things seem random - we are still children in the universe.

RC4 is very good for pseudo-random bit-streams.
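For the curious, here is my own C sketch of the standard published RC4 algorithm (not code from this thread): a 256-byte state array, a key-scheduling pass, and a tiny per-byte update.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t S[256]; uint8_t i, j; } rc4_t;

/* Key-scheduling algorithm (KSA): permute S under the key. */
static void rc4_init(rc4_t *st, const uint8_t *key, size_t keylen)
{
    for (int k = 0; k < 256; k++) st->S[k] = (uint8_t)k;
    uint8_t j = 0;
    for (int k = 0; k < 256; k++) {
        j = (uint8_t)(j + st->S[k] + key[k % keylen]);
        uint8_t t = st->S[k]; st->S[k] = st->S[j]; st->S[j] = t;
    }
    st->i = st->j = 0;
}

/* Pseudo-random generation algorithm (PRGA): one keystream byte. */
static uint8_t rc4_byte(rc4_t *st)
{
    st->i = (uint8_t)(st->i + 1);
    st->j = (uint8_t)(st->j + st->S[st->i]);
    uint8_t t = st->S[st->i]; st->S[st->i] = st->S[st->j]; st->S[st->j] = t;
    return st->S[(uint8_t)(st->S[st->i] + st->S[st->j])];
}
```

XORing the keystream with data reproduces the classic published test vector: key "Key" and plaintext "Plaintext" give ciphertext BB F3 16 E8 D9 40 AF 0A D3.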

What do you mean by random?

- hard for an attacker to predict: gather all sorts of entropy, stuff it in a buffer, and (crypto) hash it. See Linux's drivers/char/random.c.

- totally non-deterministic: I've read about a hardware RNG in Intel's 845 chipset (principle: sample thermal noise). That would be ideal, if everyone had it, and I knew how to get at it ;)

- the intuitive definition: just about any PRNG will do (although these are deterministic 'formulas'). The Mersenne Twister is one such algorithm that's better (longer period, better distribution of especially the low bits) than VC's rand(), albeit slower.

HTH

Jan

Hello,

Sure, the only way to get a real randomizer is additional hardware.

I wasn't fully clear on what to look for at first; for now the RANROT/Mother algo, which is claimed to have an extremely long period and quite chaotic behaviour, seems sufficient, though pseudo-random.

Also, I was thinking the seed should be de-synchronized after a certain pass count, e.g. by a time value, since background traffic makes the intervals slightly different. I think the de-sync frequency is indirectly related to the quality of the random generating function, and it may be useless for this algorithm.

I've just started to doubt whether it won't make things worse, given that the data I'm trying to decrypt could have been generated by the standard _rand() func. ;o)

Thanks for all the info ...

servil

Hi All,

I read a very interesting paper on the nature of randomness and a comparison of two effective pseudo random number generators (PRNGs), the Park-Miller algorithm which many are probably aware of, but also one known as The Mersenne Twister which appears to have some advantages over the former.

The Twisted Road to Randomness

http://www.coyotegulch.com/algorithm/libcoyote/TwistedRoad/TwistedRoad.html

I've been researching Genetic Algorithm (GA) programming lately, as used as a stochastic model in applications such as AI, neural net, fractal generation, many forms of predictive analysis and so on. I'm fascinated in how it's implemented as a generational model of natural populations, using the concepts of population size, fitness and reproduction success, crossover and mutation rates, all creating a method of problem solving.

Naturally a good random number generator is of prime (no pun) interest in creating an initial random population of values to begin with. I've been working on a GA model to use as an input application to play with any number of GA ideas, perhaps graphic representation or fractals at some point, perhaps encryption algos, perhaps just creating my own little menagerie of creatures, a zoo of bits and bytes... In any case...

The paper I mentioned describes the Park-Miller PRNG (pseudo random number generator) algo and its optimum parameters:

-------------------

Ni+1 = (a * Ni + c) mod m

"Numerical research by S. K. Park and K. W. Miller has identified a theoretical "best" set of parameters. For the linear congruential algorithm to be effective, a and m can take on only a very few values, with m most certainly being prime.

Park and Miller identified the parameter values a = 16807, m = 2147483647, c = 0 as producing the most statistically-random values for 32-bit signed integers. For producing 16-bit values, a good set of parameters is a = 171, m = 30269, c = 0. If an application requires 32-bit numbers, Park and Miller suggested a = 48271 or a = 69621."

-------------------

Thanks to **NaN** for the Park-Miller algo source I've been using, but I'd mention that the 'magic numbers' used (16807, 127773, 2836 if I'm interpreting it right) are slightly different from the ones recommended above. I haven't got a way of accurately testing them one way or the other at the moment. Any PRNG will eventually generate its initial seed again and the values will start repeating themselves. For a small population this isn't critical, but for some applications it could be.
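A note on those 'magic numbers': 127773 and 2836 are not alternative parameters but the Schrage factorization of m = 2147483647 for a = 16807 (q = m div a = 127773, r = m mod a = 2836), which lets a*seed mod m be computed in 32-bit arithmetic without overflow. A minimal C sketch (the names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Park-Miller "minimal standard" LCG via Schrage's method. */
#define PM_M 2147483647   /* m = 2^31 - 1, prime        */
#define PM_A 16807        /* multiplier a               */
#define PM_Q 127773       /* q = m / a (integer divide) */
#define PM_R 2836         /* r = m % a                  */

/* Returns (PM_A * seed) mod PM_M without 64-bit intermediates:
   split seed into hi/lo parts around q so no product exceeds 2^31. */
static int32_t pm_next(int32_t seed)
{
    int32_t hi = seed / PM_Q;
    int32_t lo = seed % PM_Q;
    int32_t t  = PM_A * lo - PM_R * hi;
    return (t > 0) ? t : t + PM_M;
}
```

Park and Miller's published self-check: starting from seed 1, the 10,000th value generated should be 1043618065.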

The Mersenne Twister algorithm sounds interesting as an alternative to the Park-Miller:

http://www.math.keio.ac.jp/~matumoto/emt.html

http://www.math.keio.ac.jp/~nisimura/random/doc/mt.pdf

-----------------

..appealing aspects of the Mersenne Twister is its use of binary operations -- as opposed to time-consuming multiplication -- for generating numbers.

.. algorithm's period is ~10^6001, as compared to a period of ~10^8 for the best variants of the linear congruential methods -- in other words, for practical purposes, the Mersenne Twister doesn't repeat itself.

..the basic algorithm generates 32-bit integers, which provide a greater granularity than 16-bit linear congruential generators. Granularity allows finer distinctions between individual values and is particularly important when dealing with floating-point numbers.

Smaller granularity means a broader range of results, which is required by stochastic algorithms that solve large problems to a fine degree.

-----------------
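For comparison with the quoted excerpts, the core of the published C reference (mt19937ar-style, paraphrased here from memory - treat it as a sketch and check the links above for the authoritative source) is quite compact:

```c
#include <assert.h>
#include <stdint.h>

#define MT_N 624
#define MT_M 397

static uint32_t mt[MT_N];   /* the state vector */
static int mti = MT_N + 1;  /* MT_N+1 means mt[] is not initialized */

static void init_genrand(uint32_t s)
{
    mt[0] = s;
    for (mti = 1; mti < MT_N; mti++)
        mt[mti] = 1812433253u * (mt[mti - 1] ^ (mt[mti - 1] >> 30))
                  + (uint32_t)mti;
}

static uint32_t genrand_int32(void)
{
    static const uint32_t mag01[2] = { 0u, 0x9908b0dfu };
    uint32_t y;

    if (mti >= MT_N) {              /* regenerate MT_N words at once */
        int kk;
        if (mti == MT_N + 1)
            init_genrand(5489u);    /* default seed */
        for (kk = 0; kk < MT_N - MT_M; kk++) {
            y = (mt[kk] & 0x80000000u) | (mt[kk + 1] & 0x7fffffffu);
            mt[kk] = mt[kk + MT_M] ^ (y >> 1) ^ mag01[y & 1u];
        }
        for (; kk < MT_N - 1; kk++) {
            y = (mt[kk] & 0x80000000u) | (mt[kk + 1] & 0x7fffffffu);
            mt[kk] = mt[kk + (MT_M - MT_N)] ^ (y >> 1) ^ mag01[y & 1u];
        }
        y = (mt[MT_N - 1] & 0x80000000u) | (mt[0] & 0x7fffffffu);
        mt[MT_N - 1] = mt[MT_M - 1] ^ (y >> 1) ^ mag01[y & 1u];
        mti = 0;
    }

    y = mt[mti++];
    y ^= (y >> 11);                 /* tempering */
    y ^= (y << 7) & 0x9d2c5680u;
    y ^= (y << 15) & 0xefc60000u;
    y ^= (y >> 18);
    return y;
}
```

A handy self-check for any port: seeded with 5489, the 10,000th output of the reference generator is 4123659995.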

Unfortunately there's no sweet MASM implementation of this code... Yet?? !! :D

Cheers,

Kayaker

EDIT: I see Jan already mentioned the Mersenne Twister, maybe we can learn more?

I don't think it's MASM, but Agner Fog distributes his Mersenne Twister source code here along with others. Most already know of Ranrot.

My mistake, on closer reading there is a MASM 6 version of the Mersenne Twister written by Sterling Stuart Stein

http://www.math.keio.ac.jp/~matumoto/CODES/S3mt.zip

Thanks for the Agner link as well, very nice.

Kayaker

There have been some changes to the array initialization code to improve the randomness:

http://www.math.keio.ac.jp/~matumoto/MT2002/emt19937ar.html

http://www.math.keio.ac.jp/matumoto/CODES/MT2002/mt19937ar.c

The conversion to ASM is rather straightforward:

```
init_genrand PROC seed:DWORD
    ; See Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
    ; In the previous versions, MSBs of the seed affect
    ; only MSBs of the array mt[].
    ; 2002/01/09 modified by Makoto Matsumoto
    mov eax, seed
    mov ecx, 0
    ; mt[mti] = (1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti);
@@: mov [mt + ecx*4], eax
    inc ecx
    cmp ecx, N
    je @F
    mov edx, eax
    shr eax, 30
    xor eax, edx
    mov edx, 6C078965h
    mul edx
    add eax, ecx
    jmp @B
@@: mov [mti], ecx
    ret
init_genrand ENDP
```

Here is my conversion:

```
; Period parameters
N           EQU 624
M           EQU 397
MATRIX_A    EQU 09908B0DFh ; constant vector a
UPPER_MASK  EQU 080000000h ; most significant w-r bits
LOWER_MASK  EQU 07FFFFFFFh ; least significant r bits

_BSS SEGMENT
mti dd ?        ; mti==N+1 means mt[N] is not initialized
mt  dd N dup(?) ; the array for the state vector
_BSS ENDS

ALIGN 16
init_genrand PROC seed:DWORD
    ; See Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
    ; In the previous versions, MSBs of the seed affect
    ; only MSBs of the array mt[].
    ; 2002/01/09 modified by Makoto Matsumoto
    mov eax, seed
    xor ecx, ecx
    mov seed, 6C078965h
ALIGN 16
    ; mt[mti] = (1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti);
@@: mov [mt + ecx*4], eax
    inc ecx
    cmp ecx, N
    je @F
    mov edx, eax
    shr eax, 30
    xor eax, edx
    mul seed
    add eax, ecx
    jmp @B
@@: mov [mti], -N
    ret
init_genrand ENDP

ALIGN 16
; generates a random number on [0,0xffffffff]-interval
genrand_int32 PROC
    mov eax, [mti]
    inc [mti]
    js get_rand
    ; leave this to the programmer!
;   je @F                       ; if init_genrand() has not been called,
;   invoke init_genrand, 5489   ; a default initial seed is used
;@@:
    ; generate N words at one time
    ;-----------------------------
    ; for (kk=0;kk<N-M;kk++) {
    ;     y = (mt[kk] & UPPER_MASK) | (mt[kk+1] & LOWER_MASK);
    ;     mt[kk] = mt[kk+M] ^ (y >> 1) ^ mag01[y & 0x1UL];
    ; }
    mov ecx, -(N-M)
MT EQU <(mt+(N-M)*4)>           ; forward through memory
_1: mov edx, [MT + ecx*4]
    mov eax, [MT + ecx*4 + 4]
    and edx, UPPER_MASK
    and eax, LOWER_MASK

    or eax, edx                 ; y

    shr eax, 1
    sbb edx, edx
    xor eax, [MT + ecx*4 + M*4]
    and edx, MATRIX_A

    xor eax, edx
    mov [MT + ecx*4], eax
    inc ecx
    jne _1

    ; for (;kk<N-1;kk++) {
    ;     y = (mt[kk]&UPPER_MASK)|(mt[kk+1]&LOWER_MASK);
    ;     mt[kk] = mt[kk+(M-N)] ^ (y >> 1) ^ mag01[y & 0x1UL];
    ; }
    mov ecx, -(M-1)             ; last one wraps around
MT EQU <(mt+N*4-4)>             ; forward through memory
_2: mov edx, [MT + ecx*4]
    mov eax, [MT + ecx*4 + 4]
    and edx, UPPER_MASK
    and eax, LOWER_MASK

    or eax, edx                 ; y

    shr eax, 1
    sbb edx, edx
    xor eax, [MT + ecx*4 + (M-N)*4]
    and edx, MATRIX_A

    xor eax, edx
    mov [MT + ecx*4], eax
    inc ecx
    jne _2

    ; y = (mt[N-1]&UPPER_MASK)|(mt[0]&LOWER_MASK);
    ; mt[N-1] = mt[M-1] ^ (y >> 1) ^ mag01[y & 0x1UL];
    mov edx, [mt + N*4 - 4]
    mov eax, [mt]
    and edx, UPPER_MASK
    and eax, LOWER_MASK

    or eax, edx                 ; y

    shr eax, 1
    sbb edx, edx
    xor eax, [mt + (M-1)*4]
    and edx, MATRIX_A

    xor eax, edx
    mov [mt + N*4 - 4], eax
    mov eax, -N
    mov [mti], -(N - 1)

get_rand:
    ; EAX in interval [-N, -1]
MT EQU <(mt+N*4)>               ; forward through memory
    mov eax, [MT + eax*4]
    mov edx, eax
    shr eax, 11
    xor eax, edx
    mov edx, eax
    shl eax, 7
    and eax, 09D2C5680h
    xor eax, edx
    mov edx, eax
    shl eax, 15
    and eax, 0EFC60000h
    xor eax, edx
    mov edx, eax
    shr eax, 18
    xor eax, edx
    ret
genrand_int32 ENDP

; generates a random number on [0,0x7fffffff]-interval
genrand_int31 MACRO
    call genrand_int32
    shr eax, 1
ENDM
```

...I haven't tuned it up, yet.

Make sure to initialize the array:

```
invoke init_genrand, 19730905
invoke genrand_int32
```

I'd like to optimise the loops, but I'm not sure that the following code doesn't adversely affect the randomness of the algorithm:

```
_1: mov edx, [MT + ecx*4]
    mov eax, [MT + ecx*4 + 4]
    add edx, edx
    rcr eax, 1
    sbb edx, edx
    xor eax, [MT + ecx*4 + M*4]
    and edx, MATRIX_A
    xor eax, edx
    mov [MT + ecx*4], eax
    inc ecx
    jne _1
```

Anyone know a way to test? My math skills are lacking in this regard.
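One way to test is to model both instruction sequences in C and compare them against the reference recurrence. This is my own sketch; the flag semantics in the comments are how I read the add/rcr/sbb sequence, so double-check them. If the model is right, the shortcut is not bit-identical to the reference: `rcr` puts the saved top bit of mt[kk] at bit 31 instead of bit 30, and lets bit 31 of mt[kk+1] slip into bit 30 instead of being masked off. The two agree only when both top bits are clear:

```c
#include <assert.h>
#include <stdint.h>

#define UPPER_MASK 0x80000000u
#define LOWER_MASK 0x7fffffffu
#define MATRIX_A   0x9908b0dfu

/* Reference MT19937 twist for one element of the state. */
static uint32_t twist_ref(uint32_t cur, uint32_t next, uint32_t other)
{
    uint32_t y = (cur & UPPER_MASK) | (next & LOWER_MASK);
    return other ^ (y >> 1) ^ ((y & 1u) ? MATRIX_A : 0u);
}

/* Flag-level model of the add/rcr/sbb shortcut:
   add edx,edx -> CF = bit 31 of cur
   rcr eax,1   -> eax = (CF << 31) | (next >> 1); CF = old bit 0 of next
   sbb edx,edx -> edx = CF ? 0xFFFFFFFF : 0, then ANDed with MATRIX_A */
static uint32_t twist_opt(uint32_t cur, uint32_t next, uint32_t other)
{
    uint32_t eax = ((cur >> 31) << 31) | (next >> 1);
    uint32_t edx = (next & 1u) ? MATRIX_A : 0u;
    return other ^ eax ^ edx;
}

/* Probe a handful of corner cases; returns 1 if any pair disagrees. */
static int variants_differ(void)
{
    static const uint32_t probe[] = { 0u, 1u, 0x40000000u, 0x7fffffffu,
                                      0x80000000u, 0xffffffffu };
    for (int a = 0; a < 6; a++)
        for (int b = 0; b < 6; b++)
            if (twist_ref(probe[a], probe[b], 0u) !=
                twist_opt(probe[a], probe[b], 0u))
                return 1;
    return 0;
}
```

So the variant still mixes bits, but it is no longer the MT19937 recurrence, and the published guarantees about period and equidistribution would no longer apply.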
Here are some macros for other types of random number ranges with floats:

```
; generates a random number on [0,0x7fffffff]-interval
genrand_int31 MACRO
    invoke genrand_int32
    shr eax, 1
ENDM

; generates a random number on [0,1)-real-interval
genrand_real0 MACRO
    invoke genrand_int32
    push 0
    push eax
    fild QWORD PTR [esp]
    ; = int32 / 2^32
    fmul fpc(REAL8 2.3283064365386962890625e-10)
    add esp, 8
ENDM

; generates a random number on [0,1]-real-interval
genrand_real1 MACRO
    invoke genrand_int32
    push 0
    push eax
    fild QWORD PTR [esp]
    ; = int32 / (2^32 - 1)
    fmul fpc(REAL8 2.3283064370807973754314699618685e-10)
    add esp, 8
ENDM

; generates a random number on [0,1] with 63-bit resolution
genrand_res63 MACRO
    invoke genrand_int32
    push eax
    invoke genrand_int32
    shr DWORD PTR [esp], 1
    push eax
    fild QWORD PTR [esp]
    ; = int63 / 2^63
    fmul fpc(REAL8 1.0842021724855044340074528008699e-19)
    add esp, 8
ENDM

; generates a random number on [0,1) with 63-bit resolution
genrand_res63_b MACRO
    sub esp, 12
    invoke genrand_int32
    mov [esp+4], eax
    invoke genrand_int32
    mov DWORD PTR [esp], eax
    or DWORD PTR [esp+4], 80000000h ; must set high bit
    mov DWORD PTR [esp+8], 3FFFh
    fld1
    fld TBYTE PTR [esp]
    fsubr
    add esp, 12
ENDM
```
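For reference, the long constants in those macros are 2^-32, 1/(2^32 - 1), and 2^-63. A C sketch of the same conversions (the function names are mine; `res63` mirrors `genrand_res63`, which halves the first 32-bit draw into the high dword):

```c
#include <assert.h>
#include <stdint.h>

/* 2.3283064365386962890625e-10    = 2^-32         */
/* 2.32830643708079737543...e-10   = 1/(2^32 - 1)  */
/* 1.08420217248550443400...e-19   = 2^-63         */

static double real0(uint32_t r)   /* [0,1)  : r / 2^32 */
{
    return r * (1.0 / 4294967296.0);
}

static double real1(uint32_t r)   /* [0,1]  : r / (2^32 - 1) */
{
    return r * (1.0 / 4294967295.0);
}

static double res63(uint32_t a, uint32_t b)  /* 63-bit resolution */
{
    uint64_t x = ((uint64_t)(a >> 1) << 32) | b;       /* 63-bit integer */
    return (double)x * (1.0 / 9223372036854775808.0);  /* x / 2^63 */
}
```

Note that a double only carries 53 bits of mantissa, so the 63-bit variants buy resolution near zero, not 63 significant bits everywhere.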

I did a little testing. 2^26 random numbers takes just over a second on a 1.3GHz Athlon processor (~20 cycles each).

...and I did an MMX version that is >15% faster (needs some more work).

Thanks for the translation, bitRAKE - now I can interpret the algorithm in good old M(othertongue)ASM ;)

I was able to do some testing with it and the results look good. To start with, it's roughly twice as fast as the Park-Miller algo. I tested your code of the updated Mersenne Twister MT19937 algo, the Park-Miller (NaN's code), and the older MT19937 version in the Sterling code I mentioned above (but with the improved *init_genrand* initial seed proc).

I measured the execution speed using lpPerformanceCount of a larger proc that used each of these PRNGs to produce a random population of 10,000 dword hex strings, results averaged over 100 generations.

MT19937 genrand_int32 - 10.9 ms

older MT19937 Rand - 11.0 ms

Park-Miller - **22.4 ms**

I'm not sure how best to test for "randomness", but I tried to take some kind of measurement by testing for duplication of a random byte in a single byte position in each of these 10,000 dwords. The hex strings were limited to 4-byte ASCII strings from 20h to 7Ah, so only 90 possible ASCII characters were possible. Averaged over 100 generations, there were roughly 110 out of 10,000 dwords in the array with a single duplicated byte at a particular byte location. This number seemed to be very slightly higher for the Park-Miller algo than for the MT, the difference becoming a bit more noticeable as the size of the array increased.

LOL, That said, I'm not sure if this number means anything in terms of a test for "randomness" though, I was only looking for comparative differences, and it certainly wasn't rigorous. I think there was a thread on graphically representing randomness recently, perhaps a pattern test might show something?

For all intents your code is great unless you find an error, and the Mersenne Twister is certainly much faster than the Park-Miller. Thanks, I'm sure many will find it useful :)

Regards,

Kayaker

bitRAKE,


```
OPTION PROLOGUE:NONE
OPTION EPILOGUE:NONE

ALIGN 16
init_genrand PROC seed:DWORD
    ; See Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
    ; In the previous versions, MSBs of the seed affect
    ; only MSBs of the array mt[].
    ; 2002/01/09 modified by Makoto Matsumoto
    mov eax, [esp+4]        ; seed
    xor ecx, ecx            ; ecx = 0
    ; mt[mti] = (1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti);
@@:
    add eax, ecx
    inc ecx
    mov edx, eax
    shr eax, 30
    mov [mt+ecx*4-4], edx
    xor eax, edx
    imul eax, 6C078965h     ; delay 4
    cmp ecx, N              ; 1 + 2 clocks for taken jmp and
    jl @B                   ; CPU has a job until imul finishes with eax
    mov [mti], ecx
    ret 4
init_genrand ENDP

OPTION PROLOGUE:PROLOGUEDEF ; turn back on the defaults
OPTION EPILOGUE:EPILOGUEDEF
```

Regards,

Lingo