I wish I could have more time to dedicate to this challenge, because as Terab said, it's really amazing stuff.. and even if I'll probably fail to find a better method than NFS, there are many interesting things I will learn in the process. Really worth the effort in any case.

Also, as soon as I know that my method won't be better than NFS, I will release it to the public domain here (in the *very* improbable case that it's better than NFS, I would do it a bit later ;) ).

In this case it's not the final destination that makes it worthwhile, it's the journey. I do, however, pity the few trees that were killed in order to provide me a medium for my number scribblings :-)

Unfortunately my method has got an m^n function (but it's in base 2 - so no problems there), but it's also got a divide - erk, in fact more than one - so not too good.

If for nothing else, this exercise was worth it as it made me remember that instead of subtracting, you can just add the 2's complement of the subtrahend to the number you wanted to subtract from - somehow I'd forgotten that little gem.

regards

terab

That was the good news, the bad news is that it's still very complex, and I am not ready to start coding this stuff yet. Now you may think, as some did about Giovanni, that I'm just deluding myself, scribbling numbers until some of the patterns that always exist become visible (though mostly not useful).

Not so! My way of using the patterns is entirely different.

His way is also valid, but relies too much on trial and error correction.

But remember that this was the very reason he first contacted this forum, because he wanted a better method developed. Personally I'm surprised that he could use these patterns at all, in the decimal number system.

I've used boolean algebra to solve a few super-simple cases, thereby gaining some insight into the relationships that make it possible to find factors of a product without having to repeatedly divide it (just as Giovanni claimed). However, the number of terms in the equations for each bit rises rapidly with the bit width of the product. At worst the final equations for a 2048-bit product can contain approx. four million terms, PER RESULTANT BIT. And some of those terms will in turn refer to other expressions rather than just simple bit variables.

Still, compared to 2**1024 divisions that looks pretty good, right?

Also, in most cases various simplifications can be made, using the normal rules for boolean algebra, whenever the primal patterns offer such opportunities. But coding a program to recognize and utilize these will be very demanding indeed.

My method uses two synthetic (algebraic) divisions in parallel, one for each factor. These divisions may in some cases simplify to numeric cases, giving bits of the factors directly, but never all of them. Such bits will however help greatly in reducing the complexity of equations for the other bits. The real goal is to build the set of simultaneous equations that need to be solved to produce the numeric form of the factors. Then comes the task of solving these, mainly by combining and redistributing terms to reduce complexity.

At the end of that process all equations should reduce to expressions of one or two boolean variables. One of them simply chooses which of the two factors to be the lower/higher one, while the other chooses between the factors we really want and the useless set where one is equal to the product and the other is equal to 1. In some cases these can be reduced earlier, but in the worst cases they can't be identified until the end. It depends on the primal patterns, as most things.

NB: The division method needs to match that described in the article on precision division, pointed to by a URL in an earlier post (by bitRAKE I think), except that you need to use base 2. Only then will all patterns clarify, and only then can you always find the modular inverse without problems (since only 1 digit value exists).

Btw:

I suppose this means that the race is on, and fairly soon someone(s) will claim all the RSA prizes. And knowing myself (I always prefer precision over speed) it will probably not be me. Oh, well... that's life too I suppose, and at least I didn't fall for the temptation to break my principles. (I hate knowledge 'hogging'.)

I wish you the best.. but I suggest you be prudent. More than once I was about to post here that my method works great.. today, after the latest improvements, during the first tests I even thought computation had dropped to ~sqrt(N), but then on somewhat tougher tests I saw it rises exponentially + sqrt(N). Actually it solves 10147 in 5 steps (was 6 steps before the latest improvement to my algorithm), while a relatively inefficient, very unfortunate case (but one which *may* happen anyway, so I concentrate my efforts mostly on these exceptions) is the product 1198413947, solved in 5871 steps (was 23468 before the latest improvement). This sucks, if you read about the performance of the NFS method, the current state of the art.

If I think of how complex the algorithm I wrote today is, I think "wow, what a work!".. too bad it didn't perform as I hoped. One step in the right direction, though.

But it's all very intriguing, so I have a newer idea to test as soon as I have some free time.. the only thing that will impress me, anyway, is computation requirements that scale more slowly than N grows. Otherwise, there's no hope.

**I suppose this means that the race is on, and fairly soon someone(s) will claim all the RSA prizes. And knowing myself (I always prefer precision over speed) it will probably not be me. Oh, well... that's life too I suppose, and at least I didn't fall for the temptation to break my principles. (I hate knowledge 'hogging').**

**I doubt that RSA is truly at risk.. but I enjoy the race. Also, my latest method is almost purely binary, ultra efficient.. but this is not what matters: on such big numbers it's the algorithm that matters. Even if each step of the NFS took 1 billion times the CPU cycles that my method takes for one step, when we talk about numbers with 300 digits, how optimized your code is becomes less than irrelevant; it's how the algorithm scales that counts.. only that. So e.g. 1 CPU cycle per step on an algorithm that requires exp(N) steps will be infinitely slower than a billion of billions of billions of billions of CPU cycles per step on an algorithm that requires log(N).**

So it will probably be you the winner. ;)

The good news for us all:

Before working on something, I have the habit of doing some research on the best performance obtainable, at least theoretically.. so as to aim directly at that one, and have an idea of the real status of my work.

I also research stuff before working on anything - I've been burnt badly by not doing that before.

Actually there is a way of factorizing an N-digit number very quickly. The only thing is we don't have the hardware to do it - quantum computers can easily do it, as a 'qubit' (quantum bit) contains both 1 and 0 at the same time; build an N-qubit register (which can hold 2^N values at the same time), then apply an operation, and the operation will occur on all 2^N values at the same time....

So now, using an egg timer / toaster and of course the microwave oven, I'm constructing a quantum computer :grin: :grin:

I assume you refer to the division functions of the BIGNUM lib. There are two mistakes you can make with them, that would make it appear as if they don't work at all.

1: BIGNUM arguments must be pointers to BIGNUM structures. If you pass the structures themselves instead, then there will be errors.

2: The first argument is the divisor, and the second is the dividend. If you reverse them, then the result will normally be zero. (Assuming intended divisor < intended dividend.)

Double-check your usage with regard to the above, and if you still get errors, please come back with a specific example that I can test for you.

Please, let's not 'muddy the waters' again. Giovanni doesn't have an 'easy RSA' algorithm, as you already know. In fact there is no *easy* method to solve RSA factors, and I'm sure Giovanni too regrets his original choice of words for his original post, in that regard.

However, the 'patterns' he spoke of have been identified, and may yet lead to a functional algorithm. That will not be easy to implement, as should be evident from my earlier posts on boolean algebra methods. But if/when such an implementation is completed, it will be quite easy to solve factors of any RSA modulus which doesn't exceed the maximum size of that implementation (mainly due to growing storage requirements).

Hi Maverick,

like I said in my post, I don't think I'll be the one to do it, because I can't cope with the complexity of the equations needed, at least not yet. It will take a major effort to produce a program that can solve this kind of boolean algebra. And I'm not likely to be the one to invent those algorithms first. So I think I'm being plenty prudent here.

As for your comments on scaling, I agree completely. The most important thing is to get rid of that 2**bits dependency. I'm not sure how this non-factoring method scales, but I think the complexity of equations grows linearly with the bit position, while the total number of equations equals two times the square of the total bit count. This should produce a dependency like k*bits**3, which is certainly better than 2**bits.

Of course, I may be mistaken, but that remains to be seen. In any case, whether it can be as fast as hoped for or not, this is a very interesting method to investigate.

Hi guys,

I think, as Maverick says, prudence is very much advisable here.

My method too was working really well, until I realized that on some numbers it takes more than order 4^N - erk (quite a big term at 2048 bits :)

Clearly not an option.

The only way to crack this thing is to find the pattern; an algorithm, even if it tests 10^10 possibilities a second, will still take a VERY long time - consider that the estimated age of the universe in seconds is 3 * 10^17 (from memory, could be a bit out).

But as fast as I discard one method, I come up with two others based on the previous one. I do, however, agree with Maverick in that I don't believe RSA is in any real danger - but the learning we are doing here is more than worth it.

The good news for us all:

Before working on something, I have the habit of doing some research on the best performance obtainable, at least theoretically.. so as to aim directly at that one, and have an idea of the real status of my work.

Well, I just concluded my research (I asked several skilled mathematicians around the world.. what a wonderful thing the Internet is), and there's no theoretical "barrier" on the performance achievable by a (known or unknown) factoring method.

That is, NFS takes O( exp(c*ln(n)^1/3 * ln(ln(n))^2/3 ) ) computation steps.. and the good news is that there's no "physical law" that denies that another method may be invented with performance down to O( 1 ). This is very important, and surely encouraging (i.e. NFS can be seriously beaten).

Unfortunately, the opposite hasn't been proven either, i.e. that we can get down to O( 1 ) or *any* other "value". Given the lack of knowledge humans have in this field, it may even be that O( exp(c*ln(n)^1/3 * ln(ln(n))^2/3 ) ) is the real limit.. but this is very unlikely to be.

So the race is **really** open. Now I just wish I had more free time to dedicate to this.. Happy factors hunting. :)

Hi guys,

Maverick:

I'm glad you agree that it is possible, but you overlook the fact that I'm not just talking of something that may be possible. I'm talking of a method that can be implemented as soon as we have the tools to handle the boolean equations. For simple cases that tool already exists, in the human brain, and for such cases the method works. Not in some indeterminate future, but right now.

Admittedly, this isn't very useful yet, but it will become so once software to let a computer do this stuff is developed.

Terab:

You said:

>>The only way to crack this thing is to find the pattern; an algorithm, even if it tests 10^10 possibilities a second, will still take a VERY long time - consider that the estimated age of the universe in seconds is 3 * 10^17 (from memory, could be a bit out).

But that problem exists because you analyze the patterns linearly, by inspecting the modulus as it is. This makes your full pattern consist of the modulus itself, thus depending on its size of 2^bits.

The method I described inspects bit relationships, which breaks down the full pattern into manageable chunks. As described in earlier posts, that should reduce the problem to something like (k*bits^3).

As for the quantum stuff, let's keep serious, please.

No hard feelings, but I think we need to stay serious here.

I seriously proposed a functional method to be implemented.

If you can't implement it, like I myself can't do yet, that is still no reason to laugh it away.

Hi there,

I'm glad you agree that it is possible, but you overlook the fact that I'm not just talking of something that may be possible.

As for the quantum stuff, let's keep serious, please.

No hard feelings, but I think we need to stay serious here.

I seriously proposed a functional method to be implemented.

If you can't implement it, like I myself can't do yet, that is still no reason to laugh it away.

Sorry if you misunderstood, I am not laughing it away at all. I am truly impressed that you found such a pattern; I still don't see the pattern in the bits.....

As for the quantum stuff, it really does exist - I was recently reading a paper on how a quantum computer would make breaking RSA moduli of any bit length a trivial operation.

The problem is that quantum computers don't exist yet - hence the little joke about the egg timer.

I find this whole thing truly fascinating and was in no way diminishing your achievement.

Regards

terab

Hi Terab,

>>Sorry if you misunderstood, I am not laughing it away at all.

Good. I wasn't certain about that, and I had to make sure.

>>I am truly impressed that you found such a pattern, i still dont see the pattern in the bit.....

Then you are probably just looking at numerical bits, while you perform numerical arithmetic. The real patterns are in the bit relationships that can best be expressed in boolean algebra.

Since the bits themselves are derived from those relationships, some patterns will remain in any numeric representation too, but only in a vague and seemingly random fashion. That is why I was so impressed that Giovanni could not only 'sense' their presence, like most people can, but could actually manipulate them in the decimal number system.

NB: The patterns of importance here are not patterns for finding primes in general, but the patterns that relate a product to its factors. These patterns are not at all vague and in no way random. But for products with a large number of bits, the boolean equations for high bits do become very complex.

I have barely started to design (still in my head) the basic software methods needed to record and process the boolean equations. It will probably be a while before I can code some equation solver that really works, even for small products. Since this is an entirely new approach AFAIK, solving specific numerical problems by constructing and then solving a matching set of boolean equations, I will probably have to design most stuff from scratch.

Ideas and suggestions are welcomed from all...

Some revisions and a lot of new functions added. They are all tested and benchmarked, and only the best have been left :)

Giovanni

Hi guys,

I've recently completed (for now anyway) my BIGNUM lib, and will post it here on request (if none, I won't bother). Having played around with it for a while, I still see no practical method to use normal arithmetic to solve the large RSA moduli (576 bits and up). Having huge numerical capacity, as with BIGNUM, just isn't enough, as no known numerical method can avoid the exponential relationship between the bit count and the time needed.

However, I am still convinced that the method I described recently, based on boolean algebra, will be able to do it. I have thus started development of a boolean algebra lib, which will be used with a net of precision division bit cells to resolve factor bits into expressions of modulus bits.

The bit cell net structure is already well defined, but the BA_lib is barely started. That will be a very demanding project, so it will be some time before I can even start using it seriously, and even longer before I can expect it to be efficient.

Later I will probably have to redesign all of it again, to get around the data storage problems. A net for 2048-bit moduli will require over 2 million net cells, each containing 4 algebraic bits, each of which may need thousands of bytes of storage (algebraic references). That's too much for my present system, so I'll have to upgrade significantly to try that... Whatever, I'll aim for more modest results first. (e.g. 576 bits should be possible as-is.)

As before, suggestions and ideas are welcomed.

PS: For anyone interested in my BIGNUM lib, I must add that it's still in C++builder hybrid format, though most of the code is now asm, except for a few functions.

DS.

radlanor, if it's small and/or fast, post it :)

Sounds good to me, please post.

I have had 3 deadlines in the last 2 weeks so I haven't played with it much, but no matter what method I think of for solving the problem, the worst case is always close to a brute-force method.

Good luck radlanor, sounds like you are proceeding well.

... looking at others' code often helps one turn in another, better direction. The 5% of everybody's genius could be summed, fortunately, and the summed 95% of work could be reduced, hopefully, by the effort of all :)

My PC is a bit stuck with my optimized Fermat algo, since it has been working now for over a week and could not find a single leak :) It has computed over 14 million different big numbers and has not found a Carmichael number yet. The actual bases are 2, 3, 5, 7, 11, 13 and 17 (base^(n-1) mod n = 1). I'll let it go for another while, because if it works as I believe it will, 7 iterations will be enough to determine whether a number is prime or not. If there is a leak, other bases have to be added and more calcs done.

In the meantime I'm studying your proposals and ideas a lot, to find out how they develop with big numbers and what their limits are, if any. I'm especially into the different number-base analysis, which is very interesting. In fact, it matters a lot which base you choose, and base 10 is not really the best, but not the worst either.

Giovanni

f0dder, Terab and Giovanni:

Ok, you asked for it, and now you're gonna get it ;)

My lib that is...:

(Assuming the attachment works right.)

Giovanni:

Unfortunately it doesn't matter how quickly you can determine primality, as the number of potential factors to test still remains 2^(m_bits/2). Eliminating most wrong candidates by finding them to be non-prime still takes too long, even if each primality test only takes a fraction of a second. Your methods are still of interest, but for other and more general reasons.

(Good prime testers are worthwhile in themselves.)

The main hope of the boolean algebra method stems from the fact that it doesn't rely on any factor guessing. Instead it calculates the factors from bit relationships inherent in the modulus, as mentioned before.

The hitch is that those relationships can be very complex, of course, and if complexity growth per bit level becomes exponential, then the problem becomes effectively insoluble.

(Except for small numbers that is.)

In purely algebraic form the complexity may grow exponentially, but in the mixed form I plan to use it shouldn't. A lot of the bit values are predetermined by the modulus itself and known relationships of all prime products.

Eg.1: In purely algebraic form for a 1024-bit modulus, there are a total of 3072 unknown modulus and factor bits, which can be reduced so as to express either of the factors in bits of the modulus, but only after an exponential-time calculation. The latter means that it is in practice impossible to achieve.

Eg.2: In mixed algebraic form for the same modulus, we know not only all 1024 of the modulus bits, but also 513 bits of the lower factor, as well as 2 bits of the upper factor. This leaves us with a grand total of 3072 - 1024 - 513 - 2 = 1533 unknown bits, all related through purely numerical modulus bits. I think this allows the problem to be reduced from the exponential-time domain to the polynomial-time domain.

But it still remains unproven...
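To make the setup concrete on a toy scale: treat each unknown factor bit as a boolean variable and demand that the product reproduces every bit of the modulus. The sketch below is my own illustration, not the method under discussion; it brute-forces the assignment (exponential, which is exactly the behaviour the mixed form is meant to avoid), but it does show one "predetermined bit" in action: both factors of an odd modulus are odd, so their lowest bits are fixed to 1 before the search starts.

```python
from itertools import product

def factor_by_bit_search(n, bits):
    """Toy illustration: search over the unknown high bits of two odd
    candidate factors until p*q reproduces n. The lowest bit of each
    factor is predetermined (=1, since n is an odd prime product)."""
    for p_hi in product((0, 1), repeat=bits - 1):
        p = 1 + sum(b << (i + 1) for i, b in enumerate(p_hi))
        if p <= 1:
            continue
        for q_hi in product((0, 1), repeat=bits - 1):
            q = 1 + sum(b << (i + 1) for i, b in enumerate(q_hi))
            if p <= q and p * q == n:
                return p, q
    return None

# Searches 2*(bits-1) free bits instead of 2*bits: a tiny taste of how
# predetermined bits shrink the unknown count (513 + 2 bits in Eg.2).
```

For a 4-bit toy modulus like 77 this finds (7, 11) immediately; the whole point of the mixed algebraic form above is that for 1024-bit moduli the remaining 1533 unknowns must be resolved through the bit relationships rather than enumerated like this.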

As for using various bases to analyze primes (which is not an option for my boolean algebra method), the 'quality' of a given base depends on several things. One of those things is speed of calculation, obviously, and for that nothing can beat a base which is a binary power.

These bases also allow very simple calculation of the modular inverse of any odd number (all odd numbers are invertible here), as you can see in the attached lib.
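That "very simple calculation" can be done with a Newton-Hensel iteration, where each step doubles the number of correct low bits. The sketch below is a generic illustration of the technique, not the attached lib's actual routine:

```python
def inv_mod_pow2(a, k):
    """Compute the inverse of odd a modulo 2**k by Newton-Hensel
    iteration: x_next = x*(2 - a*x) doubles the count of correct
    low bits each step, so about log2(k) steps suffice."""
    assert a & 1, "only odd numbers are invertible modulo a power of two"
    mask = (1 << k) - 1
    x = 1          # correct to 1 bit: a*1 == 1 (mod 2) for odd a
    bits = 1
    while bits < k:
        x = (x * (2 - a * x)) & mask   # now correct to 2*bits low bits
        bits *= 2
    return x & mask
```

No division appears anywhere, which is why binary-power bases are so convenient: for example, inv_mod_pow2(3, 8) gives 171, and 3*171 = 513 = 2*256 + 1.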

I had a very short look at it and it's simply great! At home I'll need to translate it to ASM and have a try, since I uninstalled C after RadASM became perfect :)

In particular I like the different approach with mul/div BN.

Giovanni

I couldn't get the division running. Multiplying is fine *and* excellent. Is there any upgrade in sight?

Giovanni

My biglib (http://www.effervescence.com) has multiplication and division if that can help you. It's very slow, but apparently you have a 'revolutionary' factorization algorithm, so who cares 'bout speed? =)

**Giovanni:**

I assume you refer to the division functions of the BIGNUM lib. There are two mistakes you can make with them, that would make it appear as if they don't work at all.

1: BIGNUM arguments must be pointers to BIGNUM structures. If you pass the structures themselves instead, there will be errors.

2: The first argument is the divisor, and the second is the dividend. If you reverse them, then the result will normally be zero. (Assuming intended divisor < intended dividend.)

Double-check your usage with regard to the above, and if you still get errors, please come back with a specific example that I can test for you.

**Roy:**

Please, let's not 'muddy the waters' again. Giovanni doesn't have an 'easy RSA' algorithm, as you already know. In fact there is no *easy* method to solve RSA factors, and I'm sure Giovanni too regrets his choice of words in his original post, in that regard.

However, the 'patterns' he spoke of have been identified, and may yet lead to a functional algorithm. That will not be easy to implement, as should be evident from my earlier posts on boolean algebra methods. But if/when such an implementation is completed, it will be quite easy to solve factors of any RSA modulus which doesn't exceed the maximum size of that implementation (mainly due to growing storage requirements).