I won't go into lengthy explanations of why I need this, but I'm hoping it can be done. Of necessity, I need to squeeze a value of no greater than 32 bits (0FFFFFFFFh) into the space of 16 bits (0FFFFh). This means I have to somehow divide, or use a square root, or something in that manner, and store the remainder, or who knows. However, I have to mathematically encode it so that it can be restored to the original 32-bit value. The 16-bit value will never be treated as a number; the routine will just convert it back to its 32-bit form before it's used. Is there anyone in this group who might have an idea how this might be accomplished? Thanks in Advance, _Shawn
You can't map 4 billion numbers to 64 thousand numbers without a loss of information! I need more info before I can provide any advice. This message was edited by bitRAKE, on 4/27/2001 7:58:22 PM
I've seen it done once, I just don't know how. It is mathematically encoded so that you just multiply this number by that, add this to that, and then you have your full 32-bit number. Obviously, you can't fit 32 bits in 16. But you can break it down into squares, or factorials, or whatever, and decode it when necessary. If not 32 bits into 16, then 64 bits into 32 will work okay. But nothing else. Thanks, Shawn
Hi Shawn, You need to tell us something about what you are doing. Is the 32-bit number random, or generated under your control? Does the 32-bit number really encompass all of the numbers from 0 to 4294967295, or just a subset? In order to get 32 bits into 16, one has to eliminate 16 bits of resolution; in order to get the correct 32-bit number back, assumptions need to be made, which means it really isn't a 32-bit number. IP addresses work with subnets/supernets, which allows more or fewer actual addresses than the 32-bit number would really allow; however, the extra information about how to handle this resides in the subnet mask (there is still a loss of resolution).
Well, the number can be any 32-bit number. I wish I could explain why, but I can't at this point. I just need to compress x bits into a unit of data of exactly x / 2 bits, 32 bits in particular. The biggest reason is that I want to learn how to encode large numbers into mathematical expressions that evaluate back to the original; however, the other restriction is that it has to fit a 4-byte value into 2 bytes. _Shawn
It can only be done if you have a model for the data. It's like compressing a database with a date field - you know the type of the data, and the range of values possible. You must constrain the value because it's not a 1:1 mapping. Note: If the general case were possible, then all compressors would be able to get a 50% compression ratio. Then send the output back through until you have one byte! :P This message was edited by bitRAKE, on 4/27/2001 11:53:57 AM
Once upon a time, I had the idea to use the number pi for a compressor. Since pi is non-periodic and endless, it theoretically contains all possible combinations of digits. So every piece of digital data, e.g. an image file, must be present somewhere in the stream of the digits of pi. For compressing the data you only need as many digits of pi as you can get, and then you just have to store the offset and the length! But after some short thinking about it, I came to the conclusion that the offset is at least as large as the data itself :| So I again missed the Nobel prize for informatics :D beaster.
Very good beaster. I too have missed the Nobel prize many times. :P This is the first thing I thought of when I learned about fractals - I was learning data compression algorithms at the same time. If you don't mind lossiness this works, but it's very expensive in time. You just search through a set of models and store the one that requires the least number of bits, or store the original data if it's smaller. Your set of models should be optimal for the data that you're compressing.
_Shawn, if you're not prepared to lose precision then it can't be done. It's a mathematical impossibility: if you could compress a 32-bit number to 16 bits, then why not that 16 bits to 8, then -> 4, -> 2, -> 1? If that were possible, any number could be represented in 1 bit. Probably the only way would be using fractal representations of the bit patterns, but there's no way those equations could be stored in 16 bits. This would only work for huge numbers in which patterns in the bits could be found. If you're happy to lose precision, then the square-root method is the best way to go. Do all your calculations in 32 bits, then save only the square root; this will always fit in 16 bits or fewer. Square this value before using it again. I'd love to know what you want this for, it sounds bizarre, but if you tell us there may be a better way that hasn't been thought of.
I was thinking about this for a while and I'm glad to say it can't be done, not now nor ever. Very simply, if you have the 4 billion different numbers that 32 bits sort of equates to, then even using fractal methods you would need 4 billion different equations, therefore still needing 32 bits. Well, there we go, the nature of the universe has once again been stabilised and I can sleep easy. So you're left with a couple of choices: 1) Use the sqrt method 2) Take the first 16 bits 3) Take the last 16 bits 4) Take every second bit. If the number must be preserved then none of these methods will work (except under certain conditions). Once again I must say I'd love to know what you want this for. Good Luck