▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Need to make your Prop design easier or more secure? Get a PropMod: it has a crystal, EEPROM, and programming header in a 40-pin DIP 0.7" pitch module, with uSD reader and RTC options.
Dennis: Look at it like this: we toss a coin and get a head or a tail. For a head we take a step east; for a tail we take a same-sized step west. If we do this many times, where are we? Well, if our coin is fair, then on average over a lot of throws there are as many heads as tails, therefore as many steps east as west, and we are back where we started. Of course, this could take a long time.
The two-dimensional case is a bit harder. The classic description is the "drunkard's walk" problem: a drunk takes a number of equal-sized steps, n, each in a random direction. How far away from his starting point is he after those n steps? On average, that is. We have to do this experiment many times.
It turns out the answer is the square root of n, as first determined by Einstein, no less. This has a lot to do with Brownian motion and thermodynamics.
The two-dimensional case will probably never get you back to your starting point, as you would need to be back at both x = 0 and y = 0 at the same time, which is crazy unlikely!
Did you check the "Randomness Test Report"? The results of several randomness tests for the Quantis are documented there, AND inside the document you'll find a link to a randomness test suite. So maybe someone wants to check the RealRandom results with these tests?
If the results can be compared with the Quantis results, we could make a few hundred bucks selling the Propeller as a random number generator as well.
Yes, I had a look at the test report. The problem with RNGs that use noise and similar techniques is that they are susceptible to external influences, which isn't the case with an RNG using the properties of single photons.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Isn't the schematic given measuring the quantum shot noise in the p-n junction of the bottom-left diode?
Yes, it is susceptible to noise; that is why I plan to shield it and build two of them.
I don't have $600 to spend on a random number generator, though they are cool looking.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Being secure from external influences is relative. If you have a black hole close to the Quantis, it'll be influenced as well ;o) But seriously: what's the problem with hacking the interface to the Quantis? USB, PCI. They sell this to lottery companies.
No, single photons are better; the question is what is the best we can do in a 6" cube for under $50.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Circuits shielded on all sides, with a Prop to analyze and write the results to SD.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Sorry, heater, you're still not getting it. We're arguing based on different mathematical models: if you're talking about averages, which is SUM / SAMPLE_SIZE, then yes, but the original poster is just doing a SUM! No division by sample size. Yes, if you take the average of N flips of a coin, with one side representing +1 and the other representing -1, then the expected average would be zero. But if your total number of coin flips is extremely large, then even though there are equal chances of the sum going up or down, the range of *possible* places you could end up is very large. And even though there is a greater chance of ending up within 1 standard deviation of zero than within 2 or 3 standard deviations of it, the salient question is HOW BIG are the standard deviations we're talking about? If your N is quite large, as it would be if we're summing random numbers as fast as we can for several minutes, then the area under that bell curve is as big as Texas. So you could end up with some very large numbers while still being within one standard deviation.
In your own post, you said we expect on average to find the drunkard has ended up the square root of N steps from his starting point. Let's say we take 1,000,000 steps; the square root of that is 1000. Doesn't look too bad so far. But now remember, the original poster is taking random "steps" as large as +/-2^15. Assuming the expected distance from the starting point scales with the step size, and assuming the *average* step size is half the maximum step size, then we expect to find him about 1000 * 2^15 / 2 steps away.
That's 16,384,000. (Five hundred times his largest step size.)
But of course Phil had it right at the very beginning and no one noticed. :)
Phil said...
There's no reason to expect that the sum of a length-n series of uniform random variates will converge toward n times the mean. In fact, the standard deviation of the sum increases as the square root of n. That said, the sample means (sum / n) should converge to the mean as n increases.
In other words, divide your sum by your sample size. It should converge to zero as the sample size increases.
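The sum-versus-mean distinction is easy to check empirically. Here is a minimal Python sketch (an editor's illustration, not from the thread; the seed and sample size are arbitrary choices) summing fair ±1 steps: the raw sum wanders on the order of sqrt(n), while the sample mean shrinks toward zero.

```python
import random

def walk_stats(n, seed=1):
    """Sum n fair +/-1 coin flips and return (sum, sample mean)."""
    rng = random.Random(seed)
    total = sum(rng.choice((-1, 1)) for _ in range(n))
    return total, total / n

total, mean = walk_stats(100_000)
# the sum fluctuates on the scale of sqrt(n) ~ 316, not near zero;
# the mean (sum / n) is small and keeps shrinking as n grows
print(total, mean)
```

Running this for several values of n shows exactly the behavior described: dividing by the sample size is what makes the result converge to zero.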
Here is a question: can RealRandom be modified to run in 3 cogs? Two cogs generating random numbers, with a third combining them together in this way: keep bit a0 if a0 xor b0 = 1 and a0 xor a1 = 1, where a and b are the two RNGs and 0 and 1 are two sample bits from each. With all the cogs locked together, noise from voltage, temperature, and crystal could be removed that way.
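The proposed combining rule can be sketched in software (Python here rather than Spin, purely for illustration; the function name and the choice to draw two bits from source A and one from source B per round are my reading of the post, not anything tested on real hardware):

```python
def whiten(bits_a, bits_b, n):
    """Apply the proposed filter: draw two bits (a0, a1) from source A and
    one bit (b0) from source B; keep a0 only when a0 xor b0 == 1 and
    a0 xor a1 == 1. Returns the first n surviving bits."""
    a, b = iter(bits_a), iter(bits_b)
    out = []
    while len(out) < n:
        a0, a1, b0 = next(a), next(a), next(b)
        if (a0 ^ b0) == 1 and (a0 ^ a1) == 1:
            out.append(a0)
    return out

# a0=1, a1=0, b0=0 passes both tests; a0=0, a1=0, b0=1 fails the second
print(whiten([1, 0, 0, 0, 1, 0], [0, 1, 0], 2))  # -> [1, 1]
```

Like von Neumann's classic debiasing trick, this discards a large fraction of the raw bits, so throughput drops sharply; whether it actually removes correlated supply/temperature noise would need checking against a test suite such as the one linked from the Quantis report.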
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
I'm no expert on random numbers (those who are experts tend to be very strange people! <g>), but I have studied the issue a lot. It's a fascinating topic, because people spend many hours trying to figure out a way to get true randomness (whatever that is), but in practice (from Solitaire to Monte Carlo simulations) really need only something that "looks" random at a glance. That is to say, if the distribution differs from Gaussian a little bit, nobody is going to care, because nobody is going to run a Monte Carlo simulation for, say, 10,000,000,000 cases. Because they're time-consuming, the real number is closer to 10-100, where it's very difficult to tell if the statistics are correct or not.
There are two main ways to generate random numbers. One is to use some physical device, such as a Zener diode or shot noise or, as in Chip's case, PLL jitter. The other is to use some "deterministic" math algorithm, like a PRN generator or power-residue method. See "Numerical Recipes in C" for the best of the latter.
I would _STRONGLY_ recommend against using a hardware source (sorry, Chip). I've seen lots of people try it. Most didn't achieve results that passed rigorous tests, and getting the mean to converge to zero is one of the problems.
My main problem with hardware-based approaches is the very fact that they're "too random." When I'm building something like a simulation of a system with noise sources, it's vitally important that I know that my own math is right. How can I debug the software if it behaves differently each time I run it? For debugging, I need a predictable result. But turning off the random generator is not a good idea, because I also need to verify that my random numbers remain random during the process.
OTOH, using a random-number math algorithm is problematic, too. Chip pointed out the first problem: Every random-number generator I've seen requires an initial seed. If that seed is hard-coded into a ROM, say, then Chip's point is very valid. You are not going to get a separate sequence each time you run the program. You are going to get the same sequence, over and over. Not good.
Every successful simulation I've seen requires that the user give a seed -- a starting value -- for the generator. But how do you get the user to give you this number?
Monte Carlo simulators that I've seen (and built) ask the user to give them a seed. That's the best way, and the repeatable nature of the "random" sequence lets me debug the program by using the same seed each time.
Unfortunately, users tend to get lazy, and use the same seed every time, or even hard-code it as their SSN, or something equally stupid.
The second problem is the one that started this thread: Making sure that the sequence has a mean of zero. That's not hard to do, but it does require some thought. First consider the range of a random (unsigned) integer. For 16 bits, that's 0x0000 through 0xffff (0..65535). The average is 32767.5, so if you just naively subtract 32768 (0x8000), your average is going to be off by 1/2 bit, and the error will accumulate linearly with repetition.
Fortunately, every algo I've seen doesn't allow 0 as a possibility. A power-residue method can generate only odd numbers, and a PRN shift register can't be initialized to 0 (otherwise, it stays there). So the real range is 0x0001 to 0xffff, with an average of 0x8000. If you now treat the number as a signed integer, it will have a range of -32767..+32767, including zero, and will be balanced.
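The half-bit bias above is small enough to verify exhaustively. A quick sketch (an editor's illustration, not from the post) enumerates every 16-bit value, applying the same 0x8000 offset in both cases:

```python
# Naive centering over the full range 0x0000..0xFFFF leaves a -0.5 bias,
# which accumulates linearly if you sum many samples.
full = [x - 0x8000 for x in range(0x10000)]
print(sum(full) / len(full))        # -0.5

# If the generator never emits 0 (range 0x0001..0xFFFF), the same 0x8000
# offset balances exactly: the values -32767..+32767 sum to zero.
nonzero = [x - 0x8000 for x in range(1, 0x10000)]
print(sum(nonzero) / len(nonzero))  # 0.0
```

The enumeration confirms the argument: excluding zero from the unsigned range is exactly what makes the signed interpretation symmetric.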
Here's the best way to think of the math-based random-number generators. Whatever the word length being used, the generator is going to generate every possible number in range, once and only once, before repeating. Imagine a giant "Wheel of Fortune" wheel with all the numbers arranged randomly. Starting with any given number, the generated sequence is going to be absolutely repeatable -- it's the next number around the wheel. The sequence is absolutely determined by the algorithm; the only thing that CAN change is the initial state of the wheel.
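The "wheel" picture maps directly onto a maximal-length LFSR. A small Python sketch (an editor's illustration; the 0xB400 tap mask is the textbook 16-bit Galois-LFSR example, not anything from the Propeller) shows both properties: the same seed always gives the same sequence, and every nonzero state is visited once before the cycle repeats.

```python
from itertools import islice

def lfsr16(seed):
    """16-bit Galois LFSR with taps 0xB400 (maximal length): steps through
    all 65535 nonzero states -- the 'wheel' -- in one fixed order."""
    state = seed & 0xFFFF
    assert state != 0, "a shift register seeded with 0 stays at 0 forever"
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield state

# same seed, same sequence -- repeatable for debugging
print(list(islice(lfsr16(0xACE1), 4)) == list(islice(lfsr16(0xACE1), 4)))  # True
```

The seed is literally the wheel's starting position: it changes where you begin, never the order of the numbers around the rim.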
Bottom line: I very much favor PRN generators, but Chip's point is very well taken: someone needs to set the initial position of the wheel, and it can't come from ROM.

Jack
Another source of bias is the commonly-accepted use of the modulo operator to limit the range of a random number, especially when the modulus is large. For example if my random number generator produces uniformly distributed variates with 16-bit precision (zero included), and I want a number between 0 and 999, it's tempting to do:
    x := rand // 1000
But you have to look at how many ways you can get 0 from this, say, and how many ways 999. Zeroes come from

    rand == 0, 1000, 2000, ... , 63000, 64000, 65000 (i.e. 66 different places).

999s come from

    rand == 999, 1999, 2999, ... , 63999, 64999 (i.e. 65 different places).

So there are 1.5% more zeroes than 999s. What you have to do to eliminate this bias is find an upper limit above which you will reject rand and cast for a new value. For sixteen-bit numbers like those described above, this is:
    limit := 65535 - (65535 // m) - 1, where m is the modulus.
For m == 1000, limit == 64999. In the case that rand can never be zero, eliminate the "- 1" from the above equation.

-Phil
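The rejection rule above translates directly into code. A short Python sketch (an editor's illustration; Spin's `//` modulus operator becomes `%` in Python, and `next_rand` is a placeholder for any uniform 16-bit source), plus an exhaustive check of the residue counts:

```python
from collections import Counter

def unbiased_mod(next_rand, m):
    """Reject raw values above `limit` so every residue 0..m-1 keeps an
    equal number of preimages, then reduce. Assumes next_rand() returns
    uniform integers in 0..65535 (0 allowed, hence the '- 1')."""
    limit = 65535 - (65535 % m) - 1   # 64999 for m == 1000
    while True:
        r = next_rand()
        if r <= limit:
            return r % m

# exhaustive check of the bias: 66 preimages of residue 0 vs 65 of residue 999
counts = Counter(r % 1000 for r in range(0x10000))
print(counts[0], counts[999])         # 66 65
# after rejection, every residue has exactly 65 preimages
limit = 65535 - (65535 % 1000) - 1
kept = Counter(r % 1000 for r in range(0x10000) if r <= limit)
print(kept[0], kept[999])             # 65 65
```

The throughput cost is tiny here: only 536 of 65536 raw values (under 1%) are ever rejected for m == 1000.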
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
I refer you to "The Drunkard's Walk - How Randomness Rules Our Lives" by Leonard Mlodinow: www.dangerousbooksforboys.com.au/book.asp?bid=70
I've never read it, but it looks like I should.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
For me, the past is not over yet.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Electrical noise is a quantum effect, ultimately, as is everything, but it isn't the same as using the properties of a single photon.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Post Edited (Leon) : 4/11/2009 4:26:27 PM GMT
www.cryogenius.com/hardware/rng/