Well, here's the source as compiled ... PractRand pukes on it at 1kB.
[snip]
Wait, it does? Did I screw something up, maybe in the minimal adaptation I did when copying it into this thread? The code looks right... and examining your test.out, the data even looks right; well, the first 16 bytes match my first 16 bytes anyway, and I haven't checked further, but that's probably enough. A 1 KB failure on PractRand implies that something is very, very wrong in a way that should probably be obvious from looking at the data, but I can't see anything. Hm... you didn't, by any chance, test the ASCII hexadecimal representation of the output, like in test.out, instead of the binary representation, did you? That's the only thing I can think of at the moment that would produce that big a discrepancy in our results.
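For what it's worth, here's a minimal sketch of the kind of binary feed PractRand wants, with a hypothetical next16() standing in for the real generator step (this is not the actual test harness from the thread, just an illustration of binary versus text output):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for one generator step; substitute the real
   candidate function from the source above. */
extern uint16_t next16(void);

int main(void)
{
    for (;;) {
        uint16_t s = next16();
        /* Raw little-endian bytes, not "%04x" text.  An ASCII hex dump
           only ever uses 16 character codes, so PractRand flags it as
           hopelessly biased almost immediately -- which would explain
           a 1 kB failure. */
        putchar(s & 0xff);
        putchar(s >> 8);
    }
}

Piped as something like ./gen | RNG_test stdin, whereas piping the hex text from test.out would fail right at the start.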
I think the first test to do next should be aimed at improving bit 1 as shown in the last line above. We know that 16-bit parity improves bit 0 and it might do the same for bit 1. We can't keep going to the parity well after that because 15-bit parity or less won't be random.
If [15:0] and [7:0] scores are the same, bit 1 will need testing on its own, before and after, to detect any difference.
Evan, please try the above. Change is sum[1] = sum[16:1] parity.
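In C terms, the experiment might look something like this sketch, assuming sum is the 17-bit adder result [16:0] from the existing generator step (the names here are made up for illustration, not taken from the real source):

#include <stdint.h>

/* Replace bit 1 of the output with the parity of sum[16:1];
   everything else is left alone. */
static inline uint16_t tweak_bit1(uint32_t sum)
{
    uint32_t p = sum >> 1;      /* bits [16:1] */
    p ^= p >> 8;                /* fold the 16 bits down to their parity */
    p ^= p >> 4;
    p ^= p >> 2;
    p ^= p >> 1;
    p &= 1;

    return (uint16_t)((sum & ~2u) | (p << 1));  /* output is sum[15:0] with bit 1 swapped */
}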
Scro says he intentionally biased it. What that implies I'm not sure. Maybe it's a sort of cheat to beat up on PractRand and it ain't quite as good as the score suggests.
Okay. I see what that is doing. It does twice as many bit adders and maybe twice as many XORs as our current XORO32 implementation. But it does score 32 times better, right?
Scro, thank you for showing us that. It's very interesting. What constitutes the bias within it, though?
What we have now is much more resource efficient. I wonder how much better we could do if we improved bit 1's quality?
There must be more to it than that. Sampling the msByte should be better than 1GB but it isn't. I think all we'll ever achieve is to bring bit 1 up to this level. That was all we were wanting before of course.
If we could get 1G for word, top byte, bottom byte and top bit we'd have consistency.
The scoring is taken from the PractRand failure size, not the capacity of the RNG. PractRand only detects the failure after the poor-quality data has already been generated.
So halve every score to get a better idea of the quality working range of each RNG tested.
XORing bit 0 of the sum with higher parity could, in effect, add a carry input into the bit 0 sum, which it would not have otherwise. Bit 1 already has a carry from bit 0, so higher parity might cancel that out and results could be worse.
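A quick truth-table check of that carry argument, using single bits only (nothing from the real generator):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* A full adder's sum bit is a ^ b ^ cin, so XORing a parity bit p
       into a carry-less bit 0 is the same as giving bit 0 a carry input
       of p.  Bit 1 already has a genuine carry coming from bit 0, so an
       extra XOR there can just as easily cancel it as help. */
    for (uint32_t a = 0; a < 2; a++)
        for (uint32_t b = 0; b < 2; b++)
            for (uint32_t p = 0; p < 2; p++)
                assert((((a + b) & 1) ^ p) == ((a + b + p) & 1));
    return 0;
}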
100% coverage parity is the only one that is a contender. Anything else is at least 32x sub-par. I don't think there is any way to make up a second bonus bit with this approach.
The fundamental point about using parity is that, provided the number as a whole is random, its parity will be random too. Once we start ignoring bits, what's left is not random.
Comments
should have been
Maybe I should stop using "test" as a name for my quick hacks.
Anyway, it scores 32GB now. On a 32-bit state! Very nice.
Thanks for trying this, Evanh. All your code looks correct. So, that idea did not work.
What was this other algorithm that got to 32GB-quality in 32 bits of state? That is the best yet!
Here's the original as supplied by Scro - http://forums.parallax.com/discussion/comment/1424224/#Comment_1424224
The showstopper is that we don't have enough time in a clock cycle to do two 32-bit additions, one after the other.
Bits [15:8] and [15] should have the same scores as before. If not, something is wrong.
But there are only 4G states in 32 bits, right?
If each state outputs 4 bytes, it seems that only a 16GB score should be possible.
Unless maybe it was being fed hexadecimal text, so each byte became two ASCII bytes, winding the score up to 32GB?
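The arithmetic behind that guess, written out (the figures are just the ones quoted above, not new measurements):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t states = 1ull << 32;    /* 4G states in 32 bits                 */
    uint64_t binary = states * 4;    /* 4 bytes per state  -> 16GB of output */
    uint64_t astext = binary * 2;    /* 2 ASCII chars/byte -> 32GB of output */

    printf("binary stream: %lluGB\n", (unsigned long long)(binary >> 30));
    printf("hex text     : %lluGB\n", (unsigned long long)(astext >> 30));
    return 0;
}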
Okay. Now I understand. Thanks.
Here's the source that I'm not sure about and gives messy scores:
Here's the source that works well and only has the weak bit 1:
PS: I'm off to bed.