Random/LFSR on P2 - Page 53 — Parallax Forums

Random/LFSR on P2


Comments

  • evanh wrote: »
    They're there, minus the s16's that is, in the above attachment - https://forums.parallax.com/discussion/comment/1442445/#Comment_1442445

    Thanks Evan, filenames are much better now.
  • evanhevanh Posts: 15,916
    Point #2 was covered extensively when I previously went over this with you, and you've even re-quoted my posted corrected code long after we verified it all. I was a little confused about what was being asked when you quoted my code. So, yes, this point is well sorted.

    Point #1 seems to be exactly what I've just changed above. Although my version is written as a truncated version of the full loop code.

  • evanhevanh Posts: 15,916
    edited 2018-07-22 12:28
    Ah, oops, one tiny difference in point #1. In my implementation the final zfreq[0]++; line is outside the if() condition.

    Is that bad? All the data you've just picked up was run this way.

  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-22 12:29
    evanh wrote: »
    	// read pair array and increment freq values
    	irng = 0;
    	do {
    		// pair freq values
    		ifreq = pairs[irng++];  // 8-bit limited
    		pfreq[ifreq]++;
    
    		// zero-run freq values
    		if( ifreq == 0 )  ztally++;
    		else {
    			if( ztally > 255 )  ztally=255;
    			if( ztally != 0 )  zfreq[ztally]++;
    			zfreq[0]++;
    			ztally = 0;
    		}
    	} while( irng < (1UL<<(SAMPLE_SIZE*2))-1 );
    
    	// pair freq values
    	ifreq = pairs[irng];
    	pfreq[ifreq]++;
    	// zero-run freq values
    	if( ifreq == 0 )  ztally++;
    	if( ztally > 255 )  ztally=255;
    	if( ztally != 0 )  zfreq[ztally]++;
    	zfreq[0]++;
    

    The code at the end seems to deal with the last sample on its own. Does this make the main loop quicker?

    This new code doesn't seem to address my points. The zfreq[0]++ should only occur if(ztally != 0) and you need an additional check after the end of your code, as mentioned in my previous post. The reason is that zfreq[ztally] is only incremented if( ztally != 0 ) but if the last sample is zero there won't be any more non-zero samples to end the zero run.
  • evanhevanh Posts: 15,916
    Oh, I see now, you've gone back to an older version of the conditions where the zfreq[0] is only incremented in conjunction with zfreq[ztally]. Hmm, I didn't notice it was an actual change of mind.

    Ok, that definitely makes all the zfreq[0] entries wrong.

  • evanh wrote: »
    Ah, oops, one tiny difference in point #1. In my implementation the final zfreq[0]++; line is outside the if() condition.

    Is that bad? All the data you've just picked up was run this way.

    Yes, zfreq[0]++ and zfreq[ztally]++ should occur together.
  • evanhevanh Posts: 15,916
    Rerun it all again with the zfreq[0] code fix then?

  • evanhevanh Posts: 15,916
    edited 2018-07-22 13:35
    TonyB_ wrote: »
    The code at the end seems to deal with the last sample on its own. Does this make the main loop quicker?

    It was originally to handle a situation with the double iteration PRNG where the index datatype could end up being too big when testing for loop exit. But it has conveniently suited this difference with the final cycle.

    EDIT: Err, it wasn't just the datatype and it wasn't the double-iterated PRNG at all. It was to handle all engine/shifter sizes. The original code could only do the s16's. I needed to add the less-than compare to handle loop iteration lengths other than the max indexable length of the datatype. And that change meant that max indexable length was no longer doable without inserting yet more conditional compiling.

    Since execution time of that part of the code is far less than the main pair counting part I decided not to clutter it up with conditional compiling.

  • evanhevanh Posts: 15,916
    Here's the updated code:
    	// read pair array and increment freq values
    	irng = 0;
    	do {
    		// pair freq values
    		ifreq = pairs[irng++];  // 8-bit limited
    		pfreq[ifreq]++;
    
    		// zero-run freq values
    		if( ifreq == 0 )  ztally++;
    		else
    		{
    			if( ztally > 255 )  ztally=255;
    			if( ztally != 0 )
    			{
    				zfreq[ztally]++;
    				zfreq[0]++;
    				ztally = 0;
    			}
    		}
    	} while( irng < (1UL<<(SAMPLE_SIZE*2))-1 );
    
    	// pair freq values
    	ifreq = pairs[irng];
    	pfreq[ifreq]++;
    	// zero-run freq values
    	if( ifreq == 0 )  ztally++;
    	if( ztally > 255 )  ztally=255;
    	if( ztally != 0 )
    	{
    		zfreq[ztally]++;
    		zfreq[0]++;
    	}
    
  • evanhevanh Posts: 15,916
    Here's the quick run s8-s10 distribution data.
  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-22 13:23
    zfreq(x) is the number of times there is a zero run of length x, but x = 0 has no meaning and it is handy if zfreq[0] = total number of zero runs. The C code is correct if:

    1. Sum of zfreq[x] for x > 0 = zfreq[0]
    2. Sum of x * zfreq[x] for x > 0 = pfreq[0]
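
    These two checks can be turned into a small validation routine; a minimal sketch, assuming the same zfreq[]/pfreq[] arrays as the counting code above and 256 entries (the function name is mine, for illustration):

```c
#include <stdint.h>

/* Sketch of the two invariants above.  Returns 1 if both hold:
     1. sum of zfreq[x] for x > 0      ==  zfreq[0]  (total zero runs)
     2. sum of x * zfreq[x] for x > 0  ==  pfreq[0]  (total zero samples)
   Note the counting code clamps run lengths at 255, so check 2 can fail
   legitimately if any single zero run exceeds 255 samples. */
int check_zero_runs(const uint32_t *zfreq, const uint32_t *pfreq)
{
    uint64_t runs = 0, zeros = 0;
    for (int x = 1; x < 256; x++) {
        runs  += zfreq[x];
        zeros += (uint64_t)x * zfreq[x];
    }
    return runs == zfreq[0] && zeros == pfreq[0];
}
```
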
  • evanh wrote: »
    Go have another cup of coffee! Many constants are two digits. Best I could do is modify the existing short names from aabbcc.pb (and .zb) to aabbccdd.pb (and .zb) for uncombined naming.

    aabbccp.bin and aabbccz.bin are other options for combined files.
  • evanhevanh Posts: 15,916
    s11 to s14 distribution data attached. s15 takes a few hours so I've skipped it and am running the s16's now.
  • evanhevanh Posts: 15,916
    edited 2018-07-22 17:08
    Tony,
    Have you dealt with the x86 64-bit programming model at all? I've got an oldish bit of generic Unix source that I suspect has been GCCised, with some x86 Nasm assembly thrown in over the years. It comes with executables prebuilt for the Windoze command line and Linux shell. The prebuilt exe's work as intended but I wanted to try and improve it.

    I can cleanly compile/assemble everything, however the .o file from the assembly refuses to link with the rest. The error:
    /usr/bin/x86_64-linux-gnu-ld: cpuida64.o: relocation R_X86_64_32 against symbol `eaxCode1' can not be used when making a PIE object; recompile with -fPIC
    /usr/bin/x86_64-linux-gnu-ld: final link failed: Nonrepresentable section on output
    The original archive comes with some prebuilt .o files for both 32-bit and 64-bit Linux models but they don't help with the error.

    I've tried using -fPIC in the compile options but of course that is no help with Nasm. And I get the impression that what it suggests would require extra lines of assembly anyway. Yet the author obviously didn't need them.

  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-22 17:13
    evanh wrote: »
    Tony,
    Have you dealt with the x86 64-bit programming model at all? I've got an oldish bit of generic Unix source that I suspect has been GCCised, with some x86 Nasm assembly thrown in over the years. It comes with executables prebuilt for the Windoze command line and Linux shell. The prebuilt exe's work as intended but I wanted to try and improve it.

    No, 'fraid not. Latest code looks fine to me and it could be checked as mentioned above.
  • evanhevanh Posts: 15,916
    I think I've found something - https://stackoverflow.com/questions/43367427/32-bit-absolute-addresses-no-longer-allowed-in-x86-64-linux It links when throwing those options at gcc, but the resulting exe now crashes. :( Bed time for me.

  • evanhevanh Posts: 15,916
    edited 2018-07-23 06:43
    And s16 data:
  • evanhevanh Posts: 15,916
    edited 2018-07-23 09:10
    Tony,
    I've been experimenting with a couple of s17 cases today. However, the results don't feel right at first glance. Here's the latest tweaked source code and one candidate's data. Of note is the way the centre bulge of the pfreq data is at elements 3&4. And the zfreq data is all bunched up tight.

    I've seen this trait before when experimenting on the double-iterated generator and its associated larger output word. Back then the only way to manage the array allocation sizes was to nerf the sampling aperture. One of the side effects of doing that was the central bulge in the pfreq data would shift away from elements 0&1 in exponential steps according to the difference between generator size and aperture size.

    A centre around elements 3&4 meant the sampling aperture size was one less than the generator output word size. A centre around 15&16 meant a difference of two, a centre around 63&64 meant a difference of three, and so on ...
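
    The pattern above can be restated as a formula; a sketch (the helper names are mine, and this is only the observed pattern restated, not a derived result):

```c
/* Observed pattern: with d = generator output word size minus sampling
   aperture size, the pfreq centre bulge sits at elements (4^d - 1) and
   4^d, i.e. 3&4 for d = 1, 15&16 for d = 2, 63&64 for d = 3, and so on.
   Helper names are illustrative only. */
unsigned bulge_lo(unsigned d) { return (1u << (2 * d)) - 1; }
unsigned bulge_hi(unsigned d) { return 1u << (2 * d); }
```
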

    I've checked those numbers and they're equal in this s17 case. The run time is about right at four times an s16 case. I can't see anything wrong with the code.

    I guess the question now is, is this centre bulge shifting to be expected at higher sizes like this?


    PS: The zfreq data also bunched up the same way with the difference in output and aperture sizes. Eventually 100% in element zero.
  • evanhevanh Posts: 15,916
    Oops, didn't include the Xoroshiro generator code with the sources.
  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-23 10:18
    Evan, thanks for the results. I'll have a closer look at s17 later. Could you do all the xoroshiro+ s16 so we can see the difference?

    I found a formula last night for "closeness" to a known distribution:
    Sum of (Actual - Expected) squared / Expected
    

    This is related to Chi-Square and is different from my |Actual - Expected| / Expected quality measure. Lowest sum is the winner and stop summing when Expected = 0, obviously. I haven't tried this yet.
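
    The formula can be sketched in C (array names and lengths are illustrative; per the note above, summation stops at the first zero Expected value):

```c
/* Sketch of the closeness measure above:
   sum of (Actual - Expected)^2 / Expected,
   stopping when Expected reaches 0.  Lower sum means closer fit. */
double closeness(const double *actual, const double *expected, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n && expected[i] != 0.0; i++) {
        double d = actual[i] - expected[i];
        sum += d * d / expected[i];
    }
    return sum;
}
```
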
  • evanhevanh Posts: 15,916
    That is the whole set. See the difference in what?

  • That is the whole ++ set. The difference between ++ and +.
  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-23 16:39
    TonyB_ wrote: »
    Evan, thanks for the results. I'll have a closer look at s17 later. Could you do all the xoroshiro+ s16 so we can see the difference?

    I found a formula last night for "closeness" to a known distribution:
    Sum of (Actual - Expected) squared / Expected
    

    This is related to Chi-Square and is different from my |Actual - Expected| / Expected quality measure. Lowest sum is the winner and stop summing when Expected = 0, obviously. I haven't tried this yet.

    Here are a few results:

    Pair frequency
    (Actual-Expected)^2/Expected
    # a,  b,  c,  d,   pfreq0,   pfreq1,   pfreq2,   pfreq3,   pfreq4,   pfreq5,   pfreq6,   pfreq7,   pfreq8,   pfreq9,   pfreq10,  pfreq11,  pfreq12,    total, total0-3
      3,  2,  6,  5,   129.71,   138.85,    55.84,    24.91,   137.88,   101.25,    40.91,    27.55,     4.65,     2.63,      0.04,     1.60,     0.33,   666.15,   349.31
     14,  2,  7,  5,  7184.00,  7286.87,  3483.69,  1197.57,  7724.14,  6948.33,  3192.41,  1142.14,   307.31,    38.42,      9.12,     0.03,     0.00, 38514.03, 19152.13
     14,  2,  7,  6,   304.16,   315.10,   140.78,    51.93,   342.43,   246.52,   145.09,    26.00,    27.81,     3.65,      0.28,     1.60,     1.33,  1606.68,   811.97
    
    total = pfreq0 + pfreq1 + ... + pfreq12 
    total0-3 = pfreq0 + pfreq1 + pfreq2 + pfreq3
    
    

    The new formula looks good. The squaring gives less weight to small values (the higher frequencies in this case) and magnifies any differences. I have included a total for pfreq0-pfreq3 because pfreq4-pfreq6 might have a little too much weight; expected pfreq6 is 1/720th of pfreq0 and pfreq1, pfreq5 is 1/120th and pfreq4 is 1/24th (pfreqx is 1/x!th.) If the sorted [a,b,c,d] order is the same for both totals then we could use either one.

    For the ideal binomial distribution:

    37% of the possible outputs occur 0 times.
    63% of the possible outputs occur 1+ times.
    98% of the possible outputs occur 0-3 times.
    92% of the actual outputs occur 1-3 times.
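
    The 1/x! ratios quoted above match a Poisson distribution with mean 1 (the limit of the binomial here), so the expected counts could be sketched as follows (the function name and parameters are my own, for illustration):

```c
#include <math.h>

/* Expected pair frequency under a Poisson(1) distribution, which the
   1/x! ratios above describe: Expected[x] = N * e^-1 / x!.
   e^-1 is about 0.37, matching the "37% occur 0 times" figure, and
   pfreq6 comes out 1/720th of pfreq0 as stated. */
double expected_pfreq(double n_outputs, int x)
{
    double fact = 1.0;
    for (int k = 2; k <= x; k++)
        fact *= k;
    return n_outputs * exp(-1.0) / fact;
}
```
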
  • evanhevanh Posts: 15,916
    edited 2018-07-23 12:36
    TonyB_ wrote: »
    That is the whole ++ set. The difference between ++ and +.

    It's amazing how many times I miss your intent in what you've written. I wouldn't have worked it out without that clarification.


    EDIT: Huh, the single summing (+) scrambler can be easily integrated into the same run as double summing (++) in the automation. I'll merge them now if you don't mind. It's easy as at this end. It occupies the d=0 case.

    So, single summing will sit as the first data array in each of the collated files.

  • evanh wrote: »
    TonyB_ wrote: »
    That is the whole ++ set. The difference between ++ and +.

    It's amazing how many times I miss your intent in what you've written. I wouldn't have worked it out without that clarification.


    EDIT: Huh, the single summing (+) scrambler can be easily integrated into the same run as double summing (++) in the automation. I'll merge them now if you don't mind. It's easy as at this end. It occupies the d=0 case.

    So, single summing will sit as the first data array in each of the collated files.

    Great idea to use d = 0 for the + scrambler.
  • evanhevanh Posts: 15,916
    Eek, the combined zip file has grown more than expected. s15's are still left out.
  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-24 12:36
    TonyB_ wrote: »
    I found a formula last night for "closeness" to a known distribution:
    Sum of (Actual - Expected) squared / Expected
    

    This is related to Chi-Square and is different from my |Actual - Expected| / Expected quality measure.

    I have the full results for xoroshiro32+ and xoroshiro32++ but unfortunately I can't attach files using this PC so there will be a delay in posting them.

    I calculated the sums as follows.
    Let a = Actual and b = Expected, then
    (a-b)^2/b = (a^2 - 2ab + b^2)/b = a^2/b - 2a + b = a(a/b - 2) + b
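
    The rearrangement can be verified numerically; a quick sketch (function names are mine):

```c
/* Numeric check that the rearranged form above matches the direct one:
   (a-b)^2 / b  ==  a*(a/b - 2) + b,  for b != 0. */
double chi_direct(double a, double b)     { return (a - b) * (a - b) / b; }
double chi_rearranged(double a, double b) { return a * (a / b - 2.0) + b; }
```
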


  • evanhevanh Posts: 15,916
    edited 2018-07-25 01:06
    Intriguing, what's the computer you use? It has a small amount of RAM, is x86 of some sort, but can't handle the webpage attachments. The restricted webpage handling tells me you must be using a web browser that's at least a decade old, probably older.

    EDIT: I must admit it is annoying that when Parallax changed the forum software they used so much javascript in it. Attachments are one place it was used for no apparent useful purpose.
  • TonyB_TonyB_ Posts: 2,178
    edited 2018-07-25 13:08
    evanh wrote: »
    Intriguing, what's the computer you use? It has a small amount of RAM, is x86 of some sort, but can't handle the webpage attachments. The restricted webpage handling tells me you must be using a web browser that's at least a decade old, probably older.

    EDIT: I must admit it is annoying that when Parallax changed the forum software they used so much javascript in it. Attachments are one place it was used for no apparent useful purpose.

    Intel Pentium III 800 MHz, 512MB, Windows 98SE + KernelEx, Firefox 3.6.28 (2010). KernelEx is a kernel extension that allows some 2000/XP programs to run on 98/ME - truly excellent. 384MB of the RAM is configured as a RAM drive with the Windows swap file on it, which gives me a super fast system. In the past I've run 98SE entirely from a compressed file on the RAM drive, booting from a local drive or a server. For quite a few months after I joined this forum, paragraphs were displayed as a single line and I had to do a lot of tedious horizontal scrolling, until I found a suggestion to tell Firefox I'm using an iPhone. The mobile version of this site is great - no more sideways scrolling! :)
  • Evan, how big is a .7z file with all the xoroshiro32++ grid scores?