Ok! This time for sure!

I found the problem - a classic C buffer overflow issue, but one that seems to have caused the process to crash catastrophically only when it was started by 'make', and not when started from the command line. Possibly just a very slight difference in the way memory is allocated and used.
Thanks to @ke4pjw for the tip to use Process Explorer, which showed me (eventually) that the cryptic error messages reported by 'make' had sent me down entirely the wrong path.
Apologies to Micro$oft - this one wasn't their fault!

I have removed the "preview" of Catalina 6.0 and replaced it with a complete BETA release here. Currently Windows only.

This release supports both the P2 EDGE and EVAL boards when equipped with either PSRAM or Hyper RAM.

For more details, see the main Catalina thread here.
Just thought I'd add an update. Progress has been slow recently because I've been busy on other things, but there has been some.
I've added an option for Catalina to use a caching version of the file system, which caches file accesses in spare PSRAM. This reduces compile times by up to 35%, and could also be used by other programs. Using it also has the benefit of extending the life of SD cards.
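For a concrete picture of how a PSRAM-backed cache spares the card, here is a minimal sketch of a direct-mapped, write-back sector cache. It is illustrative only - not Catalina's actual implementation - and the sd_read_sector/sd_write_sector/psram_read/psram_write hooks are hypothetical:

#include <stdint.h>
#include <stdbool.h>

#define SECTOR_SIZE   512
#define CACHE_SECTORS 4096   /* 2MB of spare PSRAM, direct-mapped */

/* Hypothetical low-level hooks - names are illustrative only */
extern int  sd_read_sector(uint32_t lba, uint8_t *buf);        /* 0 = ok */
extern int  sd_write_sector(uint32_t lba, const uint8_t *buf); /* 0 = ok */
extern void psram_read(uint32_t addr, uint8_t *buf, uint32_t len);
extern void psram_write(uint32_t addr, const uint8_t *buf, uint32_t len);

static uint32_t tag[CACHE_SECTORS];     /* which LBA occupies each slot */
static bool     valid[CACHE_SECTORS];
static bool     dirty[CACHE_SECTORS];   /* written but not yet flushed */

/* Evict a slot, flushing it to the card only if it was written to. */
static int evict(uint32_t s) {
    if (valid[s] && dirty[s]) {
        uint8_t buf[SECTOR_SIZE];
        psram_read(s * SECTOR_SIZE, buf, SECTOR_SIZE);
        if (sd_write_sector(tag[s], buf) != 0) return -1;
        dirty[s] = false;
    }
    return 0;
}

int cached_read(uint32_t lba, uint8_t *buf) {
    uint32_t s = lba % CACHE_SECTORS;
    if (!valid[s] || tag[s] != lba) {   /* miss: fill the slot from SD */
        if (evict(s) != 0) return -1;
        if (sd_read_sector(lba, buf) != 0) return -1;
        psram_write(s * SECTOR_SIZE, buf, SECTOR_SIZE);
        tag[s] = lba;
        valid[s] = true;
        return 0;
    }
    psram_read(s * SECTOR_SIZE, buf, SECTOR_SIZE);  /* hit: no SD access */
    return 0;
}

/* Writes land in PSRAM only - this is what spares the SD card. */
int cached_write(uint32_t lba, const uint8_t *buf) {
    uint32_t s = lba % CACHE_SECTORS;
    if (valid[s] && tag[s] != lba && evict(s) != 0) return -1;
    psram_write(s * SECTOR_SIZE, buf, SECTOR_SIZE);
    tag[s] = lba;
    valid[s] = true;
    dirty[s] = true;
    return 0;
}

A write-back design like this needs a flush on unmount or it loses data on power-off; the real cache's policy and layout may well differ.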
Self-hosted Catalina is never going to set the world on fire, but here are the current compile times:

hello.c (5 lines) - 8 minutes (was 10)
othello.c (470 lines) - 10 minutes (was 13)
startrek.c (2200 lines) - 42 minutes (was 56)
chimaera.c (5500 lines) - 110 minutes (was 170)
The main thing holding up the non-beta release is that I have found it impossible to get my SD card driver to work 100% reliably on all my SD cards. On the larger compilations I get occasional sector write failures. Everything works quite reliably on 6 of my 8 test cards, but the other 2 routinely fail at some point. It may be bad sectors on the cards themselves, but when I test them using various SD card test programs they pass, so I am not sure. Possibly, it is the sustained writing of larger files that these cards don't like (the cards that work reliably tend to be larger capacity and/or higher speed rating).
I will probably give up on this soon and just release it with a warning note to use good quality SD cards if you want to use self-hosted Catalina, because I can't find any issues in the SD card driver itself. The code I use was originally written by @Cluso99 and seems quite reliable when used for other purposes. But if anyone has some low-level PASM (not SPIN) SD card sector read/write functions, please let me know and I will try them in place of the code I am currently using.
I did a Pasm2 patch for Eric's low-level (SPI interface) block driver, but I doubt it'll help you with your errors. I think you're right in thinking it'll be a write buffer limitation within the cards - probably something around checking of the card's status during or after writes.
I don't have any old uSD cards - never had any prior to the Prop2. And even my full-sized SD cards are newish, after most of the old ones cracked from being sat on.
Here's said block writing assembly from include/filesys/fatfs/sdmm.cc:
__asm const {                           // "const" prevents use of FCACHE
        dirl    PIN_DI                  // reset tx smartpin, clears excess data
        setq    #1
        rdlong  bc2, buff               // fetch first data
        rev     bc2
        movbyts bc2, #0x1b              // endian swap
        wypin   bc2, PIN_DI             // first data to tx shifter
        mov     bc2, bc
        shr     bc, #2          wz      // longword count (rounded down)
        shl     bc2, #3                 // bit count (exact)
        wypin   bc2, PIN_CLK            // begin SPI clocks
        dirh    PIN_DI                  // liven tx buffer, continuous mode
        add     buff, #8
        rev     d
        movbyts d, #0x1b                // endian swap
tx_loop
  if_nz wypin   d, PIN_DI               // data to tx buffer
  if_nz rdlong  d, buff                 // fetch next data
  if_nz add     buff, #4
  if_nz rev     d
  if_nz movbyts d, #0x1b                // endian swap
tx_wait
  if_nz testp   PIN_DI        wc        // wait for tx buffer empty
  if_nc_and_nz  jmp     #tx_wait
  if_nz djnz    bc, #tx_loop

        // Wait for completion
tx_wait2
        testp   PIN_CLK       wc
  if_nc jmp     #tx_wait2

        dirl    PIN_DI                  // reset tx smartpin to clear excess data
        wypin   ##-1, PIN_DI            // TX 0xFF, continuous mode
        dirh    PIN_DI
}
I plan to do more investigation and testing when I get more time, and I will try checking the card status register again. But when I have done this in the past, it has not returned anything useful. Typically, the write does not "fail" as such, so the card does not return any particular error - the card just fails to respond.
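For reference, "checking the card status register" in SPI mode means issuing CMD13 (SEND_STATUS), which returns the two-byte R2 response. A minimal sketch, assuming the card is already selected and not busy, with a hypothetical spi_xfer() byte helper:

#include <stdint.h>

extern uint8_t spi_xfer(uint8_t out);    /* hypothetical: clock one byte */

/* Issue CMD13 (SEND_STATUS) and return the 16-bit R2 response.
   In SPI mode R2 is two bytes: R1 first, then a status byte whose bits
   flag things like write-protect violations and card ECC failures. */
uint16_t sd_send_status(void) {
    uint8_t r1 = 0xFF;

    spi_xfer(0x40 | 13);                 /* command index 13 */
    for (int i = 0; i < 4; i++)
        spi_xfer(0x00);                  /* CMD13 takes no argument */
    spi_xfer(0x01);                      /* dummy CRC + stop bit */

    for (int i = 0; i < 8; i++) {        /* R1 arrives within 8 byte times */
        r1 = spi_xfer(0xFF);
        if (!(r1 & 0x80)) break;         /* MSB clear = valid R1 */
    }
    return ((uint16_t)r1 << 8) | spi_xfer(0xFF);   /* append status byte */
}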
SPI mode always acknowledges commands, unlike SD mode, AFAIK.

Although, maybe a BUSY (away-writing-the-blocks-into-flash) signal will prevent a response other than the solid busy state. Looking at sdmm.cc, it has a wait_ready() function that is called before new commands, via select(), and also before each write block sent during a write command. Looks like a BUSY is a continuous low on the incoming data pin.
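For anyone following along, wait_ready() in FatFs-derived drivers like sdmm.cc boils down to clocking 0xFF until the card releases the DO line. A paraphrased sketch, not the file's exact code - the millis() tick helper is hypothetical:

#include <stdint.h>

extern uint8_t  spi_xfer(uint8_t out);   /* hypothetical: clock one byte */
extern uint32_t millis(void);            /* hypothetical millisecond tick */

/* Busy in SPI mode = card holds DO low, so it answers 0x00 to our 0xFF.
   Ready = it releases DO and we read 0xFF back.
   Returns 1 when ready, 0 on timeout. */
static int wait_ready(uint32_t timeout_ms) {
    uint32_t start = millis();
    do {
        if (spi_xfer(0xFF) == 0xFF)
            return 1;
    } while (millis() - start < timeout_ms);
    return 0;    /* still busy - the failure mode described above */
}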
Yes, commonly what I see is just a continuously busy signal. But I have tried upping the timeout to several seconds, and the card never completes the operation.
Quote from SD spec SPI interface mode 7.2.4: "While the card is busy, resetting the CS signal will not terminate the programming process. The card will release the DataOut line (tri-state) and continue with programming. If the card is reselected before the programming is finished, the DataOut line will be forced back to low and all commands will be rejected."
@evanh said:
Quote from SD spec SPI interface mode 7.2.4: "While the card is busy, resetting the CS signal will not terminate the programming process. The card will release the DataOut line (tri-state) and continue with programming. If the card is reselected before the programming is finished, the DataOut line will be forced back to low and all commands will be rejected."
If I disabled the timeout altogether, the operation might eventually complete - but I did try that once, and it didn't seem to. But even if it did, if writing each sector takes a few seconds, a large compile would take hours to write the output from each step of the compilation, so what would be the point?
Hmm, dunno, I've not encountered such with my FlexC based speed tests. The tester overwrites a few files five times each with new randomised data each pass. The largest file is 212000 bytes long. The size was mostly limited by keeping a copy of the random data to compare on read-back. There wasn't any motivation to go larger.
clkfreq = 240000000 clkmode = 0x1000bfb
addr1 = 0xe3cc addr2 = 0x26a6c Randfill ticks = 225061
Written 100000 of 100000 bytes at 2867 kB/s
Read 100000 of 100000 bytes at 4105 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x26a6c Randfill ticks = 225061
Written 100000 of 100000 bytes at 2891 kB/s
Read 100000 of 100000 bytes at 4138 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x26a6c Randfill ticks = 225061
Written 100000 of 100000 bytes at 2826 kB/s
Read 100000 of 100000 bytes at 4176 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x26a6c Randfill ticks = 225061
Written 100000 of 100000 bytes at 2902 kB/s
Read 100000 of 100000 bytes at 4177 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x41fec Randfill ticks = 477061
Written 212000 of 212000 bytes at 3205 kB/s
Read 212000 of 212000 bytes at 4243 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x41fec Randfill ticks = 477061
Written 212000 of 212000 bytes at 3203 kB/s
Read 212000 of 212000 bytes at 4241 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x41fec Randfill ticks = 477061
Written 212000 of 212000 bytes at 3184 kB/s
Read 212000 of 212000 bytes at 4263 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x41fec Randfill ticks = 477061
Written 212000 of 212000 bytes at 3182 kB/s
Read 212000 of 212000 bytes at 4263 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x41fec Randfill ticks = 477061
Written 212000 of 212000 bytes at 3179 kB/s
Read 212000 of 212000 bytes at 4242 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x131ec Randfill ticks = 45061
Written 20000 of 20000 bytes at 1877 kB/s
Read 20000 of 20000 bytes at 3737 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x131ec Randfill ticks = 45061
Written 20000 of 20000 bytes at 1592 kB/s
Read 20000 of 20000 bytes at 3749 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x131ec Randfill ticks = 45061
Written 20000 of 20000 bytes at 1602 kB/s
Read 20000 of 20000 bytes at 3499 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x131ec Randfill ticks = 45061
Written 20000 of 20000 bytes at 1881 kB/s
Read 20000 of 20000 bytes at 3740 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0x131ec Randfill ticks = 45061
Written 20000 of 20000 bytes at 1834 kB/s
Read 20000 of 20000 bytes at 3740 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xeb9c Randfill ticks = 4565
Written 2000 of 2000 bytes at 278 kB/s
Read 2000 of 2000 bytes at 2085 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xeb9c Randfill ticks = 4565
Written 2000 of 2000 bytes at 356 kB/s
Read 2000 of 2000 bytes at 2371 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xeb9c Randfill ticks = 4565
Written 2000 of 2000 bytes at 353 kB/s
Read 2000 of 2000 bytes at 2389 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xeb9c Randfill ticks = 4565
Written 2000 of 2000 bytes at 261 kB/s
Read 2000 of 2000 bytes at 2353 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xeb9c Randfill ticks = 4565
Written 2000 of 2000 bytes at 365 kB/s
Read 2000 of 2000 bytes at 2393 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xe494 Randfill ticks = 509
Written 200 of 200 bytes at 44 kB/s
Read 200 of 200 bytes at 672 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xe494 Randfill ticks = 509
Written 200 of 200 bytes at 34 kB/s
Read 200 of 200 bytes at 480 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xe494 Randfill ticks = 509
Written 200 of 200 bytes at 45 kB/s
Read 200 of 200 bytes at 672 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xe494 Randfill ticks = 509
Written 200 of 200 bytes at 43 kB/s
Read 200 of 200 bytes at 684 kB/s Matches! :)
addr1 = 0xe3cc addr2 = 0xe494 Randfill ticks = 509
Written 200 of 200 bytes at 34 kB/s
Read 200 of 200 bytes at 672 kB/s Matches! :)
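For anyone wanting to reproduce that test, its shape is roughly the following - a sketch of the method as described, not the actual tester source, with sizes taken from the log above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PASSES    5
#define MAX_SIZE  212000   /* largest file in the log above */

static unsigned char wbuf[MAX_SIZE], rbuf[MAX_SIZE];

/* Overwrite one file PASSES times with fresh random data, verifying
   each pass by reading back and comparing. Returns 0 on success. */
int hammer_file(const char *name, size_t size) {
    for (int pass = 0; pass < PASSES; pass++) {
        for (size_t i = 0; i < size; i++)
            wbuf[i] = (unsigned char)rand();

        FILE *f = fopen(name, "wb");
        if (!f || fwrite(wbuf, 1, size, f) != size) return -1;
        fclose(f);

        f = fopen(name, "rb");
        if (!f || fread(rbuf, 1, size, f) != size) return -1;
        fclose(f);

        if (memcmp(wbuf, rbuf, size) != 0)
            return -1;                   /* read-back mismatch */
    }
    return 0;
}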
@evanh said:
Hmm, dunno, I've not encountered such with my FlexC based speed tests.
I should emphasize that the problems only happen on a couple of my older/slower/smaller SD cards. Newer/faster/larger capacity SD cards seem to all work fine.
I just need a few more tests to convince myself that this is a card failure rather than a software failure.
I guess the question I have now is does a FlexC compile of your failing tests produce the same outcome on the same cards?
EDIT: Oh, it's a self-hosted compile run that fails, I see. Have you made a built binary, something that can be compiled with FlexC, that gets any errors?
@evanh said:
I guess the question I have now is does a FlexC compile of your failing tests produce the same outcome on the same cards?
EDIT: Oh, it's a self-hosted compile run that fails, I see. Have you made a built binary, something that can be compiled with FlexC, that gets any errors?
AFAIK, FlexC could not compile these programs, so no. But I may be able to use the FlexC SD card code in Catalina. I will investigate.

Catalina's self-hosted C development capability now has an official release - see here.
Was there any resolution to the SD Card compatibility question? I don't have any old SD cards that haven't been mechanically destroyed. FlexC's SD init code does a bunch of mode checks for SD v1.0 type cards and can, for example, handle configuring their block sizes.
It's actually something I'm interested in because I'm in the middle of converting that very init code to stay in the default SD interface mode and not switch over to SPI interface mode.
@evanh said:
Was there any resolution to the SD Card compatibility question? I don't have any old SD cards that haven't been mechanically destroyed. FlexC's SD init code does a bunch of mode checks for SD v1.0 type cards and can, for example, handle configuring their block sizes.
It's actually something I'm interested in because I'm in the middle of converting that very init code to stay in the default SD interface mode and not switch over to SPI interface mode.
Not really sure. I managed to get the current code working reliably via a combination of timing tweaks, retrying failures, and also adding the SD cache (which massively reduces the number of SD card accesses).
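The retry part of that is simple enough to sketch (names illustrative, not Catalina's actual code):

#include <stdint.h>

extern int sd_write_sector(uint32_t lba, const uint8_t *buf);  /* 0 = ok */

#define MAX_RETRIES 3

/* Wrap the raw sector write so transient failures get retried before
   being reported as an error. */
int write_sector_retry(uint32_t lba, const uint8_t *buf) {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (sd_write_sector(lba, buf) == 0)
            return 0;
        /* a real driver might re-select or re-init the card here */
    }
    return -1;   /* hard failure after all retries */
}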
The cards that failed were both older and lower capacity, and even they worked until I really hammered them with a particularly large C compilation. All my newer/higher capacity cards always worked quite reliably.
If anyone wants to try their SD card and let me know, unpack one of the pre-compiled versions of Catalina from the 6.0.1 release (e.g. P2_EDGE.ZIP if you have a P2-EC32MB) to a freshly formatted SD card (formatted as FAT32 with a 32kb cluster size) and then try the following compilation:
catalina chimaera.c -lcx -lmc -v
WARNING: This compilation is non-trivial - it is around 6,000 lines of C, which ends up being about 70,000 lines of PASM (once all the library and runtime support code is included) for a final binary file size of 170kb. It will take around 2 hours on a P2-EC32MB (longer on a P2 Evaluation board with HyperRAM which has only 16MB of PSRAM) but smaller compiles always work on all the cards I have - it seems to take a large compile to exhibit any failure. Any failure will be very evident - the program will print a message.
Compile started on an Edge EC32MB via dumb terminal ... hmm, not drawing any current from the power supply ... only thing printed so far is Catalina Version 6.0
Trying again ... difficult to know if the SD card is even trying to boot ... eventually got a prompt after many DTR toggles ... getting power draw now ...
New compile message:
rm -k /tmp/chimaera.cpp /tmp/chimaera.rcc chimaera.s catalina.s catalina.cmd /chimaera.bin
Oops, stopped watching, don't know how long ago this was but rest of messages were:
It looks like the compilation started ok. It may have been your rebooting at an inopportune time that disrupted it. It takes Catalina some time to generate the necessary auto-execute script, and then it reboots the Prop itself to start it executing. Wait a bit longer before assuming nothing is happening.
Rebooting an auto-execute script makes the script move to the next line, but that can lead to cascading failures if the initial step did not complete. To fully abort an executing script, hold down any key (I usually use ESC) when rebooting. You should see a message asking if you want to continue the auto-execution. Press N to abort the script.
But it is concerning that the script apparently could not find the necessary Catalina executables (e.g. RCC or P2ASM). These should be in the bin directory. Can you print out that directory? Use the following command:
ls -r bin
or ls bin/
Also, try entering one of the commands it said it could not find manually, like:
rcc
or p2asm
Even with no parameters, these should both print some output.
Then perhaps try a smaller compile first:
catalina hello.c -lci -v
Let it run to completion - this should take 7 or 8 minutes.
Okay, I guess the 64 blocks per cluster is important. I've read up the man page for mkfs.fat now ... sudo mkfs.fat -v -F32 -s64 -I /dev/sdg1 -n SAN32 ...
Yep, that did it, RCC and co are working now. Time to try the compiling again I guess ...
You've clearly got a faster SD Card than me. The hello.c compile took about 8 minutes to get to p2asm. 11.5 minutes total.
@evanh said:
Okay, I guess the 64 blocks per cluster is important.
Yes, the cluster size needs to be 32k (i.e. 64 sectors) to accommodate the maximum supported executable size ...
' The maximum size of programs that can be loaded by Catalyst is determined
' by the size of the cluster list and the cluster size itself. The size of
' the cluster list is set in constant.inc and catalyst.h (they must
' match!) for a maximum program size of 4Mb but this will only be achieved
' when using a cluster size of 32k. For other cluster sizes, see the table
' below:
'
'    Cluster Size    Max Program Size
'    ============    ================
'    512 bytes        64 kbyte
'    1 kbyte         128 kbyte
'    2 kbyte         256 kbyte
'    4 kbyte         512 kbyte
'    8 kbyte           1 Mbyte
'    16 kbyte          2 Mbyte
'    32 kbyte          4 Mbyte
Catalina's p2asm is over 1MB, and rcc is over 2.5MB - so you need the 32k cluster size to do a compilation.
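The table follows from a fixed-length cluster list: 4Mb at 32k clusters implies 128 entries, and every row is just that entry count times the cluster size. A quick check (the entry count is inferred from the table, not read from constant.inc):

#include <stdio.h>

/* Inferred from the table: 4Mb max / 32k clusters = 128 list entries.
   (Hypothetical constant - the real value lives in constant.inc.) */
#define CLUSTER_LIST_ENTRIES 128

int main(void) {
    for (unsigned size = 512; size <= 32768; size <<= 1)
        printf("%6u-byte clusters -> max program %4u kbyte\n",
               size, size * CLUSTER_LIST_ENTRIES / 1024);
    return 0;
}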
I should make this more prominent in the documentation.
EDIT: Given that release 6.0.1 is specifically about self-hosting Catalina, I think the need to use a 32k cluster size to make it work is a significant enough omission that I will shortly issue a small "errata" update, which will have no functional changes but will include an update to the Catalyst documentation.

EDIT: Errata now released - see here.
Wow, I just copied the self-hosted install onto a 128 GB EXFAT-partitioned SD card and it copied the whole 5000+ files in a few seconds! Reformatted as FAT32, I did the same and it took a few minutes instead.
EDIT: Ah-ha, looks like it's just different defaults for FAT32 vs EXFAT. By manually mounting the FAT32-partitioned SD card I can get it written just as quickly as the EXFAT-partitioned card:
sudo mount -o user,uid=evanh,gid=evanh,async,lazytime,discard /dev/sdg1 /media/evanh/SAM128G/
It would be nice to have a modern, open source, universal, license-free and otherwise unencumbered file system format.
I doubt I will live that long.
I don't think there is a barrier there. A little surprisingly, M$ donated their exFAT filesystem patents as open source (https://www.zdnet.com/article/exfat-is-on-its-way-to-the-linux-kernel/), and Samsung wrote an open-source implementation for the Linux kernel sources (https://www.phoronix.com/news/Linux-5.7-New-exFAT-Lands). I haven't read any license for it, but it must be GPL compliant to be upstreamed like that.
Not sure what sort of backporting is happening, but I don't see any licensing roadblocks.
flexspin's vfs has exFAT support if you compile it with a flag (it will bloat your binary a fair bit). The real issue isn't licensing, it's complexity. Nothing stops you from using ext4, except that it is really complicated. FAT32 is dead simple and has a driver in Windows. That's why it's used.
Well, it seems to have come along a bit license-wise since I last looked at it - an MIT licensed version is now available.
https://github.com/greiman/SdFat
However, the memory footprint still looks too large for a Propeller 1 - that version of exFAT says ~15kb, whereas DOSFS (FAT16/FAT32) is under 4kb.
So while I may have a play with exFAT at some point, I doubt Catalina will adopt it. I don't want to have to support two different file systems.
Doesn't worry me. I've now worked out that it's not the filesystem that makes FAT32 slow on Linux - it's the default mount options.
Enlighten me on that, please.
From above:
sudo mount -o user,uid=evanh,gid=evanh,async,lazytime,discard /dev/sdg1 /media/evanh/SAM128G/
I'm assuming it's the async option. I haven't tried the combinations, just unmounted it and mounted again using this.