FAT Filesystem question
DavidZemon
Is it possible to use a FAT (16 or 32) filesystem on a storage device using block sizes other than 512-bytes?
Does anyone know of an example of a storage device that uses blocks but not 512-byte blocks?
Thanks,
David
Comments
Block sizes of 128 bytes and 1024 bytes have been used with floppy disks in the past, but that's all. The 1024-byte blocks were split into two logical blocks, and I think the 128-byte blocks were grouped in sets of four to provide a 512-byte logical block.
Floppy disks were soft-sectored, but SD memory is arranged in 512-byte blocks (or sectors, if you like), and in the case of SDHC the addressing is at the block level, not the byte level. Seeing that this is a Propeller forum rather than an ask-anything-and-it-will-be-answered forum, it might be nice to state why you are asking and what you are hoping to achieve.
Newer hard disks use 4096-byte sectors.
CD-ROMs use 2048 bytes per sector.
It is possible to use the FAT file system on these devices through a software or hardware layer that reads the non-standard sectors and presents them to the FAT filesystem driver as standard 512-byte sectors.
The FAT file system that is used by PropGCC interfaces to the hardware through sector read and write functions. These functions can be used to adapt any physical block size to 512-byte sector sizes.
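As a rough illustration, here is a minimal sketch of such an adapter (the physical-read routine and its name are just placeholders, not the actual PropGCC driver functions). It exposes 512-byte logical sectors on top of a device with 2048-byte physical blocks:

    #include <stdint.h>
    #include <string.h>

    // Placeholder for whatever low-level routine the real hardware provides;
    // it fills `dest` with one whole physical block
    extern int read_physical_block (uint32_t blockAddress, uint8_t *dest);

    static const uint32_t PHYS_BLOCK_SIZE   = 2048;  // e.g. a CD-ROM sector
    static const uint32_t LOGICAL_SECTOR    = 512;   // what FAT expects
    static const uint32_t SECTORS_PER_BLOCK = PHYS_BLOCK_SIZE / LOGICAL_SECTOR;

    // 512-byte sector read for the FAT driver: map the logical sector onto the
    // larger physical block and copy out the matching 512-byte slice
    int read_sector (uint32_t sector, uint8_t *dest) {
        static uint8_t  cache[PHYS_BLOCK_SIZE];
        static uint32_t cachedBlock = 0xFFFFFFFF;

        const uint32_t block  = sector / SECTORS_PER_BLOCK;
        const uint32_t offset = (sector % SECTORS_PER_BLOCK) * LOGICAL_SECTOR;

        if (block != cachedBlock) {
            const int err = read_physical_block(block, cache);
            if (err)
                return err;
            cachedBlock = block;
        }
        memcpy(dest, cache + offset, LOGICAL_SECTOR);
        return 0;
    }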
I am completely refactoring the way PropWare handles SD cards. I first wrote it in C a couple years ago as part of a class project for the 8051 (it was easier to write it for the prop and then port to the 8051 than write directly for the 8051). Since then, I've learned A LOT and it's time for massive changes. The question was brought up because I am extracting the code into three new classes and as many interfaces. I will now have the following structure:
One thing I have managed to do with all of PropWare so far is avoid dynamic allocation - partly to reduce code size, partly to avoid poor memory management by users. To do that, I have to statically allocate buffers, but I also want to allow a user to define their own buffers for increased performance.
So now you might see my problem... how do I statically allocate buffers when the buffer size is dependent on the BlockStorage implementation? It could be implemented by SD, or CD, or floppy... anything (that's the idea anyway). I could just implement a hard rule that says BlockStorage only works with devices that use 512-byte blocks. Or I could give up my no-dynamic-allocation rule. Or I could... not sure. Any other ideas? Right now I have 512 hard-coded in the BlockStorage interface... but I don't like that.
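To make the problem a little more concrete, here's a rough sketch of what I mean (the names are just placeholders, not final PropWare classes):

    #include <stdint.h>
    #include <stddef.h>

    // Placeholder interface: each block-storage driver reports its own sector size
    class BlockStorage {
        public:
            virtual uint16_t get_sector_size () const = 0;
            virtual int read_data_block (uint32_t address, uint8_t *buf) = 0;
    };

    // Statically allocated buffer: the size is a template parameter, so there is
    // no dynamic allocation, but N must be known wherever the buffer is declared
    template<size_t N>
    struct Buffer {
        uint8_t  data[N];
        uint32_t currentSector;
        bool     dirty;
    };

    // A user who knows they're talking to an SD card can write
    //     Buffer<512> buffer;
    // but a generic FAT layer that only sees a BlockStorage* can't pick N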
Also, SD cards have multiblock write and read commands which are very fast. Just make the commands transfer the buffer size in one go. For example, if you make the in-memory buffer 4KB then you can use the multiblock read to pull in the whole 4KB much faster than doing eight 512-byte reads back to back.
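Roughly like this (the low-level helpers here are just stand-ins for whatever SPI routines you already have):

    #include <stdint.h>

    // Stand-ins for the low-level card I/O, not PropWare's actual routines
    extern void send_command (uint8_t cmd, uint32_t argument);
    extern void read_block (uint8_t *dest, uint16_t length);

    static const uint8_t  CMD_READ_MULTIPLE = 18;   // SD READ_MULTIPLE_BLOCK
    static const uint8_t  CMD_STOP          = 12;   // SD STOP_TRANSMISSION
    static const uint16_t BLOCK_SIZE        = 512;

    // Fill a larger buffer (e.g. 4KB => blockCount = 8) in one multiblock
    // transaction instead of eight separate single-block reads
    void read_blocks (uint32_t firstBlock, uint8_t *buffer, uint8_t blockCount) {
        send_command(CMD_READ_MULTIPLE, firstBlock);
        for (uint8_t i = 0; i < blockCount; ++i)
            read_block(buffer + i * BLOCK_SIZE, BLOCK_SIZE);
        send_command(CMD_STOP, 0);
    }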
One thing to remember is that this is flash, not magnetic media, and what works well for a PC with FAT32 will not always be best for flash, especially updating directory entries for each and every byte written; that guarantees your SD will wear out much sooner, which results in slower operation and eventual failure. SDs work well for digital cameras because they just write one big file all at once each time, the files are not fragmented, and the names are in 8.3 format.
The optimizations you propose are good ones, but not ones I'm willing to use in PropWare's SD implementation. I'd like to fully support fragmented files and, ideally, long filenames (though that hasn't happened yet). Perhaps another implementation of SD (called SDFast?) could be written which uses your suggestions and implements the PropWare interfaces.
As for the extra wear on an SD card, no worries. Data is only written to the SD card when the buffer needs to be overwritten, or when the user calls the unmount() method. Of course, Kye's suggestion of a variable buffer size would help even more.
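In other words, something along these lines (simplified sketch; the helper names are made up):

    #include <stdint.h>

    // Made-up stand-ins for the card-access routines
    extern int sd_read_block (uint32_t sector, uint8_t *buf);
    extern int sd_write_block (uint32_t sector, const uint8_t *buf);

    struct SectorBuffer {
        uint8_t  data[512];
        uint32_t currentSector;
        bool     dirty;
    };

    // The card is only written when the cached sector has to be evicted (or on
    // unmount), so repeated edits to the same sector cost no extra wear
    int load_sector (SectorBuffer *buf, uint32_t sector) {
        if (buf->currentSector == sector)
            return 0;                                   // already cached
        if (buf->dirty) {
            const int err = sd_write_block(buf->currentSector, buf->data);
            if (err)
                return err;
            buf->dirty = false;
        }
        buf->currentSector = sector;
        return sd_read_block(sector, buf->data);
    }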
Kye,
I will keep this in mind when I get to it. A 4k buffer would certainly help. I already (optionally) have two buffers - one for generic use and one for the FAT (so that it doesn't have to be swapped in and out all the time).
But... I'm still not sure how I'm going to have a buffer that can vary in size at compile time depending on the implementation of the BlockStorage device. Preprocessor macros might be usable in combination with command-line definitions... that would mean another pre-compile step, which I'm not a fan of... but it's at least an option. It would also not allow for connecting two devices with different block sizes.
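For example, the macro route might look something like this (PROPWARE_BLOCK_SIZE is just a name I'm toying with, nothing final):

    // Default to 512, but allow an override from the command line, e.g.
    //     propeller-elf-gcc -DPROPWARE_BLOCK_SIZE=2048 ...
    #ifndef PROPWARE_BLOCK_SIZE
    #define PROPWARE_BLOCK_SIZE 512
    #endif

    #include <stdint.h>

    // One statically allocated buffer, sized by the macro; since the size is
    // fixed per build, two attached devices with different block sizes still
    // can't each get a correctly sized buffer
    static uint8_t g_blockBuffer[PROPWARE_BLOCK_SIZE];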
Of course the implications of long file names are a bit more involved when you consider that you not only have to follow long names but also search and match on them, along with the extra memory required. Keeping 32 bytes in RAM for each open file is nothing, and there is no advantage to using LFNs in embedded systems. My philosophy is to not let the resource-poor Prop be burdened with the MS baggage; let MS sort out its own mess. By treating a file as flat memory, the application can access the file with standard random read and write addresses rather than the "traditional" (had to/used to do it that way) method. It's a breath of fresh air.