What SD card size (GB) to buy for Prop BOE?
HShanko
Seems that 2 GB has been the smallest size recently. Is there any reason to get larger size for the Prop BOE soon to arrive?
Do the SD card drivers allow for any size, or are there limits?
Thanks for any guidance.
Comments
I've used 1, 2 and 8 GB cards a lot with the Prop (I probably have used a dozen different cards with it). I've had very few (only one) that wouldn't work; the one that wouldn't work was a 256KB SD card. I have had to reformat (slow format) a couple of cards in order to get them to work. Kye's driver is picky about any errors on the card.
I've been really pleased how easy it is to use Kye's SD driver.
Edit: I can't see how you'd ever use more than 1GB with a Prop unless you have a bunch of graphic files (like movies that can play on the Prop). I suppose wav files could take up some room but I'd still think 1GB would be plenty for any audio files you might want.
What's a few orders of magnitude with SD cards? It was a 256MB card.
Above the 2GB barrier you enter SDHC territory, which I think Kye's drivers are OK with, but I don't know if all the others are. I found that buying generic cards and using them in the cameras etc. was fine, and then I could hoard the SanDisk ones for Prop stuff (mostly DracBlades).
(I do have an 8MB SD card here)
Thank you, everyone.
Parallax stocks #32317 Samsung OEM microSD Card 1 GB and we have 236 units in stock. I'm not sure why it's not listed on our web site but I'll find out. Of interest is that David Carrier chose the size and source, and he also participated in the Propeller BOE design so I'm sure there's some reasoning.
I think they'll sell at $5-10, but I'm not sure. I'll report back when I have more information.
I've tried searching for '32317' and 'SD Card' and only get a 'No product found' message. Thanks for the reference. Hope the link can be fixed.
These cards are not properly formatted for the Propeller BOE. They'll work with Spinneret, however.
If there are any SD card experts out there willing to help I'd be glad to put you in contact with Jessica and David for more information.
We're looking into getting a pre-formatted card for the Propeller BOE next.
Ken Gracey
I was wondering what the best size microSD card would be. OK, I'll just wait for the pre-formatted card.
I used a 4GB card to benchmark and it worked fine. Ironically, a 64MB MMC card didn't work.
The joke about SD is that it replaced MMC because of the S (Secure) in SD, yet I haven't seen any consumer application take advantage of that. Every application I've seen uses SD cards just like the MMC spec.
A 2GB card formatted FAT16 with 32K clusters is a sweet spot, reducing metadata reads/writes.
If you know you don't need much space or many files, smaller cards also work well, but in general a
2GB card will be close to optimal (unless you need more space than this). Even with a 4GB card, formatting
it as FAT16/32K cluster/2GB will make it faster than using it as a native FAT32 card.
Thank you for that 2/4 GB information above.
When reading or writing data, block management needs to be done: keeping track of
which clusters are free, where the next cluster is, and so on.
FAT12, FAT16, and FAT32 use a cluster pointer arrangement, where every cluster
has a single integer in the FAT. In FAT12, the integer is 12 bits; in FAT16, it is
16 bits; in FAT32, it is 32 bits.
The number of FAT entries per 512-byte block is 512 divided by the entry size in
bytes: for FAT16, this is 256; for FAT32, this is 128.
The amount of space spanned by a single FAT block (and thus the amount of
storage that can be managed with a single metadata buffer, without needing to
flush it in and out all the time) is the cluster size times this value. So with a
32K cluster (the largest "standard" cluster size) on FAT16, this means 32K*256
or 8MB; with FAT32 it is only 32K*128 or 4MB.
(A cluster size of 64K on FAT16 is possible, but not all systems support it.)
And worse, on FAT32 the default cluster size is typically 8K, so each FAT block
covers only 8K*128 or 1MB. (You can change the cluster size by setting the
appropriate options when you format the card.)
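To make the arithmetic concrete, here is a small C program (illustrative only; the constants come straight from the paragraphs above) that computes how much data one 512-byte FAT sector can manage:

```c
#include <stdio.h>

/* Illustrative arithmetic only: how much file data one 512-byte FAT
   sector can manage for a given FAT entry width and cluster size. */
static void coverage(const char *fs, int entry_bytes, long cluster_bytes)
{
    long entries = 512 / entry_bytes;     /* FAT16: 256, FAT32: 128  */
    long span = entries * cluster_bytes;  /* data spanned per sector */
    printf("%s, %ldK clusters: %ldMB per FAT sector\n",
           fs, cluster_bytes / 1024, span / (1024 * 1024));
}

int main(void)
{
    coverage("FAT16", 2, 32 * 1024L);     /* 256 * 32K = 8MB         */
    coverage("FAT32", 4, 32 * 1024L);     /* 128 * 32K = 4MB         */
    coverage("FAT32", 4,  8 * 1024L);     /* 128 *  8K = 1MB         */
    coverage("FAT16", 2, 64 * 1024L);     /* 256 * 64K = 16MB, if
                                             64K clusters work       */
    return 0;
}
```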
In "normal" computers none of this matters too much; a few dozen FAT buffers
typically suffice for remembering all the metadata needed, but the prop only
has 32K, and each buffer is a good fraction of this total memory.
Which brings up a good question: has anyone tested 64K clusters with the
two different filesystems? It would be nice to know if those are compatible.
If they are, then a 4GB card formatted FAT16 with 64K clusters would be a
sweet spot, with each FAT block covering 16MB of real data.
(The extra space wasted due to fragmentation because of the large
cluster size is probably not an issue for anything realistic being done with
the prop.)
-tom
Does the Prop BOE SD card socket accept a microSD card directly, or does it require an SD-to-microSD adapter?
I do have one of Rayman's boards w/3.5" LCD and SD socket. That socket appears to be larger than what the Prop BOE uses. I'd guess the Prop BOE doesn't need the adapter.
The rub comes when you have a large cluster size and a lot of little files. You effectively waste (32K - filesize) * number of files. This is significant in some circumstances.
A 2GB SD card with 32K clusters can store a maximum of 65536 files. Directories are considered files too. You also have to take into account appending to files, if you have a logging application. You need to read the directory to see where the data stops, then read-modify-write the cluster. I think that a 32K cluster could be a big problem for performance when used with applications that append a small amount of data to a file, then close the file. You effectively have to read and write 32K for every transaction.
In a logging application it would be better to have smaller clusters or to write the application to have buffering that flushes a block periodically. This brings up a whole host of other issues I won't get into.
So, in summary, block size is not only dependent on your card size, but your application too.
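As a rough illustration of both points (the file size and count below are made-up examples):

```c
#include <stdio.h>

/* Back-of-the-envelope numbers for a 2GB FAT16 card with 32K clusters.
   The file size and count are made-up examples; adjust for your app. */
int main(void)
{
    const long long card_bytes    = 2LL * 1024 * 1024 * 1024; /* 2GB */
    const long long cluster_bytes = 32 * 1024;                /* 32K */
    const long long file_bytes    = 200;   /* hypothetical tiny logs */
    const long long n_files       = 10000;

    /* Every file occupies at least one cluster, so: */
    long long max_files = card_bytes / cluster_bytes;         /* 65536 */

    /* Slack: the unused tail of each file's last cluster. */
    long long slack = (cluster_bytes - file_bytes % cluster_bytes) * n_files;

    printf("max files (one cluster each): %lld\n", max_files);
    printf("waste for %lld %lld-byte files: %lldMB\n",
           n_files, file_bytes, slack / (1024 * 1024));
    return 0;
}
```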
Linux has used a 4K block size for a long time, as that seems to be the sweet spot and corresponds with the x86 page size, so there is good reason it was chosen initially. Using FAT32 with a 4K cluster size means a lot less I/O for disk writes when doing an open/append/close operation.
Based on the picture, it accepts a microSD card directly. You would only need an adapter if you were reading it on a PC and the reader doesn't have a microSD socket.
Thank you for that observation. I couldn't really tell from the Prop BOE ad picture.
I suppose though if I wanted to move the micro SD card to a full size SD socket on another board, only then would I need the adapter.
This is called fragmentation, and it is solely a matter of wasted space. With the prop, most of the
things we do with many files do not come close to filling a 2GB card. But if you do plan
to write many thousands of tiny files, then this becomes an issue. (The 512-file limit of the root
directory on FAT16, plus the fact that fsrw does not support subdirectories, will be an issue long
before this. Also, note that if you really do have thousands of files, it will take some amount of
time to open one of them, because you need to scan the entire directory until you find your file.)
So for the Prop, the fragmentation is seldom a concern, and bigger cluster sizes are a win.
This is completely incorrect; you do not need to read/modify/write an entire cluster. FSRW
does not, and I'm sure Kye's implementation does not. Only the blocks that actually change
need to be read and/or written.
You are correct that if you are opening, appending a small amount of data, and closing, all
the time, things will be somewhat slow, but it's not affected by the cluster size. It's strictly
a matter of all the other work that needs to be done: scan the directory for the file and its
size, open the file, calculate the appropriate offset, follow the FAT chain to the end (this could
take a long time if the file is big), then if the file size is not an exact multiple of 512 read in
the old data block; write some data (update the block); write the block out; update the
directory entry; if a new cluster is allocated, also update the FAT.
In this case, if you have a small cluster size, you'll be updating the FAT more frequently, so
a large cluster size actually *helps* you reduce I/O and time.
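To put numbers on that, here is a toy cost model in C (not fsrw's or Kye's actual code, just the block-level bookkeeping described above):

```c
#include <stdio.h>

/* Toy model of the 512-byte block I/O needed to append 'len' bytes to a
   file of 'size' bytes (directory scanning ignored). Illustrative only;
   not fsrw's or Kye's actual code. */
static void append_cost(long long size, long long len, long long cluster)
{
    long long reads = 0, writes = 0;

    if (size % 512 != 0) reads++;             /* re-read partial tail block   */
    writes += (size % 512 + len + 511) / 512; /* data blocks actually touched */

    /* The FAT is only touched when the append spills into a new cluster. */
    long long before = (size + cluster - 1) / cluster;
    long long after  = (size + len + cluster - 1) / cluster;
    if (after > before) { reads++; writes++; } /* RMW one 512-byte FAT sector */

    writes++;                                  /* directory entry update      */

    printf("%2lldK clusters: %lld reads, %lld writes\n",
           cluster / 1024, reads, writes);
}

int main(void)
{
    /* Appending 100 bytes to a 102350-byte file: only the 4K case happens
       to cross a cluster boundary, so only it pays the extra FAT update.
       Neither case reads or writes a whole cluster. */
    append_cost(102350, 100,  4 * 1024);
    append_cost(102350, 100, 32 * 1024);
    return 0;
}
```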
The block size for SD cards is 512 bytes. (Newer cards support other block sizes, but I don't believe
fsrw, Kye's driver, or most other SD applications use block sizes other than 512.)
Don't get the block size and the cluster size confused; they are very different.
Linux's 4K block size is indeed a major nightmare right now. There is a lot of work going on to
support larger block sizes (and larger VM page sizes as well); 4K block sizes make no sense at
all in a world where 2TB disks and 4GB main memory are commonplace. It was chosen back in the
days when a hard disk was 20MB and main memory was 2MB. You would be amazed at how
much real-world performance is lost because of this small page size/block size on Linux, not only
on I/O but also on TLB misses and leaf-level cache misses in the page table. On a machine
with 16GB of memory, the leaf-level page table is 32MB alone because of the 4K page size!
If you doubt me, try it: compare FAT32 with a 4K cluster size, vs FAT16 with 32K clusters, when
doing a lot of open/append/close operations. FAT16 will win easily, especially as the files get
larger and larger.
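If you want to try that comparison from a PC with the card in a reader, a minimal timing loop might look like this (the mount path and iteration count are placeholders, and the OS write cache will soften the numbers compared to what the Prop sees):

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Minimal open/append/close benchmark. Format the card FAT32/4K, run it,
   then reformat FAT16/32K and run it again. The mount path and iteration
   count are placeholders; OS caching makes this only a rough comparison. */
int main(void)
{
    const char *path = "E:/bench.log";  /* wherever the card is mounted */
    const char *line = "one small appended log record\n";
    const int iterations = 1000;

    clock_t start = clock();
    for (int i = 0; i < iterations; i++) {
        FILE *f = fopen(path, "ab");    /* open for append              */
        if (!f) { perror("fopen"); return 1; }
        fwrite(line, 1, strlen(line), f);
        fclose(f);                      /* force the metadata dance     */
    }
    printf("%d append cycles: %.2f s\n", iterations,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}
```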
-tom
Actually that isn't fragmentation, that's just wasted space. Fragmentation is when a file is not stored contiguously on the filesystem because the cluster right behind it is in use and you have to skip around to find an empty cluster. Traditionally, fragmentation is worsened by smaller block sizes, but in the FAT FS it has more to do with an overly simplistic FS than block size. EXT2 does fine with 4K block sizes and has low fragmentation.
If the FAT is updated during write, yes you will incur a penalty, but if the FAT is cached in a 512 byte block, it's much less intensive.
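A one-sector FAT cache along those lines is simple. Here is a self-contained sketch; the "card" is just an in-memory array so it runs anywhere, where a real driver would issue SPI block reads and writes:

```c
#include <stdio.h>
#include <string.h>

/* One-sector FAT cache with a dirty flag, along the lines of the post
   above. The "card" here is an in-memory array so the sketch is
   self-contained; a real driver would issue SPI block commands. */
#define SECTOR 512
static unsigned char card[64 * SECTOR];   /* pretend FAT region */

static unsigned char cache[SECTOR];
static int cached_sector = -1;
static int dirty = 0;

static void fat_flush(void)
{
    if (dirty) {                          /* write back only if modified */
        memcpy(&card[cached_sector * SECTOR], cache, SECTOR);
        dirty = 0;
    }
}

static unsigned char *fat_sector(int n)   /* load sector n, evict old */
{
    if (n != cached_sector) {
        fat_flush();
        memcpy(cache, &card[n * SECTOR], SECTOR);
        cached_sector = n;
    }
    return cache;
}

int main(void)
{
    /* Two updates in the same sector cost one read and one deferred write. */
    fat_sector(3)[10] = 0xAB; dirty = 1;
    fat_sector(3)[11] = 0xCD; dirty = 1;
    fat_flush();
    printf("sector 3 bytes: %02X %02X\n",
           card[3 * SECTOR + 10], card[3 * SECTOR + 11]);
    return 0;
}
```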
4K was chosen as the block size of the EXT FS (aka the cluster size) because it made sense. The main reason for picking 4K was the page size; bigger didn't make sense because you couldn't shuffle any more data at once. You can blame Intel, but don't blame "linux" for any performance woes of 4K. The biggest problem of big FSes isn't block size; that "fix" is more of a cheat. The real problem is the 32-bit pointers for all the filesystem offsets. There are plenty of filesystems that do more than 16TB and have better performance than the EXT series; see XFS.
Also, the application has much to do with filesystems. Specifically, if you have a 20TB filesystem, you want to allocate in larger groups of blocks by having fewer inodes -- the analog of a distributed FAT table. Filesystem performance at very large sizes is often dictated by the number of blocks per inode allocated at FS build. For archival systems 1 inode per MB is common; for news or mail spools, more inodes than normal are allocated to account for lots of little files. This is especially true for logging filesystems, because the more blocks per inode, the more often the log buffer fills up and the more often you have to flush it. XFS is hard hit when "default" values are used to create large filesystems, because the log volume is too small and block allocations are too fine-grained.
Consider the use case scenario of an embedded device. If you are dealing with small files, a small cluster size is more appropriate. I still say 32K is a waste, and it's always been a waste, except for storing lots of big files.
I can see your point about pre-allocating more space, but I have to argue that cluster size should be dictated by the size of the files on the card, not the size of the card.
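That heuristic (size the clusters to the files, not the card) is easy to express in code. This is only the rule of thumb above written as a hypothetical helper, not anything from a real driver:

```c
#include <stdio.h>

/* Hypothetical helper: pick the smallest standard cluster size that keeps
   per-file slack (about half a cluster on average) under ~25% of a typical
   file. Just the rule of thumb above in code, not from any real driver. */
static long pick_cluster(long typical_file_bytes)
{
    long cluster = 512;                  /* smallest FAT cluster     */
    while (cluster < 32 * 1024 &&        /* 32K = largest "standard" */
           cluster / 2 < typical_file_bytes / 4)
        cluster *= 2;
    return cluster;
}

int main(void)
{
    printf("1K log records -> %ld-byte clusters\n", pick_cluster(1024));
    printf("2MB wav files  -> %ld-byte clusters\n",
           pick_cluster(2 * 1024 * 1024));
    return 0;
}
```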
Not to nitpick here... but it's called "internal fragmentation". You are talking about "external fragmentation"... At least, this is what they taught me in college.
True, but in order not to use up all the prop chip's memory, the FAT is read... modified... and then written back out in FSRW and my driver. Thus, fewer FAT accesses translate into more speed.
I'm specifically referring to the exact use case you outlined: opening files and appending to them and closing them.
For the close, the FAT must be updated (if necessary) and the directory entry must be updated. In this case, the
FAT caching doesn't help; every time a new cluster is allocated the FAT table must be updated.