What SD card size (GB) to buy for Prop BOE?

HShanko Posts: 402
edited 2012-03-01 05:06 in Propeller 1
Seems that 2 GB has been the smallest size available recently. Is there any reason to get a larger size for the Prop BOE that's soon to arrive?

Do the SD card drivers allow any size, or are there limits?

Thanks for any guidance.

Comments

  • Duane Degn Posts: 10,588
    edited 2012-02-25 16:10
    Kye's recent drivers allow cards of pretty much any size. There is some limit, but it's crazy high (a terabyte?).

    I've used 1, 2 and 8 GB cards a lot with the Prop (I've probably used a dozen different cards with it). I've had very few (only one) that wouldn't work; the one that wouldn't was a 256KB SD card. I have had to reformat (slow format) a couple of cards to get them to work. Kye's driver is picky about any errors on the card.

    I've been really pleased how easy it is to use Kye's SD driver.

    Edit: I can't see how you'd ever use more than 1GB with a Prop unless you have a bunch of graphics files (like movies that can play on the Prop). I suppose WAV files could take up some room, but I'd still think 1GB would be plenty for any audio files you might want.
  • Cluso99 Posts: 18,069
    edited 2012-02-25 16:34
    I am using 1GB & 2GB cards. Generally, comments on the various cards come to the conclusion that SanDisk cards seem to have the fewest problems with the various drivers used on the Prop.
  • Kye Posts: 2,200
    edited 2012-02-25 17:33
    256KB SD card??? Whoa! O_o.
  • Duane Degn Posts: 10,588
    edited 2012-02-25 21:32
    Kye wrote: »
    256KB SD card??? Whoa! O_o.

    What's a few orders of magnitude with SD cards? It was a 256MB card.
  • Toby Seckshund Posts: 2,027
    edited 2012-02-26 01:46
    Of all the various SD cards that I have lying around, the only ones that I have never had any problems with were 2GB SanDisk ones.

    Above the 2GB barrier you enter SDHC territory, which I think Kye's drivers are OK with, but I don't know whether all the others are. I found that buying generic cards and using them in cameras etc. was fine, so I could hoard the SanDisk ones for Prop stuff (mostly DracBlades).

    (I do have an 8MB SD card here)
  • Kye Posts: 2,200
    edited 2012-02-26 07:57
    My driver works better with newer SD cards. Older ones have a larger variance in how they work: they follow the SD 1.0 protocol, which allowed for a lot of complications. Newer cards follow the SD 2.0 and SD 3.01 protocols, which imposed a lot of uniformity.
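
    To give a feel for where that split shows up: during SPI-mode init, only SD 2.0+ cards answer CMD8. A minimal C sketch of the check (sd_command() and sd_read_bytes() are hypothetical low-level SPI helpers, not code from my driver):

    #include <stdint.h>

    uint8_t sd_command(uint8_t cmd, uint32_t arg);  /* sends a command, returns R1   */
    void    sd_read_bytes(uint8_t *buf, int n);     /* reads trailing response bytes */

    /* Returns 1 for SD 1.x, 2 for SD 2.0+, -1 on a bad R7 echo. */
    int sd_detect_version(void)
    {
        uint8_t r7[4];

        sd_command(0, 0);                     /* CMD0: go to idle state */

        /* CMD8 (SEND_IF_COND) exists only in SD 2.0+; the argument
           0x1AA means "2.7-3.6V, check pattern 0xAA".               */
        uint8_t r1 = sd_command(8, 0x000001AA);

        if (r1 & 0x04)                        /* illegal-command bit set */
            return 1;                         /* SD 1.x (or MMC)         */

        sd_read_bytes(r7, 4);                 /* rest of the R7 response */
        if ((r7[2] & 0x0F) == 0x01 && r7[3] == 0xAA)
            return 2;                         /* SD 2.0+                 */
        return -1;                            /* voltage/echo mismatch   */
    }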
  • HShanko Posts: 402
    edited 2012-02-26 08:06
    Thanks for the responses. Good guidance.

    Thank you, everyone.
  • Ken Gracey Posts: 7,400
    edited 2012-02-26 13:04
    @Harley - coming in a bit late to be useful but I thought I'd add this for future readers.

    Parallax stocks #32317, a Samsung OEM 1 GB microSD card, and we have 236 units in stock. I'm not sure why it's not listed on our web site, but I'll find out. Of interest: David Carrier chose the size and source, and he also participated in the Propeller BOE design, so I'm sure there's some reasoning behind it.

    I think they'll sell at $5-10, but I'm not sure. I'll report back when I have more information.
  • HShanko Posts: 402
    edited 2012-02-27 11:37
    @ Ken,

    I've tried searching for '32317' and 'SD Card' and only get a 'No product found' message. Thanks for the reference; hope the link can be fixed.
  • Ken Gracey Posts: 7,400
    edited 2012-02-27 12:01
    @Harley, I figured out what's going on internally.

    These cards are not properly formatted for the Propeller BOE. They'll work with Spinneret, however.

    If there are any SD card experts out there willing to help I'd be glad to put you in contact with Jessica and David for more information.

    We're looking into getting a pre-formatted card for the Propeller BOE next.

    Ken Gracey
  • HShanko Posts: 402
    edited 2012-02-27 12:16
    @ Ken,

    I was wondering what the best size microSD card would be. OK, I'll just wait for the pre-formatted card.
  • pedward Posts: 1,642
    edited 2012-02-27 12:35
    The size that fits your budget. 4GB cards are available for $10 on sale most places.
  • vanmunch Posts: 568
    edited 2012-02-27 14:06
    I had thought that they had to be 2GB or less, so this is good to know, especially because it's hard to find a 2GB-or-smaller card out there. I can't wait to try one with Kye's WAV player... :)
  • vanmunch Posts: 568
    edited 2012-02-27 14:09
    BTW, you can get 2GB microSD cards with adapters for $5 on Amazon, but I can't find a place to get bulk orders. Any ideas for 100-1,000 micro or regular SD cards?
  • pedward Posts: 1,642
    edited 2012-02-27 15:53
    vanmunch wrote: »
    I had thought that they had to be 2GB or less, so this is good to know, especially because it's hard to find a 2GB-or-smaller card out there. I can't wait to try one with Kye's WAV player... :)

    I used a 4GB card to benchmark and it worked fine. Ironically, a 64MB MMC card didn't work.
  • Kye Posts: 2,200
    edited 2012-02-27 17:48
    Sorry, I didn't have access to a lot of older SD and MMC cards when writing my driver. You should have perfect results with all SD 2.0 & SD 3.01 cards, i.e. newer ones.
  • pedward Posts: 1,642
    edited 2012-02-27 20:38
    Kwabena, don't sweat it. I bought this card back in 2001 to do some MCU interfacing. It's 64MB and slow; more of a curiosity than anything else.

    The joke about SD is that it replaced MMC because of the S (Secure) in SD, yet I haven't seen any consumer application take advantage of that. Every application I've seen uses SD cards just like the MMC spec.
  • rokicki Posts: 1,000
    edited 2012-02-28 09:59
    The best size SD card for the Prop is a 2GB card formatted with a 32K cluster size and FAT16. This
    is a sweet spot, reducing metadata reads/writes.

    If you know you don't need much space or many files, smaller cards also work well, but in general a
    2GB card will be close to optimal (unless you need more space than this).
  • rokicki Posts: 1,000
    edited 2012-02-28 10:35
    I'll even take this one step further. If you don't need more than 2GB, buying a 4GB card (or larger) and *formatting*
    it as FAT16/32K cluster/2GB will make it faster than using it as a native FAT32 card.
  • HShanko Posts: 402
    edited 2012-02-28 11:00
    @ rokicki.

    Thank you for that 2/4 GB information above.
  • rokicki Posts: 1,000
    edited 2012-02-28 13:33
    Just for completeness, let me explain why this is so.

    When reading or writing data, block management needs to be done---keeping track of
    which clusters are free, where the next cluster is, and so on and so forth.

    FAT12, FAT16, and FAT32 use a cluster pointer arrangement, where every cluster
    has a single integer in the FAT. In FAT12, the integer is 12 bits; in FAT16, it is
    16 bits; in FAT32, it is 32 bits.

    The number of FAT entries per 512-byte block is 512 divided by the entry size in
    bytes: for FAT16, this is 512/2 = 256; for FAT32, it is 512/4 = 128.

    The amount of space spanned by a single FAT block (and thus the amount of
    storage that can be managed with a single metadata buffer, without needing to
    flush it in and out all the time) is the cluster size times this value. So with a
    32K cluster (the largest "standard" cluster size) on FAT16, this means 32K*256
    or 8MB; with FAT32 it is only 32K*128 or 4MB.

    (A cluster size of 64K on FAT16 is possible but not all systems support this.)

    And worse, on FAT32 the default cluster size is typically 8K, so each FAT block
    covers only 8K*128 or 1MB. (You can change the cluster size by setting the
    appropriate options when you format the card.)
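
    A few lines of C make the arithmetic concrete (this is just the math above, not driver code):

    #include <stdio.h>

    /* Bytes of file data covered by one 512-byte FAT sector:
       (entries per sector) * (cluster size).                 */
    static long fat_sector_coverage(int entry_bytes, long cluster)
    {
        return (512 / entry_bytes) * cluster;
    }

    int main(void)
    {
        printf("FAT16, 32K clusters: %ldMB per FAT sector\n",
               fat_sector_coverage(2, 32 * 1024L) >> 20);  /* 8MB */
        printf("FAT32, 32K clusters: %ldMB per FAT sector\n",
               fat_sector_coverage(4, 32 * 1024L) >> 20);  /* 4MB */
        printf("FAT32,  8K clusters: %ldMB per FAT sector\n",
               fat_sector_coverage(4, 8 * 1024L) >> 20);   /* 1MB */
        return 0;
    }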

    In "normal" computers none of this matters too much; a few dozen FAT buffers
    typically suffice for remembering all the metadata needed, but the prop only
    has 32K, and each buffer is a good fraction of this total memory.

    Which brings up a good question: has anyone tested 64K clusters with the
    two different filesystems? It would be nice to know if those are compatible.
    If they are, then a 4GB card formatted FAT16 with 64K clusters would be a
    sweet spot, with each FAT block covering 16MB of real data.

    (The extra space wasted due to fragmentation because of the large
    cluster size is probably not an issue for anything realistic being done with
    the prop.)

    -tom
  • HShanko Posts: 402
    edited 2012-02-28 15:34
    Another question from an SD card noob.

    Does the Prop BOE SD card socket accept a microSD card directly, or does it require a microSD-to-SD adapter?

    I do have one of Rayman's boards w/3.5" LCD and SD socket. Its socket appears to be larger than what the Prop BOE uses, so I'd guess the Prop BOE doesn't need an adapter.
  • pedward Posts: 1,642
    edited 2012-02-28 16:17
    Something to point out: each file requires at least 1 cluster. Cluster size (or block size, in traditional filesystem parlance) should be dictated by the most common file size. If you store mostly files < 8K, an 8K cluster is appropriate. If you store a lot of big files, 32K is appropriate.

    The rub comes when you have a large cluster size and a lot of little files. You effectively waste (32K - file size) × number of files. This is significant in some circumstances.
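
    To put numbers on it, here's the arithmetic in C (assuming, for illustration, files small enough to fit in a single cluster):

    #include <stdio.h>

    int main(void)
    {
        long cluster = 32 * 1024L;  /* 32K clusters  */
        long filesz  = 2 * 1024L;   /* say, 2K files */
        long nfiles  = 1000;

        /* Each small file still occupies one whole cluster. */
        long wasted = (cluster - filesz) * nfiles;
        printf("wasted: %ldKB (~%ldMB)\n", wasted >> 10, wasted >> 20);

        /* And the hard ceiling: one file per cluster at most. */
        long long max_files = (2048LL * 1024 * 1024) / cluster;
        printf("max files on a 2GB/32K card: %lld\n", max_files);  /* 65536 */
        return 0;
    }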

    A 2GB SD card with 32K clusters can store a maximum of 65,536 files. Directories count as files too. You also have to take appending into account if you have a logging application: you need to read the directory to see where the data stops, then read-modify-write the cluster. I think a 32K cluster could be a big problem for performance in applications that append a small amount of data to a file and then close it; you effectively have to read and write 32K for every transaction.

    In a logging application it would be better to have smaller clusters, or to write the application with buffering that flushes a block periodically. This brings up a whole host of other issues I won't get into.

    So, in summary, block size is not only dependent on your card size but on your application too.

    Linux has used a 4K block size for a long time, as that seems to be the sweet spot and corresponds with the x86 page size, so there was good reason it was chosen initially. Using FAT32 with a 4K page size means a lot less I/O for disk writes when doing an open/append/close operation.
  • pedward Posts: 1,642
    edited 2012-02-28 16:19
    HShanko wrote: »
    Another question from an SD card noob.

    Does the Prop BOE SD card socket accept a microSD card directly, or does it require a microSD-to-SD adapter?

    I do have one of Rayman's boards w/3.5" LCD and SD socket. Its socket appears to be larger than what the Prop BOE uses, so I'd guess the Prop BOE doesn't need an adapter.

    Based on the picture, it accepts a microSD card directly. You would only need an adapter if you were reading it on a PC whose reader doesn't have a microSD socket.
  • HShanko Posts: 402
    edited 2012-02-28 16:35
    @ pedward,

    Thank you for that observation. I couldn't really tell from the Prop BOE ad picture.

    I suppose, though, if I wanted to move the microSD card to a full-size SD socket on another board, only then would I need the adapter.
  • pedward Posts: 1,642
    edited 2012-02-28 16:44
    Another use for the cheap adapters is as cheap sockets. It's not always easy to get an SD socket, since they're surface-mount, but you can take an adapter, solder pins to it, and make a PTH (through-hole) microSD socket out of it.
  • rokicki Posts: 1,000
    edited 2012-02-28 17:24
    pedward wrote: »
    Something to point out: each file requires at least 1 cluster. Cluster size (or block size, in traditional filesystem parlance) should be dictated by the most common file size. If you store mostly files < 8K, an 8K cluster is appropriate. If you store a lot of big files, 32K is appropriate.

    This is called fragmentation, and it is solely a matter of wasted space. With the Prop, most of the
    things we do, even when we have many files, do not come close to filling a 2GB card. But if you do
    plan to write many thousands of tiny files, then this becomes an issue. (The 512-file limit of the
    root directory on FAT16, plus the fact that fsrw does not support subdirectories, will be an issue
    long before this. Also, note that if you really do have thousands of files, it will take some amount
    of time to open one of them, because you need to scan the entire directory until you find your file.)

    So for the Prop, the fragmentation is seldom a concern, and bigger block sizes are a win.
    pedward wrote: »
    You also have to take appending into account if you have a logging application: you need to read the directory to see where the data stops, then read-modify-write the cluster. I think a 32K cluster could be a big problem for performance in applications that append a small amount of data to a file and then close it; you effectively have to read and write 32K for every transaction.

    In a logging application it would be better to have smaller clusters or to write the application to have buffering that flushes a block periodically. This brings up a whole host of other issues I won't get into.

    This is completely incorrect; you do not need to read/modify/write an entire cluster. FSRW
    does not, and I'm sure Kye's implementation does not. Only the blocks that actually change
    need to be read and/or written.

    You are correct that if you are opening, appending a small amount of data, and closing, all
    the time, things will be somewhat slow, but it's not affected by the cluster size. It's strictly
    a matter of all the other work that needs to be done: scan the directory for the file and its
    size, open the file, calculate the appropriate offset, follow the FAT chain to the end (this could
    take a long time if the file is big), then, if the file size is not an exact multiple of 512, read in
    the old data block; write some data (update the block); write the block out; update the
    directory entry; and, if a new cluster is allocated, also update the FAT.
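
    In rough code form, the append path looks something like this (every helper name here is a hypothetical stand-in, not an actual FSRW routine):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical FSRW-style internals. */
    typedef struct { uint32_t first_cluster, size; } dirent_t;
    void     find_dir_entry(const char *name, dirent_t *de);
    uint32_t follow_fat_chain(uint32_t first, uint32_t size);
    void     read_block(uint32_t cluster, uint32_t off, uint8_t *buf);
    void     write_block(uint32_t cluster, uint32_t off, const uint8_t *buf);
    void     update_dir_entry(const dirent_t *de);

    /* Roughly the per-call cost of one open/append/close cycle. Note that
       only single 512-byte blocks move, never a whole cluster.           */
    void append_record(const char *name, const uint8_t *data, uint32_t len)
    {
        dirent_t de;
        uint8_t  block[512];

        find_dir_entry(name, &de);            /* directory scan            */
        uint32_t end = follow_fat_chain(de.first_cluster, de.size);
        uint32_t off = de.size % 512;

        if (off != 0)                         /* partial last block:       */
            read_block(end, de.size, block);  /* read-modify-write it      */
        memcpy(block + off, data, len);       /* assume len fits the block */
        write_block(end, de.size, block);

        de.size += len;
        update_dir_entry(&de);                /* one more block write      */
        /* The FAT itself is touched only when the write crosses into a
           new cluster -- so larger clusters mean fewer FAT updates.       */
    }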

    In this case, if you have a small cluster size, you'll be updating the FAT more frequently, so
    a large cluster size actually *helps* you reduce I/O and time.
    pedward wrote: »
    So, in summary, block size is not only dependent on your card size but on your application too.

    Linux has used a 4K block size for a long time, as that seems to be the sweet spot and corresponds with the x86 page size, so there was good reason it was chosen initially. Using FAT32 with a 4K page size means a lot less I/O for disk writes when doing an open/append/close operation.

    The block size for SD cards is 512 bytes. (Newer cards support other block sizes, but I don't believe
    either fsrw or Kye's driver, nor most other SD applications, uses block sizes other than 512.)
    Don't get the block size and the cluster size confused; they are very different.

    Linux's 4K block size is indeed a major nightmare right now. There is a lot of work going on to
    support larger block sizes (and larger VM page sizes as well); 4K block sizes make no sense at
    all in a world with 2TB disks and 4GB main memory commonplace. It was chosen back in the
    days when a hard disk was 20MB and main memory was 2MB. You would be amazed at how
    much real-world performance is lost because of this small page size/block size on Linux, not only
    on I/O but also on TLB misses and leaf-level cache misses in the page table. On a machine
    with 16GB of memory, the leaf-level page table is 32MB alone because of the 4K page size!

    If you doubt me, try it: compare FAT32 with a 4K cluster size, vs FAT16 with 32K clusters, when
    doing a lot of open/append/close operations. FAT16 will win easily, especially as the files get
    larger and larger.
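
    A crude harness for that comparison, in standard C on a PC with the card mounted (the path is a placeholder; format the card both ways and run it twice):

    #include <stdio.h>
    #include <time.h>

    #define PATH "E:/bench.log"  /* placeholder: any file on the card */

    int main(void)
    {
        char buf[64] = {0};
        time_t t0 = time(NULL);  /* wall-clock; coarse but honest */

        for (int i = 0; i < 10000; i++) {
            FILE *f = fopen(PATH, "ab");  /* open for append */
            if (!f) return 1;
            fwrite(buf, 1, sizeof buf, f);
            fclose(f);                    /* flush and close */
        }
        printf("%ld seconds for 10000 append cycles\n",
               (long)(time(NULL) - t0));
        return 0;
    }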

    -tom
  • pedward Posts: 1,642
    edited 2012-02-28 18:55
    rokicki wrote: »
    This is called fragmentation, and it is solely a matter of wasted space

    Actually, that isn't fragmentation; that's just wasted space. Fragmentation is when a file is not stored contiguously on the filesystem because the cluster right behind it is in use and you have to skip around to find an empty cluster. Traditionally, fragmentation is worsened by smaller block sizes, but in the FAT FS it has more to do with an overly simplistic FS than with block size. EXT2 does fine with 4K block sizes and has low fragmentation.
    In this case, if you have a small cluster size, you'll be updating the FAT more frequently, so
    a large cluster size actually *helps* you reduce I/O and time.

    If the FAT is updated during the write, yes, you will incur a penalty, but if the FAT is cached in a 512-byte block, it's much less intensive.
    Linux's 4K block size is indeed right now a major nightmare. There is a lot of work going on to
    support larger block sizes (and larger VM page sizes as well); 4K block sizes make no sense at
    all in a world with 2TB disks and 4GB main memory commonplace. It was chosen back in the
    days when a hard disk was 20MB and main memory was 2MB. You would be amazed at how
    much real-world performance is lost because of this small page size/block size on Linux, not only
    on I/O but also on TLB misses and leaf-level cache misses in the page table. On a machine
    with 16GB of memory, the leaf-level page table is 32MB alone because of the 4K page size!

    4K was chosen as the block size of the EXT FS (aka cluster) because it made sense. The main reason for picking 4K was the page size; bigger didn't make sense because you couldn't shuffle any more data at once. You can blame Intel; don't blame "Linux" for any performance woes of 4K. The biggest problem of big FSes isn't block size; that "fix" is more of a cheat. The real problem is the 32-bit pointers for all the filesystem offsets. There are plenty of filesystems that do more than 16TB and have better performance than the EXT series; see XFS.

    Also, the application has much to do with filesystems. Specifically, if you have a 20TB filesystem, you want to allocate in larger groups of blocks by having fewer inodes -- the analog of a distributed FAT table. Filesystem performance at very large sizes is often dictated by the number of blocks per inode allocated at FS build time. For archival systems, 1 inode per MB is common; for news or mail spools, more inodes than normal are allocated to account for lots of little files. This is especially true for logging filesystems, because the more blocks per inode, the more often the log buffer fills up and has to be flushed. XFS is hard hit when "default" values are used to create large filesystems, because the log volume is too small and block allocations are too fine-grained.
    If you doubt me, try it: compare FAT32 with a 4K cluster size, vs FAT16 with 32K clusters, when
    doing a lot of open/append/close operations. FAT16 will win easily, especially as the files get
    larger and larger.

    -tom

    Consider the use case scenario of an embedded device. If you are dealing with small files, a small cluster size is more appropriate. I still say 32K is a waste, and it's always been a waste, except for storing lots of big files.

    I can see your point about pre-allocating more space, but I have to argue that cluster size should be dictated by the size of the files on the card, not the size of the card.
  • Kye Posts: 2,200
    edited 2012-02-28 20:57
    Actually, that isn't fragmentation; that's just wasted space. Fragmentation is when a file is not stored contiguously on the filesystem because the cluster right behind it is in use and you have to skip around to find an empty cluster. Traditionally, fragmentation is worsened by smaller block sizes, but in the FAT FS it has more to do with an overly simplistic FS than with block size. EXT2 does fine with 4K block sizes and has low fragmentation.

    Not to nitpick here... but it's called "internal fragmentation". You are talking about "external fragmentation"... At least, this is what they taught me in college.
    If the FAT is updated during the write, yes, you will incur a penalty, but if the FAT is cached in a 512-byte block, it's much less intensive.

    True, but in order not to use up all the Prop chip's memory, the FAT is read... modified... and then written back out in FSRW and my driver. Thus, fewer FAT accesses translate into more speed.
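
    The pattern is a single cached FAT sector, something like the sketch below (hypothetical names; FSRW and my driver differ in the details):

    #include <stdint.h>

    static uint8_t  fat_buf[512];             /* the one cached FAT sector */
    static uint32_t fat_sector = 0xFFFFFFFF;  /* nothing cached yet        */
    static int      fat_dirty  = 0;

    void sd_read_sector(uint32_t n, uint8_t *buf);
    void sd_write_sector(uint32_t n, const uint8_t *buf);

    /* Fetch a FAT16 entry, swapping the cached sector only when the
       requested entry lives in a different one.                     */
    uint16_t fat16_entry(uint32_t fat_start, uint32_t cluster)
    {
        uint32_t sector = fat_start + (cluster * 2) / 512;
        uint32_t offset = (cluster * 2) % 512;

        if (sector != fat_sector) {
            if (fat_dirty)                    /* flush modified sector */
                sd_write_sector(fat_sector, fat_buf);
            sd_read_sector(sector, fat_buf);
            fat_sector = sector;
            fat_dirty  = 0;
        }
        return (uint16_t)(fat_buf[offset] | (fat_buf[offset + 1] << 8));
    }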
  • rokicki Posts: 1,000
    edited 2012-02-29 10:14
    pedward wrote: »
    If the FAT is updated during the write, yes, you will incur a penalty, but if the FAT is cached in a 512-byte block, it's much less intensive.

    I'm specifically referring to the exact use case you outlined: opening files, appending to them, and
    closing them. For the close, the FAT must be updated (if necessary) and the directory entry must be
    updated. In this case, the FAT caching doesn't help; every time a new cluster is allocated, the FAT
    must be updated.
    pedward wrote: »
    4K was chosen as the block size of the EXT FS ...

    This is getting far afield; if you want to continue this, we can do it by email.
    pedward wrote: »
    Consider the use case scenario of an embedded device. If you are dealing with small files, a small cluster size is more appropriate. I still say 32K is a waste, and it's always been a waste, except for storing lots of big files.

    Absolutely not. The only case where a smaller cluster size helps is if the amount of space wasted by
    internal fragmentation is too much for your application. Now, let's focus on a 2GB card, which is probably
    the smallest being made today. How many applications on the prop are actually going to come close to
    using 2GB? Probably not very many. A 2GB card at 32K cluster size gives you up to 65K files.
    I don't think there are any applications on the propeller that come close to needing that many files.
    A more typical usage is a few dozen or hundred files, in which case the maximum wasted space due
    to internal fragmentation is (32K * #files). In order for this to get to be 50% of the total space, you'd
    need to have 32,000 files---an incredibly large number of files. Just creating this many files on the
    prop will take quite some time (unless you structure the directory substructure carefully, this is an
    O(n^2) operation, due to the necessary directory scanning).

    Is the Prop's 32K a waste if the Prop is used in an application that only needs 8K? Just because a
    resource goes unused doesn't mean it is wasted. What if I only need five cores; are the other three
    cores wasted? No, they are just unused.

    Indeed, I would argue that if you are doing something that actually uses a significant fraction of a
    card, you want to be very careful, because FAT operations themselves can become quite slow,
    especially the operation of scanning for a free block. In this case it is the size of the FAT that
    affects performance, and this is inversely related to your cluster size. For a 2GB card in FAT16
    at 32K clusters, the entire FAT is 131KB, so a scan for a free cluster might have to read that
    much data. A 2GB card in FAT32 using 4K clusters has a FAT of 2MB, so a scan for a free cluster
    might have to scan that much---sixteen times as much FAT, just to find a free cluster. This is
    why I say, for embedded systems such as the prop, getting as much raw data per FAT block
    is important; the 2GB/FAT16/32K case gives you 8MB/FAT block, the 2GB/FAT32/4K only
    gives you 512KB/FAT block. If you indeed are using a large fraction of the card, *and* have a
    lot of files (the only cases where internal fragmentation even matters at all), your directory and
    FAT operations are going to be quite slow.
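
    Spelled out in C (plain arithmetic only):

    #include <stdio.h>

    int main(void)
    {
        long long card = 2048LL * 1024 * 1024;  /* 2GB card */

        /* FAT16 at 32K clusters */
        long long c16 = card / (32 * 1024);
        printf("FAT16/32K: FAT = %lldKB total, %dKB data per FAT sector\n",
               c16 * 2 / 1024,   /* 128KiB (~131 decimal KB) */
               (512 / 2) * 32);  /* 8192KB = 8MB             */

        /* FAT32 at 4K clusters */
        long long c32 = card / (4 * 1024);
        printf("FAT32/4K:  FAT = %lldKB total, %dKB data per FAT sector\n",
               c32 * 4 / 1024,   /* 2048KB = 2MB */
               (512 / 4) * 4);   /* 512KB        */
        return 0;
    }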

    Engineering decisions such as cluster size are quite different in the embedded world than they
    are in the computer systems world due to the significantly different resource constraints.