Is there a way to check the amount of space left on an SD card using Tom Rokicki's SD card code? How hard would this be to add? Any ideas on how to go about this?
I'm sending large amounts of data (XYZ coordinates to translate a camera over time) in from a host computer to the SD card on the Propeller. I just need the host to know how many coordinates it can store before the card is full. If you're interested in adding a way to determine space left on the card to the SD driver, that would be great!
Okay. If I write it in Spin, to calculate free space will probably take on the order of a second.
(I need to read in 131KB of FAT table and then individually examine about 65,000 words for
specific values.) It's possible it will take significantly longer than a second; I haven't tried it
yet. If it takes that long, is it still useful for you?
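To make the scan concrete, here is a rough sketch of the free-space calculation described above, written in Python for readability (fsrw itself is in Spin). A FAT16 entry is a 16-bit word and the value 0x0000 marks a free cluster; the function names and parameters here are illustrative, not fsrw's actual API.

```python
import struct

def count_free_clusters(fat_bytes: bytes) -> int:
    """Count free clusters in a raw FAT16 table.

    Each FAT16 entry is a little-endian 16-bit word; 0x0000 marks a
    free cluster. A card with ~65,000 clusters has about 131 KB of
    FAT to read and scan, which is why the calculation can take on
    the order of a second in interpreted Spin.
    """
    free = 0
    # Walk the table two bytes at a time, counting zero entries.
    for (entry,) in struct.iter_unpack('<H', fat_bytes):
        if entry == 0x0000:
            free += 1
    return free

def free_bytes(fat_bytes: bytes, bytes_per_cluster: int) -> int:
    # Free space is simply free clusters times the cluster size.
    return count_free_clusters(fat_bytes) * bytes_per_cluster
```

On the Propeller the same loop would run over FAT sectors read one block at a time rather than over an in-memory buffer, since the whole FAT doesn't fit in hub RAM.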
On seek, sure, anything's possible. (Indeed, there's already a certain amount of seek code
in place for the "append" mode.) I just need to be motivated. Also, everything I add to
fsrw makes fsrw larger, and there still isn't an officially blessed way to do conditional
compilation or dead code elimination. I may add some stuff like this this month.
Yes, I think it would still be very useful. The time it takes to determine the space available isn't a big deal for what I'm doing; I really only need to check it once before dumping data onto the card. It seems it will be a great addition to the already great sdcard object.
thanks again
My pleasure; should be very easy to add and test. I'll do it by this weekend most likely (the free space, not the seek).
On the seek, do you need full seek semantics and r+ and w+ opening, or would just seeking-on-read or seeking-on-write
do? What do you need seek for anyway (motivate me please)? Because seeks need to follow FAT chains and FAT chains
don't fit in memory, it's *possible* that seeks can be quite slow (in particular, backward seeks need to scan from the
front of the file all over again; on a large file and depending on how contiguous the file is this could take quite some time.)
In a degenerate case (random FAT chains), even recoding things in assembly won't speed it up. The worst case is probably
a backwards one-byte seek requiring 65,000 block reads (so 65 seconds, roughly) if the FAT chain is completely random.
In real life the FAT chain will *probably* be contiguous, so things won't typically be this bad, but seeking on large files can
take a while just because of the number of FAT entries that need to be examined in Spin.
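The seek cost described above can be sketched as follows, again in Python as an illustration of the idea rather than fsrw's implementation: reaching a byte offset means walking the file's FAT chain one link at a time from the first cluster, so a backward seek has to restart the walk from the front. All names here are assumed for the sketch.

```python
END_OF_CHAIN = 0xFFF8  # FAT16 entries >= 0xFFF8 terminate a chain

def seek_cluster(fat, first_cluster, target_offset, bytes_per_cluster):
    """Return (cluster holding target_offset, number of FAT lookups).

    `fat` maps a cluster number to the next cluster in the chain.
    Every hop is a FAT lookup, and in the worst (random-chain) case
    each lookup lands in a different FAT sector, costing a block
    read. That is how a backward one-byte seek on a 65,000-cluster
    file can require on the order of 65,000 block reads.
    """
    hops = 0
    cluster = first_cluster
    for _ in range(target_offset // bytes_per_cluster):
        nxt = fat[cluster]          # one FAT lookup per hop
        if nxt >= END_OF_CHAIN:
            raise EOFError('seek past end of file')
        cluster = nxt
        hops += 1
    return cluster, hops
```

If the chain happens to be contiguous, consecutive lookups fall in the same FAT sector and a one-sector cache absorbs most of the block reads, which is why real-world seeks are usually far cheaper than the worst case.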
Comments
Probably not quite what you're looking for, but it might start by pointing you in a direction.
I'm curious what you need it for. I'd be happy to write it and add it in.
-tom
thanks for the help
Owen