lonesock & rokicki,
Currently I use the fsrwFemto & sdspiFemto to access the microSD card on my TriBlade board.
To get the sector number of the beginning of the file(s) I do the following...
r := sd.start(@ioControl) 'passes the control block address
r := sd.mount(DO,Clk,DI,CS)
r := sd.popen(string("filename.ext"), "r") 'locate the filename
r := sd.pread(@buff,1) 'read the first character of the file
sdsector := ioControl[0] - 1 'get the sector no of the first data block on the SD card
r := sd.stopSDcard 'tri-state prop pins and DO on the SD card
This leaves the pins used for accessing the card in a tri-state condition on the prop and DO on the SD card.
Now, to access a block (maybe 128 bytes or 512 bytes from the beginning of the sector number) I use..
r := sd.initSDcard(DO,Clk,DI,CS)
r := sd.readSDcard(sdsector,@buff,count) 'read or | count=128 or 512 or anything as long as @buff was big enough??
r := sd.writeSDcard(sdsector,@buff,count) 'write
r := sd.stopSDcard 'tri-state prop pins and DO on the SD card
I have looked at fsrw and mb_spi.
In fsrw...
After I do a pread will datablock return the sector address on the SD card for the next sector??
In mb_spi...
start_explicit(DO,Clk,DI,CS) will replace my sd.initSDcard(DO,Clk,DI,CS)
readblock(block_index,buffer_address) and writeblock(..) will replace my readSDcard(sdsector,@buff,count) and writeSDcard(..), and will do 512 byte read & writes.
release will replace stopSDcard to force DO to be tri-stated by the SD card. However, I will also need to tri-state the prop bus (DI, CLK, CS).
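Just to check my understanding, here is a minimal sketch of how I picture the mb_spi sequence replacing my current calls (method names and signatures are taken from the descriptions above; the explicit tri-stating of the Prop-driven pins at the end is my own addition, not part of mb_spi):
OBJ
  sd : "mb_spi"                      ' object name assumed from this post
PUB rawBlockDemo(DO, Clk, DI, CS, sdsector) | buff[128]
  sd.start_explicit(DO, Clk, DI, CS) ' replaces initSDcard
  sd.readblock(sdsector, @buff)      ' always reads 512 bytes into buff
  sd.writeblock(sdsector, @buff)     ' always writes 512 bytes from buff
  sd.release                         ' the card tri-states its DO line
  dira[DI] := 0                      ' tri-state the Prop-driven lines ourselves
  dira[Clk] := 0
  dira[CS] := 0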
I'm working with the LM9033A LCD Screens from Brilldea and am using the FSRW driver to pull pre-rendered screens from the SD drive. My project will have an interface that will need to pull several different screens to make a UI.
I have this working now as separate files, but I'm worried about running up against the 1000 file limit in a FAT32 in the root folder.
As a test, I have been able to store 100 screens into one file and bring them in by reading and ignoring the bytes before the screen I want.
PRI LoadScreen(num) : i
  sd.popen(String("All"), "r")
  repeat i from 1 to num
    sd.pread(@VideoMemory1, _screenSize*4)
  sd.pclose
  LCD1.screenUpdate(1, @VideoMemory1)
The lower screens seem to load very quickly, but as I advance past maybe 50 or so screens the time it takes to read to the screen I want starts to slow things down a tad. My screens are 3072 bytes each, so by the 100th screen I'm like 300k into the file.
I looked a bit at the popen sub under the 'a' mode and it seems to do the same as a seek. I'm just not sure what parts I would need to pull to implement my own seek command.
For now, even a forward only seek would be very helpful, as I only want to open the file and get to the data I want then close.
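As a stopgap while there is no real seek, here is a sketch of a skip-by-reading helper, assuming only pread is available (scratch is just a throwaway 512-byte buffer):
PRI SkipBytes(n) | chunk, scratch[128]
  repeat while n > 0
    chunk := n <# 512                ' read at most 512 bytes per pass
    sd.pread(@scratch, chunk)        ' read and discard
    n -= chunk
LoadScreen could then call SkipBytes((num - 1) * _screenSize * 4) once and do a single pread for the screen it actually wants.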
Seek shouldn't be too hard; I just haven't gotten a round tuit yet. I had a good week or so to hack on this stuff about a month
or so ago and that was pretty productive; I'll try to find another period of time (perhaps late next week or maybe Monday).
In any case, seek shouldn't be too hard; the real difficulty is in the testing. And it is the next thing on the list for me.
Thanks for the kick. Squeaky wheel and all that.
If you want to try to hack it together yourself, that would be cool; all the code is up at sourceforge and you can easily check it
out and make changes; I'd be happy to add you as a committer. Start with the in-block seek code I posted on the PropEdit
thread (let me know if you can't find it), and simply add the necessary code to follow the cluster chain from the start if the
new seek position is not within the cluster. To do this you'll need to add another variable, the first cluster number, to the
state for a file. I'm always available (this username at gmail) for private discussion.
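Roughly, the chain-following part would look something like this (a sketch only, not actual fsrw code; readfat, cluster_size, first_cluster and current_cluster are placeholder names for whatever the real implementation uses):
PRI seek(pos) | c, n
  c := first_cluster                 ' the new per-file state variable
  n := pos / cluster_size            ' whole clusters to skip from the start of the file
  repeat n
    c := readfat(c)                  ' follow the FAT chain one link at a time
  current_cluster := c
  ' then apply the in-block seek for the remaining pos // cluster_size bytes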
This looks great. I need to play wav files from an SD card. I'll check it out.
What type/source are you using for your SD Card sockets? I need to get one and Parallax does not sell them.
Shane
rokicki said...
[fsrw22 replaced by fsrw23; readahead and writebehind gives better speed;
multiple file support.]
Another test release of fsrw23; this one supports multiple files (see serial_terminal.spin
for an example), and it's substantially faster when using pread/pwrite.
On my 100MHz demo board, I now get 1.3MB/sec writes, 1.6MB/sec reads when using
pread/pwrite; the raw reads and writes are above 2MB/sec.
We are approaching a blessed release. Give this one a spin and see if you can break it.
I use a microSD socket from Digikey, WM17115-ND. It is an SMT version that can easily be soldered, with a fine tip, good eyes and a steady hand, onto a PCB specifically designed for it.
If you just want to attach one to an existing PCB that has 0.1" spaced pads, I suggest you just buy an SD to microSD converter and glue/solder the SD adapter. There have been articles published on how to do this, including photos, on this forum somewhere. Try searching for SD articles in the Google Search link in my signature below (it searches this forum). Usually the adapters come with both a microSD-to-SD and a microSD-to-miniSD adapter (about $5), so you can wire up 2 sockets, although the miniSD adapter has finer pins.
Shane: If what you really want is an SD breakout board, check out: http://ucontroller.com
They have one for $12.99. You could also look at Sparkfun.com...
Any SD card experts know about SD card wear levelling ?
Having googled around a bit about this I can't really get an idea of how it works apart from vague descriptions of how heavily used blocks can get swapped for unused ones so as to "spread the load" as it were.
But what constitutes an unused block ?
If I use a tool like "dd" under Unix/Linux or WinHex for Windows I can write to every block available to me without any regard for any file system, just raw blocks. Or I can do that from the Propeller, say.
So now it looks like all the blocks are used.
So how can wear levelling find any unused blocks to swap around?
Is there a pool of blocks in reserve somewhere?
How could I possibly tell the card that I don't care about those blocks any more so they can be free for future wear levelling?
What goes on?
The actual method used for wear-leveling is left up to the individual manufacturer. If you write to block 0 four times in succession, there is no guarantee that the same flash memory is being overwritten; in fact it is likely not. If you write to every block on the card, then wear-leveling can't help you.
Jonathan
edit: you know that the erase blocks are significantly larger than the 512-byte blocks we write & read. The card really only needs to keep track of how many times an erase-block has been erased to perform wear leveling.
If I write to every block once, then there are no blocks free for wear levelling.
But now I want to delete all that and make all the blocks free again.
How do I do that?
Looks to me like if the file system deletes a file, the SD card doesn't know anything about the fact that the blocks where the file was are now free for wear levelling again.
What I'm getting at is this:
If my application constantly updates the same block (or erase block if you like) I could ideally expect its lifetime to be the number of erase cycles a block can handle multiplied by the number of unused blocks on the card that are available for wear level swapping.
BUT if I have ever written to all the blocks on the card, just once, then the lifetime of my application is only the number of erase cycles a block can handle, because there is nowhere for wear levelling to swap with. Many thousands of times less!
Unless I can tell the card to free up those once written blocks which I don't need any more.
Ah, I see what you are getting at. The SD (and MMC) specs have an "erase block" concept. Specifically, I can use CMD32 and CMD33 to set the range of blocks to erase, and CMD38 to perform the actual erasure. I don't know how to trigger this in Windows or Linux. I could add it to the fsrw block layer's code, I guess, though it probably won't get used very often. I could leave it commented out by default, leaving it up to a power user to enable and call the functionality. Alternatively we could write a simple SPIN-only "erase_full_card" application. I think I would prefer to go this route.
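For reference, the core of such an erase app would be tiny. A sketch, assuming a send_cmd(cmd, arg) SPI helper that returns the R1 response (these are not fsrw routine names), and noting that standard-capacity cards take byte addresses here while SDHC cards take block addresses:
PRI erase_range(start_addr, end_addr) : r
  r := send_cmd(32, start_addr)      ' ERASE_WR_BLK_START_ADDR
  if r == 0
    r := send_cmd(33, end_addr)      ' ERASE_WR_BLK_END_ADDR
  if r == 0
    r := send_cmd(38, 0)             ' ERASE; the card then holds DO low while busy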
What do you think?
Jonathan
In the ZiCog emulator we have one or more 8MB CP/M disk images sitting in the FAT file system on SD as files.
Now if someone comes up with a CP/M application that hammers on its files inside that image, it would be nice to know that the rest of the SD space, which is never going to be used, is in such a state that it is free for wear levelling.
This was worrying Cluso as we have now packed 4 CP/M sectors of 128 bytes into each 512-byte SD block, potentially leading to 4 times more write activity on the card.
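(For anyone following along, that 4-into-1 packing is just this address arithmetic; the names here are illustrative, not the actual ZiCog code:)
PRI cpm_to_sd_block(cpm_sector) : sd_block
  sd_block := cpm_sector >> 2        ' four 128-byte CP/M sectors per 512-byte SD block
PRI cpm_to_offset(cpm_sector) : offset
  offset := (cpm_sector & 3) << 7    ' byte offset of the CP/M sector within the SD block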
What you are suggesting sounds like just the ticket.
A standalone erase application would be just fine.
The SecureDigital cards contain an internal controller that is not only responsible for, but generally
does an admirable job of, wear leveling. It's not something you need to worry about.
I believe the cards themselves contain sufficient extra capacity over that advertised where they
can handle the wear leveling (and remap around bad sectors as well) even on a full card.
That said, a fully erased card *might* be somewhat faster on writes than one not fully erased.
But in general, you don't have to worry about it.
Yes, rokicki is correct. The wear levelling issue is handled internally to the SD card and we should not try and defeat that algorithm.
My concern was that we were writing to the same 512-byte block 4 times to do an update. Now this 512-byte block is in fact some part of a larger block (cluster if you like) that is being rewritten. What is the exact impact of packing the 512 with 4x128? Not sure; it will be higher, but on reflection, who cares. If we get to a point when the wear levelling fails and we need a new card, so what? It is ~$10.
Likewise, I do not know how smart the card is. Maybe it waits a small time before writing to ensure another adjacent sector is being written???
Maybe someone would like to destroy a card by reading and writing the same sector until failure and see
(a) how long it takes (days/weeks/months/years) ?
(b) how many writes it takes?
I suspect we would not get results for some time.
rokicki: "It's not something you need to worry about"
Reality is I'm not going to worry about it so much as I don't imagine having an app that would want to perform so many 100s of thousands of writes.
Thing is I'd like to have some idea about why I should not worry about it rather than just accepting the hand waving that comes out of the SD manufacturers. Engineers like to have specifications for such things and in this case it seems to be impossible to get hold of.
So for example does my scenario of writing "once" to every block shorten the available life as I describe? If not, why not? Is there extra capacity for swapping? How much? etc etc. Does lonesock's plan for an "erase all" help at all? Or does it make it worse?
Cluso: I thought about doing some testing as well but it seems impractical:
1) It might take a long time as you say.
2) Cards are probably very variable so you would have to test many to get some average. More time, a lot of expense.
I wondered about this as well for things like Unix log files, which can zip out of control in short order if you have a couple of chatty errors.
I certainly wouldn't want to use the things for "virtual memory" like Unix swap.
It would be worth burning a card for, but only if times were tracked also. I would suspect that you would see write performance of a block decrease gradually, until it was swapped and then decrease again.
I would use a small one though. A 2GB card would take quite a while.
That brings up a question. Can these things be partitioned?
I'm sure under Linux you can partition SDs with fdisk (or some graphical UI tool nowadays).
Actually I was just trying to put a new file system onto an SD with fdisk. It seemed to be quite happy. Sadly that SD seems to be dead anyway; neither Windows nor my phone nor Linux can do anything with it.
I have worked on many embedded Linux systems that run off of flash memory and CompactFlash cards. Just take care to turn off swap, and disable all but the most urgent log output. Look out for temp files and such as well.
Hundreds of such units have been working for many years now.
The way I understand it, it works something like this:
Flash is organized as blocks of some size, which may be larger than a 512 byte sector. There are typically more blocks than necessary. (So an 8GB card may have 9GB of blocks.)
Flash writes are done on a per-block basis. So a change to one sector in the block means the whole block is erased and re-written.
Instead of erasing and re-writing the same block, one of the spare blocks is written, then the old block is erased and used for future writes. (Each block has some additional data containing the external address.)
Blocks may be pooled, so there may be only a small number of spare blocks for a set of active blocks.
"Wear Leveling" is a natural side effect of having spare blocks. I believe more sophisticated flash will also rotate in blocks which have not been overwritten, to ensure a more even distribution of use. The spare blocks also provide the excess capacity to replace any failed blocks.
It should also be obvious that it's much better (and faster) to write multiple sectors at once so the flash block doesn't have to be re-written for each sector.
From the flash POV, it doesn't care that a file has been deleted and the space is available for re-use. Once the sector is re-used (by the OS), then the block gets rotated out into the spare blocks.
From the OS POV, the flash takes care of wear leveling. What the OS can do to help is to spread writes over the entire "disk", instead of repeatedly updating the same sectors. In this respect FAT is bad for flash because the FAT and root directory sectors are regularly rewritten. However, at least it doesn't track last access time like NTFS normally does (which changes file reads into metadata writes). There are some "flash friendly" filesystems which attempt to use the "disk" as a circular buffer.
"From the flash POV, it doesn't care that a file has been deleted and the space is available for re-use. Once the sector is re-used (by the OS), then the block gets rotated out into the spare blocks."
Here is exactly my problem/lack of understanding. As I see it, from the FLASH POV it has no idea that an OS has no longer any use for a particular block. It has no idea if the data in that block needs to be kept. Therefore, if all blocks get written to just once, say I fill up the thing by writing 512 byte blocks from the Prop, it has no idea if or when it can reuse those blocks for wear levelling. That only leaves any "hidden" spare blocks that the card keeps around for the purpose.
So it looks like the life of the card can be dramatically reduced by filling it once, unless there is a way to do a "total erase" and free everything up again.
Now it just so happens that I have a fairly new, little-used SD in front of me that I can no longer write to; reads are fine. And it just happens that I was playing around with copying /dev/zero to it under Linux recently, which would surely touch every block I can reach.
Coincidence ?
"What the OS can do to help is to spread writes over the entire "disk", instead of repeatedly updating the same sectors."
This also gives me a problem. If what you say is true - "Instead of erasing and writing to the same block, instead one of the spare blocks is written, " Then it makes no difference if the OS constantly hammers on the same block. Because underneath it all it is not doing so. In fact it looks like the OS cannot help in wear levelling.
Now I have uncovered another under-specified feature of all this: when you write a block, some algorithm starts up in the SD and figures out what to erase and where to copy old data combined with the new etc., the wear levelling.
To the OS it looks like the write is "done" but in actual fact the work must be continuing for some time. So how long from a write operation to power down is required to maintain data integrity?
I just reformatted a 2GB card. Interestingly only 1.83GB is available. I know it will reserve some for the FAT16 but this is quite a bit less than what I would expect. Maybe this is explaining that some sectors are kept from external view???
Cluso99 said...
I just reformatted a 2GB card. Interestingly only 1.83GB is available. I know it will reserve some for the FAT16 but this is quite a bit less than what I would expect. Maybe this is explaining that some sectors are kept from external view???
No. This is because USB key/MMC/SD (and IIRC also HDD) producers use a K unit of 1000 while the PC operating system uses a K unit of 1024: 2,000,000,000 / 1024^3 ≈ 1.86GB.
The cards are spec'ed such that, once the write data has been "accepted", and so long as power is maintained for a short period of time
(less than a second), the data will be written, no matter how much work the controller has to do. In general, the controller says when it
will take the data; it can delay that "acceptance" of the data for some time directly in the protocol, so it can spend whatever time it needs
at that point.
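In SPI mode that "acceptance" is visible on the bus: after the data block is sent, the card holds DO low until it has finished committing. A sketch of the wait, assuming a read_byte helper that clocks one byte in over SPI (not an fsrw routine name):
PRI wait_write_complete | b
  repeat
    b := read_byte                   ' keep clocking while the card is busy
  until b == $FF                     ' card releases DO (reads as $FF) when it is done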
The SD manufacturers are pretty secretive with their details (although some SD controller chip datasheets have leaked out). So to get
definitive answers, you can either try to do so empirically (and then the results may not apply to other cards), or you can pester the
manufacturers. I believe the data for the flash chips themselves are available and well understood, so there's not a lot to that part.
But actually divining the flash algorithms may be tricky.
On the failed card, it would be useful to hook it up to fsrw and look at the error code that is coming back from the card on writes.
I would be surprised if the problem was actually due to flash wear (but you never know).
On the sizes, I believe *most* 2GB flash cards are not actually 2GB, but more around 1.9GB, even taking decimal vs binary into
account. Typically a portion of the card is reserved for the "secure" partition (even though this is seldom used these days). Of
course the FAT itself takes some of the space (from about 0.01% on FAT16 on 32K sectors all the way up to 1.6% on FAT32 at
512 byte sectors).
If you want a hard and fast answer on the overcapacity issue, just break open a card and look up the flash chips that are used.
I have done some pretty extensive testing on my cards, including full-bore writing for days, and have yet to have a card have
any visible problems.
In any case, investigating all of this would be entertaining but tedious; I for one have not had the patience or time.
Note that these devices are *designed* for cameras, with high-speed pictures and video coming in, written in FAT16 or FAT32,
and for the case of video, at very high rates over and over again. If the cards "wore out" under any sort of reasonable use,
I am sure we would see discussions about that very issue. But in general, the only complaints I ever see is that a particular
card doesn't work at all to start with, not that they fail after time.
dMajo: No. In this case the SD card's marketing speaks in true engineering binary Mebi and Gibi.
I have a 1GB SanDisk micro SD card here. If I try to write an infinite supply of zeros to it then it happily accepts 1GiB (1024*1024*1024 bytes, one gibibyte) before giving up.
If I read the entire card I get exactly that back again, 1GiB.
Cluso: I might conclude the FAT does indeed take a lot of space.
rokicki: Well I found a few minutes to play with my dead card. The card contains about 2MB of data at the beginning that I cannot alter. I can see from using "dd" and "hexdump" under Linux that those 2MB still contain old CP/M disk image data that I put there.
Turns out I can write all over this card, except those first 2MB.
Using "dd" I have set the entire content of the card to zero and then to 0xFF. In each case verifying the write with dd and hexdump. In each case finding that the first 2MB does not get written.
According to Panasonic's SD card formatter program the card is write protected; it is not.
So far with the FSRW demo I only get as far as:
> mount
mount returned
Then it hangs. What to do?
heater said...
Here is exactly my problem/lack of understanding. As I see it, from the FLASH POV it has no idea that an OS has no longer any use for a particular block. It has no idea if the data in that block needs to be kept. Therefore, if all blocks get written to just once, say I fill up the thing by writing 512 byte blocks from the Prop, it has no idea if or when it can reuse those blocks for wear levelling. That only leaves any "hidden" spare blocks that the card keeps around for the purpose. So it looks like the life of the card can be dramatically reduced by filling it once, unless there is a way to do a "total erase" and free everything up again.
You're on the right track. The flash assumes a block contains valid data until it is written to. So if only 1 block is used then it and the spare blocks in the same pool will be used while the other blocks will not be used. Again, some flash controllers try to resolve this inequity automatically.
heater said...
"What the OS can do to help is to spread writes over the entire "disk", instead of repeatedly updating the same sectors."
This also gives me a problem. If what you say is true - "Instead of erasing and writing to the same block, instead one of the spare blocks is written, " Then it makes no difference if the OS constantly hammers on the same block. Because underneath it all it is not doing so. In fact it looks like the OS cannot help in wear levelling.
Ahh, but the spare blocks typically aren't shared across the entire disk. So if the OS is always writing to the first n sectors (e.g. the FAT) and those sectors are in a single pool, then the blocks in that pool will be used more heavily. It may be that some flash controllers reorder the address lines to automatically put consecutive sectors in different pools, but that doesn't help if the OS rewrites a single sector.
Ok, I'm with you on the spare blocks being attached to "pools". Still it seems my argument could be re-stated in terms of those pools rather than the entire SD. If I have written, just once, to all blocks in a pool, the SD has no way to know when they can be used again for wear levelling. It only has "hidden" blocks to play with instead of all of them. Or am I wrong in assuming it ever uses anything but "hidden" blocks for wear levelling?
So I'm with you on the FAT (and other fs) problem now.
Just starts me thinking. Currently my CP/M emulator uses raw SD cards, no FAT, and puts its CP/M disk images directly into blocks on the card. The biggest image possible under CP/M 2 is 8MB.
So if CP/M hammers on its files it will always be working within a small number of these "pools". Not good. But there are another 128 8MB-sized areas on that card. So after some maximum number of writes within one CP/M disk the emulator could just move the whole CP/M disk image to another spot and start wearing that down instead. Bingo, 128 times longer life!
Actually I could do that now with my "dead" card that has all but 2MB writeable now.
It's unlikely I would ever get into implementing that though, especially as there is a push to put the CP/M images inside FAT (Blech) for convenience.
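For what it's worth, the core of that relocation idea is only a few lines. A sketch, using the readSDcard/writeSDcard style calls from the top of this thread as stand-ins for whatever block layer is actually in use (no wraparound or wear counting here):
CON
  BLOCKS_PER_IMAGE = 8 * 1024 * 1024 / 512   ' 16384 SD blocks per 8MB CP/M image
VAR
  long image_slot                            ' which 8MB region currently holds the image
PRI cpm_block_to_sd(blk) : sdblk
  sdblk := image_slot * BLOCKS_PER_IMAGE + blk
PRI relocate_image | i, buf[128]
  repeat i from 0 to BLOCKS_PER_IMAGE - 1    ' copy the whole image, one block at a time
    sd.readSDcard(cpm_block_to_sd(i), @buf, 512)
    sd.writeSDcard((image_slot + 1) * BLOCKS_PER_IMAGE + i, @buf, 512)
  image_slot++                               ' all further accesses hit the fresh region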
heater said...
...
rokicki: Well I found a few minutes to play with my dead card. The card contains about 2MB of data at the beginning that I cannot alter. I can see from using "dd" and "hexdump" under Linux that those 2MB still contain old CP/M disk image data that I put there.
Turns out I can write all over this card, except those first 2MB.
Using "dd" I have set the entire content of the card to zero and then to 0xFF. In each case verifying the write with dd and hexdump. In each case finding that the first 2MB does not get written.
According to Panasonic's SD card formatter program the card is write protected; it is not.
So far with the FSRW demo I only get as far as:
> mount
mount returned
Then it hangs. What to do?
The SD spec allows you to set a "protect" bit for specific data blocks (sized "WP_GRP_SIZE", as defined by the card). SDHC cards do not support this, but for regular SD cards 2GB or less, it is possible to write-protect only certain addresses this way. I have no idea if this is truly the state of your card, or if it is, how it got into that state.
Regarding the mount hanging, which version of the FSRW demo are you using?
Jonathan
Well, since this card came out of my mobile phone, where it was new and pretty much unused, it has not had a FAT file system on it. As you can gather from my posts it has been subject to a lot of direct block writes, either by "dd" under Linux or by the CP/M emulator. The emulator uses sdspiqasm.spin. So I guess there has been a lot of scope for abuse.
The version of fsrw I have here is v2.1, 12 July 2009.
Is there any way to un-stick this card that I could try? Sure I could bin it and get a new one but this bugs me as most of it seems to be usable and it's interesting to learn what goes on anyway.
heater said...
Well, since this card came out of my mobile phone, where it was new and pretty much unused, it has not had a FAT file system on it. As you can gather from my posts it has been subject to a lot of direct block writes, either by "dd" under Linux or by the CP/M emulator. The emulator uses sdspiqasm.spin. So I guess there has been a lot of scope for abuse.
The version of fsrw I have here is v2.1, 12 July 2009.
Is there any way to un-stick this card that I could try? Sure I could bin it and get a new one but this bugs me as most of it seems to be usable and it's interesting to learn what goes on anyway.
Well, if the issue is actually the protect bit, I could try another all-SPIN app to un-protect every block. I don't really see much documentation on this, though, and honestly I don't believe this feature will be of general use. If you want to try hacking it yourself I could maybe give you a few pointers?
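(A very rough sketch of what such an un-protect pass might look like, assuming the same kind of hypothetical send_cmd and read_byte helpers as in the erase sketch earlier; CMD29 is CLR_WRITE_PROT and the group size in blocks comes from the card's WP_GRP_SIZE field in the CSD:)
PRI clear_all_protect(total_blocks, wp_group_blocks) | addr, b
  repeat addr from 0 to total_blocks - 1 step wp_group_blocks
    send_cmd(29, addr * 512)         ' CLR_WRITE_PROT; byte address on standard-capacity cards
    repeat                           ' the card holds DO low while it clears the group
      b := read_byte
    until b == $FF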
Regarding the mount hanging, maybe you could try it with the latest code? I believe I posted it a page or two back. (Sorry for the lack of an official release.) There is some extra debug code you can uncomment at the end of the SPIN send_cmd_slow routine (and a few other places, which the compiler should be able to highlight for you). Feel free to send me a PM or email my gmail account (same username) with the log values if you go this route.
Jonathan
lonesock: I now have the files from your post of 7/28/2009.
I have uncommented that debug section in send_cmd_slow and other bits that the compiler complained about.
Result is exactly the same. Hangs after "mount returned".
I could try and hack an un-protect if you have some pointers and it's not going to get too long-winded.