Truncating SD Card Files
BTL24
Posts: 54
Does anyone know an easy way to truncate data off the end of an SD card file?
I have been researching the truncate() and ftruncate() functions for slicing data bytes off the end of a file, but I can't seem to get them to work. I think I am setting them up wrong (i.e., file pointers versus file paths).
#include <unistd.h>

int ftruncate(int fildes, off_t length);
int truncate(const char *path, off_t length);
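For what it's worth, the two functions take different first arguments: truncate() takes a path string, while ftruncate() takes an integer file descriptor, not a FILE * stream. A minimal sketch of both calls, assuming a POSIX-style library where fileno() maps a FILE * to its descriptor (the file name and lengths below are just placeholders):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* truncate() takes a path string: cut the file down to 1024 bytes */
    if (truncate("data.log", 1024) != 0)
        perror("truncate");

    /* ftruncate() takes an integer descriptor, not a FILE *;
       fileno() converts, assuming the library provides it */
    FILE *fp = fopen("data.log", "r+");
    if (fp != NULL) {
        if (ftruncate(fileno(fp), 512) != 0)
            perror("ftruncate");
        fclose(fp);
    }
    return 0;
}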
I could also do it by opening the file, reading the data sequentially with fread(), and then writing it out to a new file with fwrite() in a loop... but the C library functions take quite a while to perform this task on large files. I'm not sure, but I'm guessing the C library accesses the SD card over SPI rather than a faster parallel interface.
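A rough sketch of that copy loop is below, using a 512-byte buffer to match the SD sector size (copy_prefix and the file names are just placeholder names for illustration):

#include <stdio.h>

/* Copy the first 'keep' bytes of src to dst in 512-byte chunks,
   returning the number of bytes actually copied. */
long copy_prefix(const char *src, const char *dst, long keep)
{
    unsigned char buf[512];
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    long copied = 0;

    if (in != NULL && out != NULL) {
        while (copied < keep) {
            size_t want = (size_t)(keep - copied);
            if (want > sizeof(buf))
                want = sizeof(buf);
            size_t n = fread(buf, 1, want, in);
            if (n == 0)
                break;              /* EOF or read error */
            fwrite(buf, 1, n, out);
            copied += (long)n;
        }
    }
    if (in != NULL)
        fclose(in);
    if (out != NULL)
        fclose(out);
    return copied;
}

After the copy, the original file would be removed and the new file renamed in its place (remove() and rename() in the standard library).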
Regards,
Brian (BTL24)
Comments
Thanks Dave. Good catch on the bigger chunks. I have been reading and writing in 4-byte chunks... no wonder it is slow.
This may solve another timing issue I have with a critical loop of reads and writes. Wow... two solutions in one answer!
Regards,
Brian (BTL24)
One issue is that the file system does all of its SPI transfers through a single shared 512-byte scratch buffer. So if only a few bytes are read or written at a time, the same sector must be re-read multiple times. It may be that using a 16-byte buffer for LMM/CMM was not a good choice, and we should use 512 bytes for all modes. After a file is opened it is possible to change the size of the buffer. I'll look into how this would be done.
So instead of using small buffers for LMM/CMM we should probably use a buffer size of 512 for all of the memory models. The programmer could use setvbuf after they open the file if they want to use a smaller buffer.
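Something along these lines, assuming the standard setvbuf() behavior (it must be called before any other I/O on the stream; open_sector_buffered is just an illustrative wrapper):

#include <stdio.h>

/* Open a file and give its stream a full 512-byte buffer supplied by
   the caller, so each transfer can move a whole SD sector at once. */
FILE *open_sector_buffered(const char *path, const char *mode, char *buf)
{
    FILE *fp = fopen(path, mode);
    if (fp != NULL) {
        /* setvbuf() must run before any read or write on the stream;
           _IOFBF selects fully buffered mode */
        if (setvbuf(fp, buf, _IOFBF, 512) != 0) {
            /* buffer change failed; the stream keeps its default buffer */
        }
    }
    return fp;
}

Usage would be something like: declare a static char buf[512], then call open_sector_buffered("data.log", "rb", buf) in place of fopen().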
Dave et al...
You were right on about the data chunk size. The critical timing loop I referenced here dropped 9 ms per cycle after changing the read/write data record size from 4 bytes to 32 bytes. I realized an 18% increase in speed with just that little change. I can imagine what speed benefits a 512-byte buffer would bring.
BTW... your block size versus time chart is spot on. I too see similar proportional results for a 12,800-byte file using a 4-byte block size in my project.
Good stuff... thanks again....
Brian (BTL24)