Truncated SD card Files — Parallax Forums

Truncated SD card Files

BTL24 Posts: 54
edited 2014-07-16 07:55 in Propeller 1
Does anyone know an easy way to truncate data off the end of an SD card file?

I have been researching the truncate( ) and ftruncate( ) functions for slicing data bytes off the end of a file, but I can't seem to get them to work. I think I am setting them up wrong (i.e., file descriptors versus file paths).
#include <unistd.h> 
int ftruncate(int fildes, off_t length); 
int truncate(const char *path, off_t length);
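For reference, on a hosted POSIX system the two calls differ only in how the file is named: ftruncate( ) takes an open file descriptor, truncate( ) takes a path. A minimal sketch of both forms (the function name shrink_to and the fallback logic are illustrative, not from any particular library):

```c
#include <fcntl.h>     /* open, O_WRONLY */
#include <sys/types.h> /* off_t */
#include <unistd.h>    /* truncate, ftruncate, close */

/* Shrink a file to 'len' bytes: first via the path-based call, then
   (equivalently) through a file descriptor. Returns 0 on success. */
static int shrink_to(const char *path, off_t len)
{
    if (truncate(path, len) == 0)       /* path-based form */
        return 0;

    int fd = open(path, O_WRONLY);      /* descriptor-based form */
    if (fd < 0)
        return -1;
    int rc = ftruncate(fd, len);
    close(fd);
    return rc;
}
```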

I could also do it by opening the file, reading the data sequentially using fread( ), then writing it out to a new file using fwrite( ) in a loop... but the C library functions take quite a while to perform this task on large files. Not sure, but I am guessing the C library accesses the SD card using SPI rather than a faster parallel interface protocol.

Regards,
Brian (BTL24)

Comments

  • Dave Hein Posts: 6,347
    edited 2014-07-14 18:12
    truncate and ftruncate are not implemented in PropGCC. You will have to use the fread/fwrite method. It will run faster if you do the reads and writes in 512-byte chunks, or multiples of 512 bytes.
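The copy approach Dave describes might look like the sketch below: plain stdio, nothing PropGCC-specific, with the function name and error handling made up for illustration. It copies the first keep bytes of src to dst in 512-byte chunks.

```c
#include <stdio.h>

/* Copy the first 'keep' bytes of 'src' to 'dst' in 512-byte chunks,
   i.e. "truncate" by rewriting. Returns 0 on success, -1 on failure
   (including 'keep' being larger than the source file). */
static int copy_truncated(const char *src, const char *dst, long keep)
{
    char buf[512];
    FILE *in = fopen(src, "rb");
    if (!in)
        return -1;
    FILE *out = fopen(dst, "wb");
    if (!out) { fclose(in); return -1; }

    long remaining = keep;
    while (remaining > 0) {
        size_t want = remaining > 512 ? 512 : (size_t)remaining;
        size_t got = fread(buf, 1, want, in);
        if (got == 0)                        /* EOF or read error */
            break;
        if (fwrite(buf, 1, got, out) != got) /* write error */
            break;
        remaining -= (long)got;
    }
    fclose(in);
    fclose(out);
    return remaining == 0 ? 0 : -1;
}
```

On the Propeller you would then delete the original and rename the copy; the 512-byte chunk size matches the SD sector size, which is what makes it fast.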
  • BTL24 Posts: 54
    edited 2014-07-14 19:35
    Dave Hein wrote: »
    truncate and ftruncate are not implemented in PropGCC. You will have to use the fread/fwrite method. It will run faster if you do the reads and writes in 512-byte chunks, or multiples of 512 bytes.

    Thanks Dave. Good catch on the bigger chunks. I have been reading and writing in 4-byte chunks... no wonder it is slow.

    This may solve another timing issue I have with a critical loop of reads and writes. Wow...2 solutions in one answer!

    Regards,
    Brian (BTL24)
  • DavidZemon Posts: 2,973
    edited 2014-07-15 07:40
    Do the default PropGCC SD routines buffer only a single block at a time? Are there any tunable parameters to allocate 2 or more 512 B blocks?
  • jazzed Posts: 11,803
    edited 2014-07-15 08:42
    DavidZemon wrote: »
    Do the default PropGCC SD routines buffer only a single block at a time? Are there any tunable parameters to allocate 2 or more 512 B blocks?
    There are no tunable parameters that I know of (the code is not mine). You are welcome to look at the library and add/suggest features of course.
  • Dave Hein Posts: 6,347
    edited 2014-07-15 09:27
    When a file is opened, the file system will malloc a dedicated buffer for each opened file. This buffer is 512 bytes in size for the XMM/XMMC modes, but only 16 bytes for the LMM/CMM modes. When only a few bytes are read or written at a time, the data is transferred to/from the dedicated file buffer. When a large number of bytes are read or written, the data is accessed directly from the user buffer.

    One issue is that the file system does all of its SPI transfers through a single shared 512-byte scratch buffer. So if only a few bytes are read or written at a time, the same sector must be re-read multiple times. It may be that using a 16-byte buffer for LMM/CMM was not a good choice, and we should use 512 bytes for all modes. After a file is opened it is possible to change the size of the buffer. I'll look into how this would be done.
  • Dave Hein Posts: 6,347
    edited 2014-07-15 10:35
    So I ran a few tests copying one file to another, and using small block sizes is very slow. I got the following results copying a 20,800-byte file:
    block size    time
    ----------  --------
         4      29.4 sec
        16      29.2
       256       3.63
       512       1.97
      4096       0.54
    
    I then added the following lines after opening the input and output files:
      setvbuf(infile, inbuf, _IOFBF, 512);
      setvbuf(outfile, outbuf, _IOFBF, 512);
    
    inbuf and outbuf are declared as "int inbuf[128], outbuf[128]". The results were then:
    block size    time
    ----------  --------
         4       1.23 sec
        16       1.05
       256       0.90
       512       0.93
      4096       0.42
    
    setvbuf could be called with a null pointer for the second parameter, and then the buffer would be malloc'ed. However, there is an issue in our file driver where the buffer would not be freed when the file is closed. We should fix that.

    So instead of using small buffers for LMM/CMM we should probably use a buffer size of 512 for all of the memory models. The programmer could use setvbuf after they open the file if they want to use a smaller buffer.
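As a convenience, the setvbuf( ) call could be wrapped in a small open helper. A sketch assuming plain stdio (the name fopen_buffered is made up here, and passing NULL asks the library to malloc the buffer itself, subject to the free-on-close caveat noted above):

```c
#include <stdio.h>

/* Open a file and give it a 512-byte, fully buffered stdio buffer.
   setvbuf() must be called before the first read or write on the
   stream. NULL lets the library allocate the buffer (see the caveat
   above about the driver not freeing it on fclose). */
static FILE *fopen_buffered(const char *path, const char *mode)
{
    FILE *f = fopen(path, mode);
    if (!f)
        return NULL;
    if (setvbuf(f, NULL, _IOFBF, 512) != 0) {
        fclose(f);          /* buffering request failed */
        return NULL;
    }
    return f;
}
```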
  • BTL24 Posts: 54
    edited 2014-07-15 11:01
    BTL24 wrote: »
    Thanks Dave. Good catch on the bigger chunks. I have been reading and writing in 4-byte chunks... no wonder it is slow.

    This may solve another timing issue I have with a critical loop of reads and writes. Wow...2 solutions in one answer!

    Regards,
    Brian (BTL24)

    Dave et al...

    You were right on about the data chunk size. The critical timing loop I referenced here dropped 9 ms per cycle by changing the read/write data record size from 4 bytes to 32 bytes. I realized an 18% increase in speed with just that little change. I can imagine what speed benefits a 512-byte buffer would bring.

    BTW... your block size versus time chart is spot on. I see similarly proportional results for a 12,800-byte file using a 4-byte block size in my project.

    Good stuff... thanks again....
    Brian (BTL24)
  • DavidZemon Posts: 2,973
    edited 2014-07-16 07:55
    Thanks for the investigation Dave! Those are some incredible numbers to point out and a great solution.