Use of 16-bit words instead of 32-bit integers?
twm47099
Posts: 867
I thought this would be a simple question, so I decided to test it rather than asking. But the result surprised me.
The question: if I want to save memory when using C with SimpleIDE, would it be a good idea to use 16-bit words rather than 32-bit integers?
So in a small program I had, I tried changing the declarations of some integers that would hold small values from int to uint16_t. For example, these globals that are now uint16_t were originally int:

    char c1;                // raw LS byte values from Pixy
    uint16_t i;
    uint16_t j;             // number of Pixy words
    uint16_t k;
    uint16_t numbks = 6;
    uint16_t cktot;
    uint16_t flg;           // flg = 1 when first sync word located
    int pv[7][7];           // Pixy words

I built the program (hammer icon).
The memory for the uint16_t version was: Code size = 5464 (5904 Total).
The memory for the int version was: Code size = 5388 (5836 Total).
So the 32-bit integer version was smaller. Is this due to the way C aligns variables, because a different library is used, or something else?
Is it worthwhile to declare smaller variables? If so, when?
Thanks
Tom
Comments
It's probably only worth it to declare smaller variables when you have arrays of them bigger than 256 elements or so.
Why would that be so? Hub RAM access tends to be easiest and most direct at the PASM level in 32-bit units; smaller sizes require a bit more management.
It seems GCC has built some extra code in there to take care of type conversion or alignment issues someplace. But what, exactly?
You could compile the C code down to assembler output and have a look at the differences in generated code. Use the -S and -o options to propgcc:
As well as whatever other options SimpleIDE is using when you compile your code.
Do post the result, we'd like to see what is going on.
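Something along these lines, assuming the propgcc driver is named propeller-elf-gcc and an LMM build; the exact binary name, memory-model flag, and optimization options should be copied from SimpleIDE's build output rather than from here:

```shell
# Generate assembly listings for the two versions (compiler name assumed).
propeller-elf-gcc -S -Os -mlmm main_int.c -o main_int.s
propeller-elf-gcc -S -Os -mlmm main_u16.c -o main_u16.s

# Compare the generated code side by side.
diff -u main_int.s main_u16.s | less
```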
What I have learned is that the cog is truly 32-bit, while Hub RAM and EEPROM are really 8-bit storage devices adapted to 32-bit use.
We need 8-bit data frequently for ASCII representations, but speed generally wins with 32-bit integer storage. Much of what makes 16-bit useful is how contiguous the data is. A long look-up table might work well, but putting all variables in 16 bits may not be worth the trouble.
In other words, I would only bother with 16 bits if I needed to pack more into Hub RAM for a specific project. Otherwise, I would just stick with 32 bits for integers and 8 bits for character strings.
Users new to the Propeller and using C may not be aware of the architectural quirks involved and just expect the compiler to clean up something they ported from elsewhere. It is not that simple.
However, internet access is now available only on my tablet, or, almost as bad, on my old netbook.
I did run two C programs, each with the variables declared once as int and a second time as uint16_t.
Program 1 was a simple while loop that added three of the variables until the fourth reached 100.
Both the code size and the total size were the reverse of my original question: the int version was larger than the 16-bit version.
The second program was an empty main() { }. The code size for each was the same, but the total size was larger for the int version (as I had expected). Unfortunately, that was when I got called away.
Tom
Obviously, 16-bit data is much smaller, so the larger code size for 16-bit variables may be worth it, but it is a trade-off you'll have to be aware of.