Object info and assembler questions
Fred Hawkins
Posts: 997
When I look at assembly with the show-hex viewer [F8], I notice that the strings declared in a DAT area read left to right as usual. But (and I am having a hard time with this) both numbers and source code are lsbyte to msbyte. Why? What sort of advantage is there to doing it backwards?
Follow-on question: is the hex view's byte order the same as what is put into cog memory? Or are we looking at something that is put into order for the Hub's sake?
Assembler question: is there something like the 9900's $ (here)? In other environments, one could measure a string's length with the difference between a symbol's start and here:
oldasm      text "string"
            byte $-oldasm
This would put six bytes of ASCII, followed by $06, into seven sequential bytes.
Comments
The byte order you see in the hex viewer is the same as what is in hub memory. The characters of a string are of type byte, so their order is the same as you gave them in the DAT section. The same goes for numbers of type byte.
If you use numbers of type word or long, they are stored in the order the processor processes them, which is little-endian on the Propeller.
http://en.wikipedia.org/wiki/Byte_order
The German page has a nice picture that shows the differences side by side.
http://de.wikipedia.org/wiki/Byte-Reihenfolge
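Thomas's distinction between byte data and word/long data can be sketched in Python (the values are made up for the demo; `struct`'s `'<'` forces little-endian packing, matching the Propeller hub):

```python
import struct

# String data is type byte: each character occupies one byte, in order.
s = b"Hi"
print(s.hex(' '))                    # 48 69 -- same order as written

# A long is stored little-endian: least significant byte first.
n = struct.pack('<I', 0x11223344)
print(n.hex(' '))                    # 44 33 22 11 -- "backwards" in the viewer
```

So a string looks normal in the hex viewer while a long appears reversed, which is exactly the effect described in the original question.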
There was a discussion in the forum some months ago about relative address calculation, but I can't find it at the moment. I believe the Prop Tool cannot do this on the fly, but you can mark the end of the string with a local label and take the difference of the two labels instead.
Thomas
The wiki article reminds me why I despised the Intel x86 and loved the Motorola 68K and TMS 9900. I once spent weeks figuring out some operating system's file allocation table from a data dump (just because). It was on a big-endian system which had adopted PC-compatible drives. I think I gave up looking under the hood shortly afterwards.
best wiki quote:
Little-endian has the property that, in the absence of alignment restrictions, values can be read from memory at different widths without using different addresses. For example, a 32-bit memory location with content 4A 00 00 00 can be read at the same address as either 8-bit (value = 4A), 16-bit (004A), or 32-bit (0000004A). (This example works only if the value makes sense in all three sizes, which means the value fits in just 8 bits.) This little-endian property is rarely used, and doesn't imply that little-endian has any performance advantage in variable-width data access.
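The quoted property is easy to demonstrate; a small Python sketch of the quote's 4A 00 00 00 example:

```python
import struct

# The same four little-endian bytes, read at three different widths
# starting from the same address.
mem = bytes([0x4A, 0x00, 0x00, 0x00])

as8 = mem[0]                               # 8-bit read
(as16,) = struct.unpack_from('<H', mem)    # 16-bit read
(as32,) = struct.unpack_from('<I', mem)    # 32-bit read

print(hex(as8), hex(as16), hex(as32))      # 0x4a 0x4a 0x4a
```

All three reads yield the same value, as the quote says, because the low-order byte sits at the lowest address.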
That doesn't apply to the Propeller, but maybe Chip was just following the precedent set by the x86 architecture.
The ARM chip is configurable as either big- or little-endian, which pretty much proves it's not an issue for hardware these days.
But yes, I'm absolutely with you in preferring the 68K series to the x86 series back in the day. Between that and the segment registers and other nasties, I just refused to learn x86 assembler. Yuck.
Voila! Little-endianness. So my question has become whether or not the entire manual's opcode description is backwards (and peculiar, when you consider status-flag bit boundaries, the CON bits, and the two 9-bit-wide s and d fields).
Logical view of ABS D,S: 101010 001i 1111 ddddddddd sssssssss
Nibble & word view of the same: 1010 1000 1i11 11dd dddd ddds ssss ssss
hex view: ssss ssss dddd ddds 1i11 11dd 1010 1000
Notice how the msbit of s floats down to the lsbit of the second byte in memory. (I think.) Maybe this is why compilers exist.
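The shuffling above can be sketched in Python (field positions taken from the logical view in this thread; the register addresses 0x1F0 and 0x1E5 are arbitrary values chosen for the demo):

```python
import struct

def encode_abs(d, s, i=0):
    """Pack an ABS D,S instruction from the fields discussed above."""
    instr = 0b101010            # 6-bit opcode for ABS
    zcri  = 0b0010 | i          # z=0, c=0, r=1 (write result), i = immediate bit
    cond  = 0b1111              # if_always condition
    return (instr << 26) | (zcri << 22) | (cond << 18) | (d << 9) | s

word = encode_abs(d=0x1F0, s=0x1E5)
print(f'{word:032b}')                     # logical, msb-first view
print(struct.pack('<I', word).hex(' '))   # byte order as the hex viewer shows it
```

The second line of output starts with the s byte and ends with the opcode byte, i.e. the hex viewer shows the logical encoding back to front, byte by byte.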
The x86 architecture has its roots in the Datapoint 2200 instruction set of about 35 years ago, back when a processor had only a few registers, main memory consisted of serial shift registers, and where you put your data relative to your instructions mattered in tight, time-dependent loops. Why it was used by Intel and carried forward to the present day is a long story.