
Object info and assembler questions

Fred Hawkins Posts: 997
edited 2007-06-23 03:08 in Propeller 1
When I look at assembly with the hex viewer [F8], I notice that the strings declared in a DAT area read left to right like usual. But (and I am having a hard time with this) both numbers and the assembled code are lsbyte to msbyte. Why? What sort of advantage is there to doing it backwards?

Follow-on question: is the hex view's byte order the same as what is put into cog memory? Or are we looking at something that is put into order for the Hub's sake?

Assembler question: is there something like the 9900's $ (here)? In other environments, one could measure a string's length with the difference
between a symbol's start and here:
oldasm   text 'string'
         byte $-oldasm

This would put the six bytes of ASCII, followed by $06, into seven sequential bytes.




Comments

  • Kaio Posts: 253
    edited 2007-06-22 09:22
    Fred,

    The byte order you see in the hex viewer is the same order you will find in hub memory. The characters of a string are of type byte, so they keep the order you gave them in the DAT section. The same goes for numbers of type byte.

    If you use numbers of type word or long, they are stored in the order the processor processes them, which is little-endian on the Propeller.
    http://en.wikipedia.org/wiki/Byte_order

    The German page has a nice picture that shows the differences side by side.
    http://de.wikipedia.org/wiki/Byte-Reihenfolge
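
    For example, a DAT block like this (just a sketch, with arbitrary values) shows the difference directly in the hex viewer:

    mylong  long  $11223344      ' hex viewer: 44 33 22 11 (low byte first)
    myword  word  $1234          ' hex viewer: 34 12 (low byte first)
    mystr   byte  "ABC"          ' hex viewer: 41 42 43 (declared order)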


    There was a discussion in the forum some months ago about relative address calculation, but I have not been able to find it. I believe the Prop Tool cannot do this on the fly, but you can use the following code instead, which uses a local label.
    oldasm        byte      "string"
    :current      byte      :current - oldasm
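
    The same idea with @ offsets, plus a run-time check, could look like this (only a sketch with my own labels; @ gives object-relative offsets in a DAT section, and the base cancels out of the difference):

    PUB show_len : len
      len := byte[@mylen]            ' reads the stored length byte at run time

    DAT
    mystr   byte  "string"
    mylen   byte  @mylen - @mystr    ' label difference: assembles to 6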
    
    Thomas
  • Fred Hawkins Posts: 997
    edited 2007-06-22 16:32
    Thanks Thomas. At first glance at the hex I knew I was in the land of heathens, and now it's confirmed. I shall learn to get along at least, and may eventually convert. (Don't they know that they're all going to hell?)

    The wiki article reminds me why I despised the Intel x86 and loved the Motorola 68k and TMS 9900. I once spent weeks figuring out some operating system's file allocation table from a data dump (just because). It was on a big-endian system which had adopted PC-compatible drives. I think I gave up looking under the hood shortly afterwards.

    Best wiki quote:

    Little-endian has the property that, in the absence of alignment restrictions, values can be read from memory at different widths without using different addresses. For example, a 32-bit memory location with content 4A 00 00 00 can be read at the same address as either 8-bit (value = 4A), 16-bit (004A), or 32-bit (0000004A). (This example works only if the value makes sense in all three sizes, which means the value fits in just 8 bits.) This little-endian property is rarely used, and doesn't imply that little-endian has any performance advantage in variable-width data access.
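
    On the Propeller you can watch that property work directly with Spin's byte/word/long memory operators (a sketch; the names are mine):

    PUB read_widths | b, w, l
      b := byte[@value]     ' $4A
      w := word[@value]     ' $004A
      l := long[@value]     ' $0000004A

    DAT
    value   long  $4A       ' sits in hub RAM as 4A 00 00 00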
  • CardboardGuru Posts: 443
    edited 2007-06-22 18:23
    Obviously programmers prefer big-endian. It's easier for humans to read. But in the case where the processor has instructions to do arithmetic on data that is bigger than the data bus width, it's easier (cheaper) for the hardware to do little-endian. Because of the carry, the arithmetic has to start with the low byte, so that's the order it needs to be read from memory. And program counters work forwards, not backwards.

    That doesn't apply to the Propeller, but maybe Chip was just following the precedent set by the X86 architecture.

    The ARM chip is configurable to either big- or little-endian, which pretty much proves it's not an issue for hardware these days.

    But yes, I'm absolutely with you in preferring the 68K series to the X86 series back in the day. Between the endianness, the segment registers, and the other nasties, I just refused to learn X86 assembler. Yuck.
  • codemonkey Posts: 38
    edited 2007-06-22 18:47
    Did you know that if you squint a bit, X86 is 68K backwards? A coincidence?
  • Fred Hawkins Posts: 997
    edited 2007-06-23 00:29
    CardboardGuru said...
    Obviously programmers prefer big-endian. It's easier for humans to read. But in the case where the processor has instructions to do arithmetic on data that is bigger than the data bus width, it's easier (cheaper) for the hardware to do little-endian. Because of the carry, the arithmetic has to start with the low byte, so that's the order it needs to be read from memory. And program counters work forwards, not backwards.

    That doesn't apply to the Propeller, but maybe Chip was just following the precedent set by the X86 architecture.

    The ARM chip is configurable to either big- or little-endian, which pretty much proves it's not an issue for hardware these days.

    But yes, I'm absolutely with you in preferring the 68K series to the X86 series back in the day. Between the endianness, the segment registers, and the other nasties, I just refused to learn X86 assembler. Yuck.
    What perplexes me is the sparseness (as in none that I can see) of little-endianness documentation in the assembler chapter. The docs are 100% logical view, and then the hex view shows an alternative reality. I wrote a one-line assembly program just for the sake of looking at the opcodes and
    voila! little-endianness. So my question has become whether or not the entire manual's opcode description is backwards (and peculiar when you consider status flag bit boundaries, the CON bits, and the two 9-bit-wide s and d fields).

    Logical view of ABS D,S: 101010 001i 1111 ddddddddd sssssssss
    Nibble & word view of same: 1010 1000 1i11 11dd dddd ddds ssss ssss
    Hex view: ssss ssss dddd ddds 1i11 11dd 1010 1000

    Notice how the msbit of s floats to the lsbit of byte 1. (I think.) This is why there are compilers, maybe.
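
    A test like the one-liner I mention above (my sketch: one real instruction plus a data long) lines up with that hex view:

    PUB main
      repeat                    ' just park here; we only want the hex listing

    DAT
            org   0
    entry   abs   temp, temp    ' D = S = cog register 1
    temp    long  -1

    If I've read the tables right, the logical encoding is 101010 0010 1111 000000001 000000001 = $A8BC0201, and the hex viewer lists it low byte first: 01 02 BC A8.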
  • Mike Green Posts: 23,101
    edited 2007-06-23 03:08
    CardboardGuru,
    The x86 architecture has its roots in the Datapoint 2200 instruction set of about 35 years ago, back when a processor had only a few registers, main memory consisted of serial shift registers, and where you put your data relative to your instructions was important in tight, time-dependent loops. Why it was used by Intel and carried forward to the present day is a long story.