
Reverse Bits Binary Operator >< Problem

Alex Bell Posts: 17
edited 2005-03-05 00:37 in General Discussion
Hello, All!
I have been trying to use the Reverse Bits Binary Operator as listed on page 71 of the SX-Key Blitz Manual 2.0.
The symbol for this operation is >< but I can find no examples of its use... I made a few experiments with it, which assembled, but it does NOT appear to cause a bit reversal in my target byte. Any clues out there? Alex Bell

Comments

  • Bean Posts: 8,129
    edited 2005-02-28 22:21
    If you mean to reverse the ORDER of the bits, I don't think there is an operator for that. If you want to INVERT the value of the bits, that is the "/" operator.

    MOV W,/temp

    Is the same as

    If you do want to reverse the order of the bits, the SWAP command will exchange the first 4 bits with the last 4 bits.
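
    A minimal sketch of the difference, assuming a file register named temp:

        MOV  W, /temp    ; W gets the bitwise complement (INVERT) of temp
        SWAP temp        ; exchanges temp's upper and lower 4 bits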

    edit
    Looks like the >< operator does the same thing as SWAP.

    Bean.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Check out the "SX-Video Display Module"

    www.sxvm.com


    Post Edited (Bean) : 2/28/2005 10:58:08 PM GMT
  • Alex Bell Posts: 17
    edited 2005-03-01 00:34
    Apparently the manual is in error! Have a look at p 71 if you could. The operator is listed under shift right. There is also a separate Swap command, which I have used; it is not the same as reverse bits. The operator >< DOES assemble, but then does nothing!
    I wrote a routine that DOES reverse the bits, (kinda long) but why reinvent the wheel? Thanks for your reply ! Alex
  • PJMonty Posts: 983
    edited 2005-03-01 06:54
    Alex,

    Actually it works fine. Check out the SASM documentation (page 44) for a much more detailed explanation of the bit reverse operation. The second number in the expression is the number of bits to reverse. I tested it with this bit of code:

        bitOrigVal equ %100011
        bitReverse equ bitOrigVal >< 6

    Here's the output in the list file:

        19  =00000023         bitOrigVal equ %100011
        20  =00000031         bitReverse equ bitOrigVal >< 6


    The values are in hex: $23 = %100011, while $31 = %110001. As you can see, all six bits were properly reversed. Remember that the second value is the number of bits (starting from the lsb) to reverse. Changing it from 6 to 3 gives this output:

        19  =00000023         bitOrigVal equ %100011
        20  =00000026         bitReverse equ bitOrigVal >< 3

    In this case, $26 = %100110. As you can see, the lower 3 bits got reversed, while the upper bits stayed where they were. Check the SASM docs, play with it a bit more, and I think you'll figure out what it's doing and how to use it.
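
    As a further illustration (a hypothetical constant name, not from the original post), reversing a full byte at assembly time looks like this:

        revByte equ %11010000 >< 8    ; assembles to %00001011
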
      Thanks, PeterM
  • Alex Bell Posts: 17
    edited 2005-03-01 16:17
    Hey folks! This is GREAT! Thanks to you Peter, and also to Bean, and also to Parallax for this great Forum! Alex Bell
  • Paul Baker Posts: 6,351
    edited 2005-03-01 16:21
    Ah, so >< is an SASM directive rather than an assembler function; sadly, this means the value to be reversed must be a compile-time constant. I have a need for such a function, but the value to be reversed is a variable. My variable was a 2^n value, so only one bit was 1 and I had to compute the 256/n value. I found a means of doing this involving a few shifts and swapping the nibbles, but I can't remember the precise algorithm and I don't have my notepad with me. (The function had to have a symmetric processing time regardless of the value.)
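
    For the run-time case, one constant-time approach (a minimal sketch; register names are hypothetical) is to rotate the source left through carry while rotating the destination right, eight times:

        ; reverse srcByte into dstByte; srcByte is clobbered
        mov   bitCnt, #8
    RevLoop
        rl    srcByte            ; source MSB -> carry
        rr    dstByte            ; carry -> destination MSB
        djnz  bitCnt, RevLoop    ; after 8 passes dstByte holds srcByte bit-reversed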

    Post Edited (Paul Baker) : 3/1/2005 4:39:09 PM GMT
  • PJMonty Posts: 983
    edited 2005-03-01 19:05
    Paul,

    Indeed, it is a SASM operator (which I should have pointed out in my previous post), thus requiring compile-time constants. It would be pretty cool if the chip could do it while running, but I doubt there is enough call for it to be implemented as an opcode in the average CPU. In fact, I was pretty shocked to see that the folks who wrote SASM implemented it in the assembler.
      Thanks, PeterM
  • Alex Bell Posts: 17
    edited 2005-03-02 22:08
    Hi Again!
    I have a run-time solution. First, I should say that what I am trying to do is a UART (properly a USART) virtual peripheral that takes ASCII input from a PC and formats it as synchronous bytes at the output, i.e., strip off the Start/Stop bits, put the data in RAM, and then during the output cycle read the data out of RAM MSB first instead of LSB first (as in RS-232). Well, I am using the commonly available UART VP and just left-shifting into RX_Byte instead of right-shifting it... I can see the bytes arrive in RAM through the debugger window. In the original case they are in LSB-first order; after the change above, they are in reverse order, as desired.
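
    A rough sketch of that one-line change (register name hypothetical, and assuming the freshly sampled RX bit has just been placed in carry; details vary between UART VPs):

        ; stock VP receive shift (byte ends up LSB-first):
        rr   rx_byte        ; sampled bit enters at bit 7
        ; the variant described above (byte ends up bit-reversed, MSB-first):
        rl   rx_byte        ; sampled bit enters at bit 0
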
    Now I have hit another snag... In reading the bytes out of RAM to an output pin, they do not seem to be the bytes I need at all! I have inverted them, swapped them, swapped AND inverted them; in fact, I spent two days just iterating the output and can't see why what went in so easily is coming out so strangely! May I ask your guidance again? Thank you, Alex
  • Peter Verkaik Posts: 3,956
    edited 2005-03-02 23:04
    Hi, must be a wrong bank setting.

    Try putting an explicit bank instruction at the start of your output code.
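
    A minimal sketch of that suggestion (register name hypothetical):

        bank  rx_buffer       ; select the bank that holds the buffer
        mov   w, rx_buffer    ; reads now hit the intended register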

    regards peter
  • PJMonty Posts: 983
    edited 2005-03-04 02:16
    Alex,

    As in all things involving debugging, you need to simplify the parameters to find the problem. Right now it sounds like you're taking the shotgun approach where you just try anything that might fix it and see if it does. Instead, you need to strip this thing down and find out what is really going on in a systematic fashion. I would:

    1 - Create a new little custom program that simply toggles the I/O pin you want to output on. Work on that simple piece of code until it works. (A rough sketch of steps 1 and 2 follows this list.)

    2 - Once you can toggle a bit, see if you can output a byte from a file register to the I/O pin a bit at a time. Keep at it until this works.

    3 - In your actual program, remove your current subroutine for outputting and replace it with the new dummy one. Get that to work.

    4 - Once that is working, modify your dummy program to output a data byte from the actual file register. Get that to work.

    5 - Once you can output the actual data to the actual I/O pin, add back in the code that does whatever data massaging is necessary.

    6 - You now have a working program again.
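
    A minimal sketch of steps 1 and 2 (pin, register, and label names are hypothetical):

        ; step 1: toggle rb.0 forever, so it can be watched with a scope or probe
        mov   !rb, #%11111110    ; rb.0 output, the rest inputs
    Toggle
        setb  rb.0
        clrb  rb.0
        jmp   Toggle

        ; step 2: clock a known test byte out of rb.0, MSB first
    SendByte
        mov   outByte, #$A5      ; recognizable test pattern
        mov   bitCnt, #8
    :next
        rl    outByte            ; MSB -> carry
        sc                       ; carry set? skip the clear
        clrb  rb.0               ; carry clear -> pin low
        snc                      ; carry clear? skip the set
        setb  rb.0               ; carry set -> pin high
        djnz  bitCnt, :next
        ret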

    The key is to simplify, simplify, simplify until you find the problem. I often make little dummy programs to test things out when something isn't working right. A little program that does only one thing will often let you see the problem quickly, since you'll now have a working (but non-functional) piece of code and a broken (but functional) piece of code that you can compare to each other.
      Thanks, PeterM
  • Alex Bell Posts: 17
    edited 2005-03-04 15:35
    Thank you, Peter, that is the systematic approach I need! I have been waiting for the weekend so I can do some uninterrupted cogitating! Thanks, Alex
  • KenM Posts: 657
    edited 2005-03-04 20:24
    Peter's suggestion works well. I know because I almost always need to do it, due to my awesome assembly programming skills... err, attempts at programming.

    k
  • PJMonty Posts: 983
    edited 2005-03-04 23:54
    It's funny, but debugging is something that isn't usually taught formally. Instead, it's like sex ed, where folks have to pick it up from "friends" instead of from someone with actual knowledge. I have accumulated a giant pile of debugging experience since I spent many years designing hardware and writing assembly for 6502 and 68HC11 projects.

    When I first started, I did all my debugging with a logic probe and a voltmeter. Let me tell you, when you learn how to bring up a custom CPU-based system that you designed yourself (so there's no one to turn to for help) with nothing more than a logic probe and a voltmeter, and then start writing all the software from scratch in assembly without a debugger or monitor program, you either learn how to debug or you walk away in disgust. Most of my early projects involved interrupt-driven, real-time motion control with multiple stepper motors that needed to synchronize or free-run as needed. The systems are still in use at Disney studios, and still running without my having had to do anything to them in years.

    Wait, I think my eyeballs just started bleeding again from remembering all of this... Must... put painful thoughts... out of my... mind...
      Thanks, PeterM
  • Paul Baker Posts: 6,351
    edited 2005-03-05 00:37
    My first formal teaching of debugging was in a Software Programming course in college. The bizarre thing is that it was a junior-level class, in the third year of doing labs and other programming exercises, which is amusing to me.

    But in essence the course taught the principles of top-down design: first start with paper and pencil and draw a very general flow diagram containing boxes of abstract concepts such as [acquire data] -> [process data] -> [save data]; then, starting with a new sheet, draw a flow diagram consisting of all the steps to [acquire data], and repeat the process for each box in the master diagram; then define a third level which further describes each box in the second level. The third level is the "down and dirty" level that translates directly into computer code.

    Define all intermodule communications (inputs and outputs) between modules and to the outside world.

    Next you code the top-level diagram, where each box's function is a "do nothing", and test it. Code the second level, where each box is also a "do nothing", and test it. Code each 3rd-level diagram, create a testbed (wrapper function) for each module, define test vectors (expected output given an input), and test each module in this manner in isolation from any other code. Combine all third-level modules into their respective second-level modules, create a testbed for each second-level module, create test vectors, and test it. Combine each 2nd-level module into the master program, create test vectors, and test it.
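
    A minimal sketch of that first "do nothing" pass, using the boxes from the example diagram above (labels hypothetical):

    Main
        call AcquireData
        call ProcessData
        call SaveData
        jmp  Main

    AcquireData
        ret                  ; stub, filled in at the next level
    ProcessData
        ret                  ; stub
    SaveData
        ret                  ; stub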

    It's long and it's arduous, but it always works, because like the above-mentioned methodology it limits the scope of the program at any given time, allowing you to concentrate on the task at hand. It also allows you to subdivide the project so that many programmers can work on it simultaneously (this is where the intermodule communication definition becomes important). This is the method used by almost all software companies, and our year-end project was to create an entire customer service database keeping records of customer info, sales history, accounts receivable, and records of technical and customer support. For projects on a smaller scale, such as embedded development, you can meld the 1st and 2nd tiers. I still follow this methodology, though I typically jump the gun before completely defining the second tier, but I'll always complete it before trying to assemble the entire program.

    This same principle can be applied to circuits as well: thoroughly test each subsystem in outward concentric circles and you'll almost never find yourself in one of those "I don't understand what part of the system isn't working" moments. For best results, don't use your code to test the hardware; create test programs that generally mimic your program's behavior. This eliminates the "is it the hardware or the software that isn't working" problem.

    Post Edited (Paul Baker) : 3/5/2005 12:49:13 AM GMT