Reverse Bits Binary Operator >< Problem
Alex Bell
Posts: 17
Hello, All!
I have been trying to use the Reverse Bits Binary Operator as listed on page 71 of the SX-Key Blitz Manual 2.0.
The symbol for this operation is ><, but I can find no examples of its use. I made a few experiments with it, which assembled, but it does NOT appear to cause a bit reversal in my target byte. Any clues out there? Alex Bell
Comments
MOV W,/temp
Is the same as
If you do want to reverse the order of the bits, the SWAP command will reverse the 1st 4 bits with the last 4 bits.
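For illustration, the nibble exchange that SWAP performs can be modeled like this (a Python sketch of the idea, not SX code — note it swaps the two halves whole, without reversing the bit order inside each half):

```python
def swap_nibbles(b):
    """Model of the SX SWAP instruction: exchange the upper and
    lower 4 bits of a byte. The nibbles move as whole units; the
    bit order within each nibble is unchanged."""
    return ((b << 4) | (b >> 4)) & 0xFF

swap_nibbles(0x23)  # gives 0x32
```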
edit
Looks like the >< operator does the same thing as SWAP.
Bean.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Check out the "SX-Video Display Module"
www.sxvm.com
Post Edited (Bean) : 2/28/2005 10:58:08 PM GMT
I wrote a routine that DOES reverse the bits, (kinda long) but why reinvent the wheel? Thanks for your reply ! Alex
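For reference, the usual run-time approach is a shift loop: rotate bits out of one end of the source and into the other end of the result. The original routine isn't posted, so this is just a Python sketch of that general idea, not the SX code:

```python
def reverse_byte(b):
    """Run-time bit reversal by shifting: the source shifts right,
    the result shifts left, moving one bit per iteration.
    Eight iterations for a full byte."""
    out = 0
    for _ in range(8):
        out = ((out << 1) | (b & 1)) & 0xFF
        b >>= 1
    return out

reverse_byte(0xC4)  # 11000100 -> 00100011, i.e. 0x23
```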
Actually it works fine. Check out the SASM documentation (page 44) for a much more detailed explanation of the bit reverse operation. The second number in the expression is the number of bits to reverse. I tested it with this bit of code:
Here's the output in the list file:
The values are in hex, and $23 = 100011, while $31 = 110001. As you can see, all six bits were properly reversed. Remember, that second value is the number of bits (starting from lsb) to reverse. Changing it from 6 to 3 gives this output:
In this case, $26 = 100110. As you can see, the lower 3 bits got reversed, while the upper bits stayed where they were. Check the SASM docs, play with it a bit more, and I think you'll figure out what it's doing and how to use it.
Thanks, PeterM
Post Edited (Paul Baker) : 3/1/2005 4:39:09 PM GMT
Indeed, it is a SASM operator (which I should have pointed out in my previous post), thus requiring compile-time constants. It would be pretty cool if the chip could do it while running, but I doubt there is enough call for it to be implemented as an opcode in the average CPU. In fact, I was pretty shocked to see that the folks who wrote SASM implemented it in the assembler.
Thanks, PeterM
I have a run-time solution. First, I should say that what I am trying to do is a UART (properly a USART) virtual peripheral that takes ASCII input from a PC and formats it as synchronous bytes at the output, i.e., strips off the start/stop bits, puts the data in RAM, and then during the output cycle reads it out of RAM MSB first instead of LSB first (as in RS-232). Well, I am using the commonly available UART VP and just left-shifting into RX_Byte instead of right-shifting it. I can see the bytes arrive in RAM through the debugger window. In the original case they are in LSB-first order; after the change above, they are in reverse order, as desired.
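The shift-direction trick can be sketched like this (a Python model of the receive loop, assuming bits arrive LSB first on the wire as in RS-232; `rx_shift_right`/`rx_shift_left` are hypothetical names for the two variants):

```python
def rx_shift_right(bits):
    """Standard UART receive: each new bit enters at the MSB and the
    register shifts right, so the byte ends up in normal order."""
    b = 0
    for bit in bits:
        b = (b >> 1) | (bit << 7)
    return b

def rx_shift_left(bits):
    """Left-shifting instead makes the byte land with its bits
    in reversed (MSB-first) order."""
    b = 0
    for bit in bits:
        b = ((b << 1) | bit) & 0xFF
    return b

wire = [1, 0, 0, 0, 0, 0, 1, 0]  # ASCII 'A' (0x41) sent LSB first
rx_shift_right(wire)  # 0x41, normal order
rx_shift_left(wire)   # 0x82, bit-reversed
```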
Now I have hit another snag...In reading the bytes out of ram, to an output pin, they do not seem to be the bytes I need at all! I have inverted them, swapped them, swapped AND inverted them, in fact, I spent two days just iterating the output and can't see why what went in so easily is coming out so strangely! May I ask your guidance again? Thank you, Alex
Try putting an explicit bank instruction at the start of your output code.
regards peter
As in all things involving debugging, you need to simplify the parameters to find the problem. Right now it sounds like you're taking the shotgun approach where you just try anything that might fix it and see if it does. Instead, you need to strip this thing down and find out what is really going on in a systematic fashion. I would:
1 - Create a new little custom program that simply toggles the I/O pin you want to output on. Work on that simple piece of code until it works.
2 - Once you can toggle a bit, see if you can output a byte from a file register to the I/O pin a bit at a time. Keep at it until this works.
3 - In your actual program, remove your current subroutine for outputting and replace it with the new dummy one. Get that to work.
4 - Once that is working, modify your dummy program to output a data byte from the actual file register. Get that to work.
5 - Once you can output the actual data to the actual I/O pin, add back in the code that does whatever data massaging is necessary.
6 - You now have a working program again.
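Steps 1 and 2 above might look like this in outline (a Python sketch with a stand-in `set_pin` callback, since the real code drives an SX I/O port register):

```python
def shift_out_msb_first(byte, set_pin):
    """Step 2 in outline: clock a byte out one bit at a time,
    most significant bit first. set_pin stands in for whatever
    routine actually drives the I/O pin."""
    for i in range(7, -1, -1):
        set_pin((byte >> i) & 1)

seen = []
shift_out_msb_first(0x41, seen.append)  # 'A' -> 0,1,0,0,0,0,0,1
```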
The key is to simplify, simplify, simplify until you find the problem. I often make little dummy programs to test things out when something isn't working right. A little program that does only one thing will often let you see the problem quickly, since you'll now have a working (but non-functional) piece of code and a broken (but functional) piece of code that can be compared to each other.
Thanks, PeterM
When I first started, I did all my debugging with a logic probe and a voltmeter. Let me tell you, when you learn how to bring up a custom CPU-based system that you designed yourself (so there's no one to turn to for help) with nothing more than a logic probe and a voltmeter, and then start writing all the software from scratch in assembly without a debugger or monitor program, you either learn how to debug or you walk away in disgust. Most of my early projects involved interrupt-driven real-time motion control with multiple stepper motors that needed to synchronize/free-run as needed. The systems are still in use at Disney studios, and still running without me having to do anything to them in years.
Wait, I think my eyeballs just started bleeding again from remembering all of this... Must... put painful thoughts... out of my... mind...
Thanks, PeterM
But in essence the course taught the principles of top-down design. You start with paper and pencil, drawing a very general flow diagram containing boxes of abstract concepts such as [acquire data] -> [process data] -> [save data]. Then, starting with a new sheet, you draw a flow diagram of all the steps to [acquire data], and repeat the process for each box in the master diagram. Then you define a third level which further describes each box in the second level; the third level is the "down and dirty" level that translates directly into computer code.
Define all intermodule communications (inputs and outputs), both between modules and to the outside world.
Next you code the top-level diagram, where each box's function is a "do nothing"; test it. Code the second level, where each box is also a "do nothing"; test it. Code each 3rd-level diagram, create a testbed (wrapper function) for each module, and define test vectors (expected output given an input); test each module in this manner in isolation from any other code. Combine all third-level modules into their respective second-level modules, create a testbed for each second-level module, create test vectors, and test it. Combine each 2nd-level module into the master program, create test vectors, and test it.
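In outline, that stub-first coding order might look like this (a hypothetical Python sketch; the box names come from the example diagram above, and the bodies are trivial stand-ins for real stubs and their later implementations):

```python
def acquire_data():
    """Level-2 box: starts life as a do-nothing stub, later
    replaced by the real implementation. A fixed test vector
    stands in for real input here."""
    return [3, 1, 2]

def process_data(data):
    """Level-2 box: placeholder processing for illustration."""
    return sorted(data)

def save_data(data):
    """Level-2 box: a stub that just hands the data back."""
    return list(data)

def main():
    """Top-level diagram, coded and tested first:
    acquire -> process -> save."""
    return save_data(process_data(acquire_data()))
```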
It's long, it's arduous, but it always works, because like the above-mentioned methodology it limits the scope of the program at any given time, allowing you to concentrate on the task at hand. It also allows you to subdivide the project so that many programmers can work on it simultaneously (this is where intermodule communication definition becomes important). This is the method used by almost all software companies, and our year-end project was to create an entire customer service database keeping records of customer info, sales history, accounts receivable, and records of technical and customer support. For projects on a smaller scale, such as embedded development, you can meld the 1st and 2nd tiers. I still follow this methodology, though I typically will jump the gun before completely defining the second tier, but I'll always complete it before trying to assemble the entire program.
This same principle can be applied to circuits as well: thoroughly test each subsystem in outward concentric circles and you'll almost never find yourself in one of those "I don't understand what part of the system isn't working" moments. For best results, don't use your code to test the hardware; create test programs that generally mimic your program's behavior. This eliminates the "is it the hardware or the software that isn't working" problem.
Post Edited (Paul Baker) : 3/5/2005 12:49:13 AM GMT