debugging a PLL timing issue
Keith M
Posts: 102
Hi.
This should sound familiar to those regulars who have good memories --- this is an ongoing project.
I apologize for my verbosity to begin with.
I'm using an SX28 at 50 MHz to build a floppy drive controller. The drive spits out synchronous serial MFM data, one bit at a time, at 500 kbps (one raw bit every 2 us) with no start/stop info, flow control, etc. I have written some PLL software within the SX (using SX/B, please don't stone me) to recover the clock and read the data. I'll attach the current copy of the code, which has undergone many rewrites/revisions over the last few months.
This is partially working, so I know my general idea is on track, but it needs some debugging, tuning, etc. to get it to a usable level.
The SX needs to read the bits, shift them into a byte, and when it's full, send the byte via parallel to a PC. The PC handles the MFM decoding, and this part is working. The SX-to-PC transfer is pretty simple, and works reliably.
Even if you don't know SX/B, my comments should help make the code pretty readable.
My main routine does very little: it waits for all 8 bits to be stored by the ISR and then transfers the byte to the PC. It doesn't "block" waiting for an ACK from the PC -- the main loop simply keeps checking and ACKs the ACK whenever needed.
My ISR is where all the action happens: it's triggered by either a falling edge or an RTCC timeout. If an edge is received, that fact is noted, and a "1" bit is shifted into the byte.
The format of the received data is always a "1" bit + (1-3) zeros, and a zero is a high state (5 V). The idle state is also the high state, and the only way you know you are idle is if you haven't seen a one in the last few bits.
If an RTCC timeout occurs (which ideally should happen in the middle of the next bit cell), and a one has recently been seen (last 3 bits), then a "0" bit is shifted into the byte; otherwise we're idle, and this can be ignored.
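For what it's worth, the decision logic above can be sketched as a plain simulation (Python here just to make the rules concrete; the event names and the `idle_window` parameter are illustrative, not anything from the actual SX code):

```python
# Hypothetical model of the ISR decision logic: an "edge" event shifts in
# a 1; an RTCC "timeout" event shifts in a 0 only if a 1 was seen within
# the last few bit cells, otherwise the line is considered idle.

def recover_bits(events, idle_window=3):
    """events: sequence of 'edge' / 'timeout' strings, one per bit cell."""
    bits = []
    since_one = idle_window  # cells elapsed since the last 1 (start idle)
    for ev in events:
        if ev == "edge":
            bits.append(1)
            since_one = 0
        else:  # RTCC timeout in the middle of the cell
            since_one += 1
            if since_one <= idle_window:
                bits.append(0)
            # else: idle line, ignore the timeout
    return bits
```

The sketch makes the failure mode visible too: if an edge is missed, the timeout path records a spurious 0 and the run of bits shifts out of step.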
2 us * 8 = 16 us (+/- 10%, because it's a spinning floppy disk) for one whole byte. The PC takes about 8 us to notice we have a byte ready, read the byte, store it in RAM, and notify the SX we've received it. It's completely ready for the next byte by 9.x us, so the PC can keep up just fine.
My current code does function, but it misses large numbers of sectors, and screws up the bulk of the data in the sectors it does receive. It falls in and out of sync with the data, and even when it's in sync, it only stays that way for perhaps a couple hundred bits. I think my longest successful run of correctly received bits is about 800 bits or so.
How do I debug this timing issue? I would bet that my RTCC timeout is occurring during an edge, and the software misses the edge (and hence misses the next couple "0" bits too). The whole problem here is that I have no visibility of the errors, because I can't capture them on my dual-channel storage scope --- I'm limited, AFAIK, to saving one screen's worth of data --- less than 10 bits. I "can't" capture them because they are happening less frequently now.
I set a bit when I enter the ISR and clear it when I leave, so the input data is on one channel and the ISR triggers are on the other. When I look at the scope, I can't find an error --- everything looks kosher. Before, when my timing was COMPLETELY off, I could plainly see the RTCC firing during an edge.
How do I avoid these interrupt conflicts? I've never used the debugger and don't know how to (although I've read all the associated docs on the SX software), and generally wouldn't know what to look for. I can read assembly, especially when it's my SX/B code that's been converted --- but I'm not sure how this would help.
My blog, link here, repeats some of this information, provides more background info, and generally displays the frustrations of a confused hobbyist. :)
Thanks for any advice.
Keith
Post Edited (Keith M) : 6/26/2005 6:07:14 AM GMT
Comments
Here come some general comments on RTCC roll-over interrupts and edge-triggered interrupts: you can never avoid that both interrupts occur (almost) at the same time, so it may happen that you miss one of them. You confirmed that you saw the RTCC firing during an edge. Using both types of interrupts in one program is always a problem with the SX because it only has one interrupt priority and therefore does not allow for multi-level interrupts.
Maybe, this idea helps: I would just allow for RTCC roll-over interrupts. Within the ISR, you do a
mode $09
clr w
mov !rb, w
in order to read the WKPND_B bits into w and to clear the bits in WKPND_B at the same time. Actually, the mov !rb, w exchanges the contents of both registers.
Assuming that there was an edge on a port B pin since the last ISR call, the corresponding bit in the WKPND_B register (or in w after the mov) will be set, so the ISR can react accordingly. Note that the bits in the WKPND_B register are set on positive or negative edges according to the configuration of the WKED_B register, no matter if edge interrupts are enabled or not. So, using this method, you should not miss any edge on the port B pins, but you don't catch the exact time when the edge occurred. In the worst case, you will detect a new edge only after one interrupt period has elapsed. Calling the ISR relatively often should make this error small enough that it does not matter. As the main routine does not have much to do, it is OK if the ISR "steals" cycles quite often.
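To make the read-and-clear idea concrete, here is a tiny simulation (Python, with illustrative names; it models only the latch behavior described above, not verified SX register semantics): the pending flag is set by hardware on any qualifying edge, and the ISR swaps it out atomically, just as `mov !rb, w` with w = 0 exchanges W and WKPND_B.

```python
class EdgeLatch:
    """Models a WKPND_B-style pending bit: set on an edge, cleared on read."""

    def __init__(self):
        self.pending = 0

    def edge(self):
        # Hardware side: any qualifying edge sets the pending bit,
        # whether or not edge interrupts are enabled.
        self.pending = 1

    def read_and_clear(self):
        # ISR side: exchange with 0, like "clr w" followed by "mov !rb, w".
        p, self.pending = self.pending, 0
        return p
```

Note that two edges inside one interrupt period merge into a single pending flag, which is exactly the loss of timestamp precision mentioned above.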
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
I was just re-reading Unit 7: Interrupts in "Exploring the SX" where it describes the "polling" method just as you described it.
And it sounds like a great idea, but when I started to try and code it I came up with a couple problems, namely
The way my program is structured, where main waits for 8 bits and then sends them, really only allows the ISR to add one bit at a time. This is because once it gets to 8, main *must* have a time slice to send the byte to the PC before the ISR adds the next bit. The reason this is a problem: imagine I miss an edge and don't catch it until the next cell; now I have to store a "1" for the edge AND a "0" for the current timed-out high.
So you say, increase the frequency of the ISR so that this doesn't happen, BUT the PC takes roughly 9+ us (for safety, say 10 us) to process 8 bits, so I really can't store more than one bit every 10 us / 8 bits = 1.25 us. And that's pushing it. That would mean going from 16/8 = 2 us down to 1.25 us per bit.
So then I thought of "oversampling" by having the ISR fire more often than bits get stored, and that's OK, but then I run into a problem where I need a fair amount more code to handle it --- which works against me, because the ISR is firing more often, so the code needs to be smaller.
I did not realize, until I re-read that section and you pointed it out, that you could detect edges without tripping the ISR -- that is very neat.
Any ideas? Are there workarounds for these, given my situation?
Thanks!
Keith
To be honest, I did not look into your posted code so far; therefore, I said that I wanted to give some general comments in my last post. Let me first study your code and learn a bit more about the timing of the drive's MFM signal. Maybe I'll have an idea then.
For now, I think you should consider to let the ISR do the job of shifting the eight bits into a register (like a serial UART receiver). When the eight bits are complete, the ISR sets a flag for the main program that it is time to fetch that byte. The main program then reads the byte, transfers it to the PC, and finally clears the flag again.
Another idea for reading the MFM data: Usually, the WKED_B register is configured at start-up for rising or falling edges but there is no reason why you can't change that definition within the ISR. You might first define the edge bit for the MFM signal for a falling edge (to catch the "1"). Then - when the ISR has detected one, you re-define it for a rising edge (to catch the "0"). After a "0" has been detected, you toggle that bit again for the next "1". I'm not sure but it might be necessary to clear the WKPND_B register after changing a bit in the WKED_B register to avoid false results.
BTW, I use the "edge-detect feature w/o interrupts" in some of my SX-based motor controllers to count tacho pulses, and to check for various switches, like for end-positions, and reference points, and this works just fine, because there is no need to poll an input "just in time" to not miss an edge. The only trick is that you need to poll WKPND_B at a rate that is higher than the highest expected pulse rate on an input. Thanks to the speed of the SX, this usually is an easy task.
As I said before, give me some time to have a look at your code, and I'll be back here again.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
Yeah, that's basically how my program works. The ISR shifts them in, and increments a byte variable for each bit; when main detects that 8 bits are ready to go, it ships the byte to the PC. There's no real need to notify the ISR that the shifter variable is ready to be written again, because the drive just keeps pumping data; the ISR *has* to be ready to write a bit at any time (well, at least one bit per 2 us). Since main is so small, grabbing the byte and sticking it on the port takes something on the order of hundreds of ns at most --- and would happen as soon as the ISR exited from processing the 8th bit.
Thanks for taking a look.
Keith
In the meantime, I have taken a look at your program, but I could not find the reason for your problems so far. Maybe it's really caused by the two interrupt sources (edges and RTCC).
I'm thinking of another method, where the ISR acts as a timer, and the main program polls the WKPND_B register at a high rate. When an edge is detected, the necessary action is taken, the ISR timer is cleared, and the RTCC is re-initialized to the interrupt period. This way, the timer can be synchronized on the most recent edge.
Before digging deeper, I would like to know if my understanding of the MFM signal and its timing is correct (see the attached text file)?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
Don't get too wrapped up in MFM and the decoding. My application with the Amiga makes this sort of non-standard MFM, so it's easy to get off track. Trust me, I've spent months on this! :)
In every 2us bitcell, one of two things happen, either
a: you get a negative-going pulse at the beginning of the cell that lasts about 500 ns, and then slowly transitions to 5 V by the time it reaches the end of the 2 us cell. I've turned on the Schmitt trigger so the SX sees a larger negative pulse width. I don't think it matters much for edge detection anyway. This indicates a raw MFM "1"
b: you get a high 5 V, straight-lined, no-change flat signal across the entire 2 us bitcell. This indicates a raw MFM "0"
Now, the idle state is normally HIGH, so data is always started by a negative-going "1" pulse, and then the number of high 2us bitcells before the next pulse.
The three raw MFM data possibilities are
10 -- one negative pulse followed by one high
100 -- one negative pulse followed by two highs
1000 -- one negative pulse followed by three highs
This is all RAW MFM -- IGNORE what the actual data bits are. (If you care to know, ODD and EVEN data bits are interleaved with clock bits but are separated by half a sector. This means you get all the odd bits first, and then all the even bits.) This is handled in software on the PC, and the SX application doesn't know or care.
Your text file is wrong because it shows TWO raw MFM bits in one 2 us cell. There is ONE raw MFM bit per 2 us cell. I've attached a picture (mfmhardwaredecode.jpg) that shows the three different possibilities along with the appropriate decodings. Also, I've attached one (amigafloppytrace) that shows 10 raw bits. From left to right, in raw MFM it reads "1010101001". The leftmost negative-pulse "1" is chopped off a little, as is the rightmost negative-pulse "1."
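Given that description, the SX's whole job reduces to classifying the spacing between falling edges. A sketch of that classification (Python; the function names are made up and the decision thresholds are my guesses at sensible midpoints between the nominal spacings, not measured values):

```python
def bits_for_spacing(spacing_us):
    """Classify the gap between two falling edges into a raw MFM group."""
    if spacing_us < 5:    # nominal 4 us gap -> "10"
        return [1, 0]
    elif spacing_us < 7:  # nominal 6 us gap -> "100"
        return [1, 0, 0]
    else:                 # nominal 8 us gap -> "1000"
        return [1, 0, 0, 0]

def decode_edges(gaps_us):
    """Rebuild the raw bitstream from successive edge-to-edge gaps; the
    final edge contributes a trailing 1 with no gap after it."""
    bits = []
    for g in gaps_us:
        bits += bits_for_spacing(g)
    return bits + [1]
```

As a check against the attached trace: "1010101001" corresponds to edge-to-edge gaps of 4, 4, 4, and 6 us, and `decode_edges([4, 4, 4, 6])` reproduces exactly those ten raw bits.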
I hope this is clear, shoot me a message if you need more detail.
Thanks.
Keith
OK, I think I got you - the main task is to check if another falling edge occurs after 4, 6, 8 µs, or never at all.
I have attached some sample code (sorry, it is SASM because I'm not too much experienced in SX/B to figure out the correct timing). This code is - by no means - complete, or tested - it should just give you an idea for a different approach. (As the Forum software did not allow me to send the .SRC file, I had to pack it into a ZIP).
The ISR in this code is only triggered on RTCC roll-overs every 0.5 µs. Normally, it simply increments a timer counter (FrameTimer). Besides this, it also takes care of sending a byte to the PC when one is completed.
The Main code runs through a loop like hell. With SXSim, I measured a loop time of 0.48 µs when the ISR is invoked in between. Within this loop, the code polls the WKPND_B register for new falling edges. When it has detected one, it branches according to the current value in FrameTimer which is incremented by the ISR. I did not add the code to handle the actions for 4, 6, 8, or > 8 µs (just comments instead). The tricky part is the end of the main code, where FrameTimer is cleared, and then the RTCC is re-initialized. To be honest, I never tried this but I think this should work to synchronize the FrameTimer on the most recently detected signal edge.
It might be necessary to increase or decrease the IntPeriod constant by some value in order to "hit" the next edge correctly, and for synchronization.
BTW:
In your original code, I found the following sequence:
' process a high, 5v, non-edge non-idle signal
' shift left, store a 0, and increase storedbits
storedata = storedata << 1
storedata = storedata | 0
The
storedata = storedata | 0
instruction does NOT actually clear the LSB in storedata as the "| 0" means "OR 0" which changes nothing.
To clear this bit, you should code
storedata = storedata & %11111110
but this is not really necessary because for the "<<" operator, SX/B generates a CLC, and then a RL instruction, i.e. the LSB is automatically cleared as the RL shifts in the cleared carry flag there.
On the other hand, the sequence
storedata = storedata << 1
storedata = storedata | 1
is absolutely correct because SX/B shifts in a 0 when coding the "<<", and your "| 1" finally turns this bit on.
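The point is easy to sanity-check outside the SX (Python standing in for the 8-bit SX/B semantics; `shl8` is just an illustrative helper, not anything from the posted code):

```python
def shl8(x):
    # SX/B's "<< 1" compiles to CLC + RL: the LSB always comes in as 0.
    return (x << 1) & 0xFF

# "| 0" after the shift changes nothing; "| 1" is what records an edge.
```

So `storedata << 1` already stores the "0" bit on its own, and the `| 1` variant is the only OR that actually does work.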
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
Günther
Post Edited (Guenther Daubach) : 6/27/2005 9:56:48 PM GMT
Thanks for the code. I've looked through it several times, and I understand how it works. You aren't the first person to suggest doing it this way, and I'm beginning to think this is a much superior way of handling it. I guess dealing with two interrupt sources is just too big a pain in the butt when there are other methods for accomplishing the same thing. It's easy to get one-track-minded --- and half the reason I posted was to get another viewpoint on my problem.
I do wonder, though, with your code, how you would deal with the fact that your Test4, Test6, and Test8 are writing multiple bits at a time to the shift register? What happens, for instance, if the shift register already has 6 bits in it (from two three-bit sequences (i.e. 100), perhaps) and you need to store another 3 or 4 bits instantaneously? You could implement some sort of FIFO bit array to deal with it --- but then things become a little more complicated.
With my code, the ISR only adds one bit at a time, and this is only once every 2us. So, when I get to 8 bits, I move the shift register over to the port, and then tell the PC the byte is ready. This takes a very small amount of time, and the shift register is cleared and ready for use very soon after -- the PC doesn't have to acknowledge the byte until up to 16us later. So there's always room for one bit.
Thanks for correcting my code -- I guess if SX/B didn't implement the line the way you describe, then I probably would have found it already :) I've received some valid data, so I knew it was acting the way I needed it to; it just wasn't getting done the way I THOUGHT it was being done. :) Try saying that twice in a row, backwards. :)
Thanks.
Keith
As I commented in the code, I did not care about the actions in Test4, Test6, and Test8. I think I now finally got the idea, as so far it was not clear to me that actually two (10), three (100), or four (1000) bits are to be shifted in.
Well, I don't think it is too complicated. I would use a 16-bit shift register, SRH and SRL, and always set SRL to %10000000 in the main program after an edge has been detected. Then you would use two variables, say BitsToShift (this must be set to 2, 3, or 4 by the main program), and NumStoredBits (initialized to 8, like in my sample code).
The ISR would have to handle the shifting now:
Test if BitsToShift is 0; if so, jump to :ISRExit. Else, do a shift like
rl SRL
rl SRH
The initial state of the carry does not matter here, because it goes into SRL's LSB which is never shifted out as the maximum of shifts is four, so there is no need for an initial CLC.
After having done a shift, NumStoredBits is decremented. When it reaches 0, the sequence after :TestSendByte in the sample code is executed to send the byte to the PC, and NumStoredBits is re-initialized to 8.
Next, BitsToShift is decremented. When it is > 0, execution loops back to rl SRL to do another shift.
Note that the execution time of the ISR now varies because the loop may be executed two, three, or four times, and some extra cycles are required when a byte is ready to be sent to the PC. Nevertheless, as the code to handle the FrameTimer is executed before the shift/send stuff, it still generates an exact time base.
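A quick model of that loop (Python; the state names mirror the SASM variables described above, but this is only my reading of the scheme, not the actual sample code):

```python
# Hypothetical simulation of the ISR shift logic: a 16-bit shift register
# (SRH:SRL), BitsToShift set to 2, 3, or 4 by the main loop after an edge
# (with SRL preset to %10000000), and NumStoredBits counting down from 8.

def isr_shift(state):
    """state: dict with srh, srl, bits_to_shift, num_stored.
    Returns the byte sent to the PC this tick, or None."""
    sent = None
    while state["bits_to_shift"] > 0:
        # rl SRL / rl SRH: the MSB of SRL carries into the LSB of SRH
        carry = (state["srl"] >> 7) & 1
        state["srl"] = (state["srl"] << 1) & 0xFF
        state["srh"] = ((state["srh"] << 1) & 0xFF) | carry
        state["num_stored"] -= 1
        if state["num_stored"] == 0:
            sent = state["srh"]          # byte complete: hand it to the PC
            state["num_stored"] = 8
        state["bits_to_shift"] -= 1
    return sent
```

Feeding it the groups "10", "100", "100" assembles %10100100 and emits it exactly when the eighth bit arrives, which is the behavior the prose describes.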
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
I forgot to attach the modified sample code to my last post, so here it is...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
I know you said this code isn't complete --- and I generally get the idea behind it. I'm planning on taking your suggestion and coding something in SX/B shortly.
I noticed that you "mov OutputByte, StoreData", but StoreData never gets assigned or populated. I'm guessing that you'd need a "mov StoreData, SRL" or "mov StoreData, SRH" at some point, right?
I know it's just me coming from C, but not having any high-level constructs in ASM sure makes the code harder to read. I know I'm just restating the obvious, but ASM just seems sooo wordy. I study the code for 5 minutes, and I'm like, "Duh. This is the exit condition for the loop." Loops (whether it's for, while, do..while, etc.) in C are so native and so atomic that there's really no thought when writing them.
Is there any tricky business happening with the code segment below?
Maybe I'm just overthinking this. I know they are both shifting left, but because they are both adjacent in memory, you aren't doing anything crafty like moving the bits from one byte to the other, right?
For people who haven't seen the code, they are defined
For timing, I know the ISR has to take less than 0.5 us, but how much less, in order to leave enough spare time for the main program? Obviously, main has to run once per 2 us in total.
When I posted my original query about 3 months ago, Jim Palmer suggested a similar setup which is encouraging.
Thanks.
Keith
Regarding the "rl" command, have you looked up the documentation on the instruction? If you do, you'll be able to figure out the answer to your question. "rl" is documented on page 139 of the SX-Key Manual.
With regards to your background of coming from C, remember that while the constructs may appear native and atomic, in fact they have to be decomposed into a bunch of assembly language instructions just like these. If your C compiler/environment allows, try enabling the display of the assembly language that the compiler generates while you're debugging. Very enlightening to see what is really happening under the hood.
You also mention that you've never used the debugger feature of the SX-Key. Do you use one when you program on the PC or the Amiga? The idea of wasting a capability as powerful as hardware enabled on-chip debugging with breakpoints and single stepping in an embedded environment is practically criminal. You mention that you don't know how to use it, but I guarantee that you'll never learn if you never try. How did you learn to program? Surely you had never done it before, but you persevered and now you know how. Does it not make sense that the same applies to using the debugger?
Thanks, PeterM
You are right, I did not send you complete or tested code. So I missed replacing the variable name in the line
mov OutputByte, StoreData
it should read:
mov OutputByte, SRH
StoreData is a relic of my first sample, where I tried to use the same variable names found in your SX/B code.
Well - although the sequence
rl SRL
rl SRH
looks mysterious, it works as expected (even when both registers are not adjacent in memory) because, unlike the << operator in SX/B, the rl instruction rotates left through the carry flag. I.e., the first rl SRL rotates the current MSB into the carry flag, and the next rl SRH rotates the carry into its LSB; IOW, the carry flag is the "magic link" here.
"Purists" would clear the carry before each rl SRL to rotate in "clean" data into SRL. As the maximum number of rl instructions per loop is 4, the initial state of the carry does not matter here because it never "arrives" in SRH.
I agree with you that C code is more structured, or IOW, the language itself forces structured coding (to a certain extent - even in C you could write "spaghetti code"), where assembly allows you to do almost whatever you like. No question - you can (and should) write well-structured code in assembly as well. You can always construct a subroutine with one entry point and one exit point. On the other hand, when speed and code size matter, placing several ret instructions in one subroutine whenever you want to bail out, instead of jumping to one exit point, costs fewer instructions and so results in faster execution speed. I know we are talking about some 20 ns instruction cycles here, but an application like deciphering MFM raw data can drive the SX to its limits.
Concerning timing, I'd like to crunch some numbers before giving an answer because it is a bit tricky here. In most applications using RTCC roll-over controlled ISRs, on ISR exit, RTCC is initialized to the required interrupt period minus the instruction cycles taken by the ISR instructions themselves. This is why the RETIW instruction adds W to the RTCC. This is the case here too, while the main loop is waiting for another MFM edge. If it found one, RTCC is re-initialized to -IntPeriod in the main loop in order to "phase-lock" the ISR timer to the new edge.
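The raw numbers behind that discussion (assuming the 50 MHz clock from the first post, i.e. one instruction cycle per clock on the SX, so 20 ns per cycle; this is just arithmetic, not measured timing):

```python
CLOCK_HZ = 50_000_000                     # 50 MHz -> 20 ns per cycle
CYCLE_NS = 1_000_000_000 // CLOCK_HZ

def cycles_in(period_us):
    """Instruction cycles available in a given period."""
    return int(period_us * 1000) // CYCLE_NS

# A 0.5 us interrupt period leaves 25 cycles to split between the ISR
# and the main loop's polling slice; one 2 us bit cell spans four such
# interrupt periods.
```

That 25-cycle budget is why every instruction saved in the ISR matters here.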
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
Right. Of course. My point simply was that when you are programming in these higher-level languages, the abstraction hides the low-level details to the point where they become practically part of the backdrop. Simple, easy, clean, no thought required. Just an observation now that I'm working more low-level.
I've used a debugger on the PC in Windows. At least a shareware debugger (not Microsoft's source-level one) at the assembly level. Looking at Visual C++ Windows code in assembly is u*g*l*y. Trying to figure out high-level structures from straight disassembly is disgusting. No comments. No real labels, just address labels.
I've written C/C++/Java on a bunch of platforms like Unix (every flavor, at least twice :) ), Windows, Palm, Amiga, etc. Plus I've got a new background in HTML/JavaScript/Perl/PHP/MySQL/shell scripting, etc. I'm fairly well versed in most of the programming/scripting languages out there -- enough to write the small applications I do. I'm afraid my assembly experience is limited mostly to DISASSEMBLY --- usually reverse-engineering applications. Which, yes, is very hard without knowing assembly well! :) I've had all the theory, and I understand at a basic level what's going on, but it's all in the details, or so they say.
Several people have said that I'm missing the boat by not using the debugger in Windows, etc. I'm not so sure that it's a tool I *need*. I've probably written thousands(?) of programs across 15 years without a debugger -- and they've worked well. They are all non-commercial small apps, admittedly, but not necessarily without their own complexity. Encryption, network apps (including mobile CDPD apps), serial RS-232 apps, camera PTZ, backup SCSI tape --- really a wide gamut of applications.
Incidentally, I never said I'm not going to try the debugger. And actually, all this talk about them is making me want to go play with them. And I'm not trying to say that I don't think it could be a useful tool, either. I'm just saying that I haven't felt "limited" or "handicapped" without one.
Thanks for the words of encouragement.
Keith
I encourage you to learn the debugger and use it. I also encourage you to buy this book...
www.amazon.com/exec/obidos/tg/detail/-/1556155514/ref=pd_sxp_f/104-8604446-1694314?v=glance&s=books
...which will explain the benefit of using a debugger in more detail than I have time to here. In a nutshell, the key is to single step through every single line of code you write. Yes, that's right. Every line needs to be stepped through to verify that everything is behaving like you think it should. Everyone always freaks when I tell them this, and they always have a million reasons for why they don't need to do it, or that it's a waste of time, or that they've been programming for years and don't need to. All I can say to them is read the book, try the technique, and then we'll talk.
Thanks, PeterM
I must agree with Peter - ever since some geniuses invented debuggers, I've been using them on various platforms and together with several languages, like ASM, C, C++, C#, VB, SX-Key IDE, etc.
When I started programming, I did it with punched cards, coding PL/1 programs for an IBM 360 mainframe - no debugger around - maybe a job error list in the locker a couple of days after leaving the stack of punched cards there. Well, this had one advantage - it made you "think before punch".
Now that desktop systems are so fast (really?), it is easy to hack in some code, "see how it works, and debug it - if necessary".
When I got my first microprocessor system, a KIM-1 with a 6502, 1 K of RAM, a hex display/keyboard and an audio cassette interface, I had to do all my assembly on paper, generating the hex code to be "push-buttoned" into that machine. It was really a PIA to re-calculate all the jump/call addresses when there was a need to insert another instruction somewhere in the code. You can be sure, this "made" you at least "triple-check" your code before doing a "hand-assembly", and "buttoning it in" .
This is why I "love" Assemblers, Compilers, and Debuggers.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
You know, the program worked flawlessly, which, given checksums etc., means we must have gotten 100% of the program exactly perfect, or real close.
I had an Amiga where everyone else (except for the local Commodore Users group, which was pretty big at the time) was using a PC, or perhaps an Apple or Atari. I found that by having "non-standard" stuff, I learned so much more because solutions were never cut-and-dry, or correctly packaged specifically for the amiga. It always required a non-standard cable, or some special shareware software download from a BBS. You had to "hack" things(hardware, software, and otherwise) to get them to work properly.
Fun times.
Keith
Following the KIM-1, I bought an SWTP 6800 machine (South West Technical Products). This one had 16K of RAM, a "Kansas City" cassette interface, and 8-bit parallel and serial RS-232 interfaces. It already had a motherboard using the SS-50 bus (Smoke Signals). Later, I added two 5 1/4" floppy drives. The OS was called "Flex". Too bad that I put the OS diskette too close to the floppy unit's mains transformer - so there was no Flex any longer. Fortunately, I got a new one from my dealer.
As a matter of fact, I once owned the full line of Tandy computers: TRS-80 Model I, Model III, Model IV, the "big" Model II, the "small" Model 100, and the CoCo II. For Tandy Germany, I translated the CoCo manuals into German.
When I started with Model I, I also bought a used IBM Selectric ball-head typewriter with integrated solenoids, so it could be used as a printer. To convert from Centronics to IBM Selectric, I used the good old SWTP 6800 machine as an interface. Because this equipment was located in a room next to the living room, my wife was close to killing me more than once when this machine started "hammering".
"Talking" about 8K of hex code via the phone is really unusual. I remember, typing in endless hex codes I found published in computer magazines into these vintage machines, but I think I never ever got a program running on first try. Only after double- or triple-checking the typed in code, and fixing all the errors I had made, I sometimes was lucky. Maybe having someone reading you the code while you type it in is better than constantly switching between the keyboard and the printed media.
Yes, I agree - those were fun times. Maybe one day I'll write an SX application to simulate a 6502 - that should be no problem speed-wise.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
I used that terminal program to download a much better, fuller-featured version. Who knows if all the features of the original one worked, but it worked well enough to download something else. We were very careful to get it perfect, and I repeated each and every byte back. I also remember reading each entire line back to make sure it was perfect, line for line. It was *very* tedious.
I do remember a single minor error in the text, so the entry display screen had a typo. But who knows. There were probably other errors, but thank god I never needed to execute a screwed-up piece of code.
Keith
And thanks for the clear explanation of how RL uses/affects the C flag. Peter is right, the write-up in the SX manual was pretty clear --- but every little bit helps.
I'm going to start to construct my version of the MFM reader....
Keith
Peter, welcome to the club - so far, it seemed to me that this was a private thread just between Keith and me.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
I've been interested in this thread for a bit, but had not read through all the stuff on it (still not all). Unless I am missing something, this seems like a rather straightforward matter of using a combination of adjustable RTCC timing and edge detection.
So I wrote a program to do what I believe is to be accomplished, and in my simple-minded testing it worked great.
I've put in lots of comments, so it should be easy to follow. Please try it and see if I got it right for you. Use the SX-Key in debug mode to single-step through it with simulated (push-button) operations for the pretend pulses.
Cheers,
Peter (pjv)
PJV: I modified your main routine because the logic was reversed; my PC software raises PCAck high when it has received a byte. I think I did this right.
If you are interested, it produced all zeros. And I can tell by the rate at which it was transferring to the PC that it was too slow --- more data was coming into the SX than going out towards the PC. The communication to the PC appears to be working right; I switched the logic and it seemed to talk to my software OK.
But listen (directed to everyone) --- I now have 3 or 4 programs written by different people. And I appreciate it. But here's the problem: there are like 4792384 different approaches, and everyone is taking a slightly different one. I rewrote Guenther's code in SX/B, but that has yet to pan out. I'm sure it's my fault --- I mean, it's my code that's the problem.
BUT --- from my original message: I have written code myself that does this already -- I've attached the latest and greatest version. My code is clean, short, and simple. And more importantly, it's WORKING. I'm probably getting about 10% of the data accurate, but the logic behind it is RIGHT. The original reason for my post was to get ideas on HOW to debug my program.
I know it's a timing problem. The values I guessed for the code can't possibly be right, and I have no idea why the value I picked works when it shouldn't. I'm doing a
which ends up producing
That. And I've tried, with trial and error, to adjust that 75 to lower values and higher values. None of them produce anything even close to the same results.
Is there something so fundamentally wrong with my code that I should completely switch to a different way of writing this? I've learned that writing code with two interrupt sources is harder than writing code with one. I get that. But my code is close.
What I need to know is
1. Should I be clearing RTCC every time I enter my ISR? If I change it to clear only when I detect an edge, with the idea of resyncing the rollover on an edge, it produces garbage. Should I be setting it to zero, or some other value? What value?
2. What should I be using for returnint? Or, to look at it another way, what value should W contain before RETIW?
I know there should be a "calculated" value, i.e. look at how many cycles my ISR takes, how often data comes, etc. etc., then add them up and divide by PI, and then I have my number. And certainly, this will deviate from real life by +/- 10% or something, which can be adjusted via trial and error. But I don't know how to go about calculating this value.
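For what it's worth, the cycle arithmetic being asked for here can be sketched in a few lines. This is only a back-of-the-envelope sketch, assuming a 50 MHz clock with no RTCC prescaler (one instruction cycle = 20 ns), and assuming the SX's RETIW adds W to RTCC on exit; all names below are illustrative, not from the attached code.

```python
# Back-of-the-envelope RTCC/RETIW arithmetic (assumptions: 50 MHz clock,
# no RTCC prescaler, so one instruction cycle is 20 ns).
CLOCK_HZ = 50_000_000
BITCELL_S = 2e-6  # one 2 us MFM bitcell

cycles_per_cell = round(CLOCK_HZ * BITCELL_S)  # 100 cycles per bitcell

# RETIW adds W to RTCC on exit, so loading W with the *negative* period
# keeps the interrupts periodic no matter how long the ISR itself runs
# (as long as the ISR is shorter than one period). As an 8-bit value:
retiw_value = (-cycles_per_cell) & 0xFF  # -100 -> 0x9C (156)

print(cycles_per_cell, hex(retiw_value))  # 100 0x9c
```

With this scheme RTCC is never cleared by hand; clearing it inside the ISR discards however many cycles elapsed between the rollover and the write, which may be part of why small changes around 75 behave so unpredictably.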
I'm now using the debugger, which is a step in the right direction. But I still don't know how to leverage it to help debug my code.
Thanks
Keith
I don't understand. You say your code is working, yet only 10% of the data is right??? I'd say your code is not working, and you're getting 10% random samples you are interpreting as correctness. For code to be running correctly, 100% is the only acceptable value.
The response time of my ISR should be PLENTY to deal with 2 USec events, but just in case I screwed up, I'll go measure it now and post the details.
Furthermore, I don't understand the reason for your questions 1 and 2. If you are designing an ISR and you think it is running correctly, then the answers to these questions must have been well known to you at the time you designed the concept; otherwise, how could you expect it to work? I'm confused.
Cheers,
Peter (pjv)
I absolutely agree with Peter - when your code is getting 10% of the data accurate, it is not working at all. To call it "working code", it must get 100% of the data (you also can't say that a woman is just a bit pregnant). The fact that 10% of the data is correct can only tell you that your approach might be the right one, but that it requires a lot of refinement to make it work perfectly.
I don't have one of these vintage floppy drives here, and I assume Peter doesn't have one either. Therefore, his and my code examples are a bit of theoretical nature and we can't test them in reality.
When I prepared my code samples for you, I wanted to share with you some ideas of different approaches to solve your problem, and I assume Peter's intention was the same. Peter's idea to test the contents of the RTCC at the very beginning of the ISR to determine if an edge or an RTCC roll-over has caused the interrupt is a very interesting approach, and I like it.
IMO, what is happening in this thread right now is "brainstorming", i.e. collecting various ideas on how your problem could be solved. In the end, it is up to you to decide which bits and pieces can be combined to get the job done.
OK, I got you - your major question was how to debug the code you had initially posted - to be honest, my answer is: "I have no idea". Using the SX-Key debugger's breakpoint feature can't really help with such a real-time application. As Peter mentioned, the only way might be to simulate the MFM raw data with a manually actuated push button while going through the code step by step each and every instruction to verify if it works as it should.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Greetings from Germany,
G
My SX is hooked up to a floppy drive that is reading a floppy disk with known contents. The floppy disk has structure: a SYNC value (a signature of 8 known MFM bytes), a sector header (which has things like a format identifier (always 0xFF), sector number, track number, sectors remaining in the track, header checksum, and data checksum), and then a data portion. I've written a couple of large files containing the word "AMIGA" in them.
My SX code, bundled with my code on the PC, is capturing the data that comes out of the floppy drive. I then try to decode the raw MFM bytes and see if they produce intelligible data. If they do, the whole process worked; if not, it didn't.
My SX is falling in and out of sync. There will be several hundred bits in a row that are correct --- I'm not guessing they are correct, I'm not "interpreting" them as correct --- they *are* correct, because I'm getting values that match the track numbers I'm reading, the sector numbers, and the actual payload -- the actual data is decoded and I see "AMIGA" on my PC. My longest run of good data is about 800 bits in a row, where the SX captured the data properly. Then the SX falls out of sync, producing mostly garbage, and then falls back into sync, where it reads a sector or two correctly, and then falls out again.
I look for the sync signature, and if that's correct, I check the format ID --- if both of these are correct, then I go ahead and decode the data. You can't properly decode the data without knowing where to start decoding, because the odd bits are stored before the even bits, and they are exactly 1/2 sector apart.
So, when I say that it's working at 10%, I mean that I'm receiving approximately 10% of the total number of sectors correctly. I.e., if I should be getting 100 sectors, I'm only properly receiving 10. And actually, this number is higher, but if the sync value is corrupted OR the format ID is corrupted, I don't decode the data. So the data COULD be correct, but a corrupted header is enough for me to disregard the following data until the next sector.
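As a side note, the odd-bits-before-even-bits layout is exactly what makes the starting position matter, though the recombination itself is cheap once both halves are in hand. Here is a minimal sketch, assuming the AmigaDOS convention of 32-bit longwords with data bits in alternating positions; mfm_split and mfm_join are hypothetical helper names, not anything from the attached code.

```python
# Sketch of the odd/even MFM split (assumption: AmigaDOS-style 32-bit
# longwords, data bits in every other position, odd bits stored first).
MFM_MASK = 0x55555555  # selects every other bit position

def mfm_split(value):
    """Split a 32-bit value into the odd-bit and even-bit halves
    that are written half a sector apart on disk."""
    odd = (value >> 1) & MFM_MASK
    even = value & MFM_MASK
    return odd, even

def mfm_join(odd, even):
    """Recombine the two halves into the original 32-bit value."""
    return ((odd & MFM_MASK) << 1) | (even & MFM_MASK)

odd, even = mfm_split(0x414D4947)  # "AMIG" as ASCII bytes
assert mfm_join(odd, even) == 0x414D4947
```

If the decoder starts one bit or one half-sector off, the halves no longer line up, which matches the described behavior of whole sectors turning to garbage rather than single bits flipping.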
I'm sure it's falling in and out of sync because of timing issues. They weren't well known to me at all. I know that if my bitcells are 2us long, then the ISR has to trigger every 2us. I started with a value of 100, i.e. 2us. But things get a little more complicated, because the SX takes some time to recognize that an edge has occurred, and then you have the length of the ISR, and if you reset the rollover at the beginning of the ISR, then 2us from that point is too late. And I looked at my scope, and my ISR was firing too late in the second cell (the edge was in the 1st cell, of course), so I knew the value had to be smaller. And I experimented with trial and error until I got to 75. I find it interesting, and disappointing, that values of 74 and 76 produce garbage --- and don't work at all.
The SX takes some time, probably due to the shape of the input pulse (it's not perfectly square), to recognize that an edge has occurred. I did use the Schmitt trigger option to help with this, but the times I measured on my scope, from the time the edge really occurred until the time the ISR fired, varied from .1us to .44us. Remember this is SX/B, so there is some extra code that actually executes before my test debug bit was raised. The actual values were .26, .32, .44, and .10us.
And then you have multiple ISR times: edge-triggered ISRs last .64-.70us, and timeout ISRs last .86-.94us.
I've trimmed some of these down a little, but you get the picture.
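Taking those measurements at face value, a quick worst-case budget check suggests the ISR path itself still fits inside one bitcell. This is only a sketch of the arithmetic, assuming the slowest path is the maximum edge-recognition latency followed by the longest (timeout) ISR:

```python
# Worst-case timing budget from the measurements above (all in us).
BITCELL_US = 2.0

edge_latency_us = (0.10, 0.44)   # measured edge-to-ISR latency (min, max)
edge_isr_us = (0.64, 0.70)       # edge-triggered ISR duration (min, max)
timeout_isr_us = (0.86, 0.94)    # timeout ISR duration (min, max)

# Assume the slowest path is max latency plus the longest (timeout) ISR.
worst_case = edge_latency_us[1] + timeout_isr_us[1]
headroom = BITCELL_US - worst_case

print(round(worst_case, 2), round(headroom, 2))  # 1.38 0.62
```

In other words, raw speed alone doesn't obviously explain the sync loss; the 0.34 us of jitter in edge recognition, and where the timeout sample lands within the cell, look like the more likely suspects.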
Is this more clear?
Keith
Oh and when I said it was too slow, I didn't mean that the response time of the ISR was too slow. I meant that given the number of bytes that were flowing TO the SX, the number of bytes LEAVING the SX was much less.
So if 300K bytes are sent to the SX, the PC should receive 300K --- I didn't measure this exactly, but it appears the SX sent, let's say, 120K of the 300K leaving the floppy drive. It just so happens they were all zeros.
I'm not speculating one way or the other why this is, I was just commenting on the observation.
Thanks.
Keith
I'm not quite finished yet... but for your convenience I'm adding debug statements to my code, and now I have to run out for the balance of the day.
I'll finish that, and post tomorrow.
My testing here confirmed proper operation, but some slight (10%) timing improvements can be made.
Why does my code read all zeros??? Perhaps because I stored the results in a memory variable called TRANSFER. Did you modify the code to copy that to the output port for the PC to grab? I don't know what ports/bits you are using for what.
PS I have not yet read your response (no time), but will get to that when I get back.
Cheers,
Peter (pjv)
Thanks.
I'm back, and have read your replies, but will not get back to finishing my "debug sprinkled code" for you until tomorrow.
Actually I had not read ANY of your code as I don't pretend to know SX/B adequately to be able to spot problems, but I DO know my assembler code, and how to squeeze a lot out of an SX.
Also I had expected you to be able to read what I was doing and note that the received byte was not yet outputted, and I neglected to point that out to you. Obviously, not having read your code, I had no idea of port and bit assignments nor polarities.
Regarding your post of 1:30 PM, I take issue with your statement that the ISR should fire every 2 USec.
Instead, I believe (and this is how my code is written) that the ISR should sample 3 USec after detection of a falling edge, to put it in the middle of the next cell, allowing for maximum cell-timing jitter. Then subsequent ISRs must each fire in 2 USec, staying in the middle of each subsequent cell.
Each time a falling edge occurs, the next ISR must again fire in 3 USec. This way each falling edge "resyncs" the whole process.
Also, the maximum number of consecutive zeros read may only be three, so if 4 are detected, an "end of sequence" is assumed and the process is restarted.
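The net effect of that scheme is that each edge-to-edge gap gets quantized to a whole number of 2 USec cells: the edge itself is a "1", and each full empty cell until the next edge (up to three) is a "0". A toy model of just that rule, in Python; decode_edges is a hypothetical name for illustration, not Peter's actual code:

```python
# Toy model of the edge-resync sampling rule (assumption: edges are
# given as clean timestamps in microseconds; no noise or drift).
def decode_edges(edge_times_us, cell_us=2.0, max_zeros=3):
    """Each falling edge is a '1'; the number of whole cells until the
    next edge determines how many '0's follow, capped at max_zeros."""
    bits = []
    for i, edge in enumerate(edge_times_us):
        bits.append(1)
        if i + 1 < len(edge_times_us):
            # Quantize the gap to whole cells, as mid-cell sampling does.
            gap_cells = round((edge_times_us[i + 1] - edge) / cell_us)
            bits.extend([0] * min(gap_cells - 1, max_zeros))
    return bits

# Edges 6 us and 4 us apart: "1" + two zeros, "1" + one zero, "1".
print(decode_edges([0.0, 6.0, 10.0]))  # [1, 0, 0, 1, 0, 1]
```

Sampling mid-cell (3 USec after the edge, then every 2 USec) gives each decision the full +/-1 USec of tolerance, which is what makes the scheme robust against the 10% spindle-speed variation.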
Furthermore, the ISR responds to the falling edge in 60 nSec (3 instructions at 50 MHz), so I don't know where you are getting your numbers from. Is something slow happening in the ISR under SX/B??? Perhaps I'll do an SX/B compile of your code tomorrow and see what assembler code it produces.
If I understand the nature of the floppy's signals, then I'm pretty sure my code is correct.
I hope I don't have to write a real-time simulator for that to prove it out.
We'll see, and I'm positive we'll get to the bottom of this real quick.
Cheers till tomorrow,
Peter (pjv)