ALL interrupt vectors are stored in cog registers.
The debug interrupt vector in INA's shadow register is initialized to point to 1 of 16 longs at the end of hub memory, based on its cog id.
That interrupt fires before the first instruction executes, so a debug interrupt occurs with the return address pointing to the first instruction of the user program.
If the default RETI0 executes from 1 of the 16 longs at the end of hub, the cog returns to the user program and there are no more debug interrupts.
If a JMP was placed in that long at the end of hub, the ensuing debug program may set the next debug interrupt condition (if any), may repoint the debug interrupt vector in INA's shadow register somewhere else, and may terminate with a RETI0 so that the user program resumes until the next debug interrupt. If no new debug interrupt condition was set, no more debug interrupts will occur.
Of course, anytime you are in your debug interrupt routine, you may dump the contents of the COG state and wait for some user input. This is what a debugger would do.
You can always tell if a new cog program has started, because it will use one of the last longs in hub, again, instead of your revectored debug program.
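In other words, each cog's long at the end of hub holds either the default RETI0 or a JMP into your own debug code. A minimal sketch of the two cases (my illustration, not from the docs; the debug_isr label is made up and the SETBRK operand is just a placeholder, not a real break-condition encoding):

' default stub: one long per cog at the end of hub
        reti0                   'return to the user program; no more debug interrupts

' revectored stub: same long, now jumping into your debug code (hub exec)
        jmp     @debug_isr

debug_isr
        '...dump cog state, talk to the host, wait for input...
        setbrk  #1              'placeholder: arm the next debug interrupt condition
        reti0                   'resume the user program until that next break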
Thanks for the explanation. I didn't realize these were debug registers. In that case, latency isn't really an issue.
Right. A debug interrupt will take hundreds of millions of cycles, when you are waiting for user input to proceed.
A single RETI0 at the end of hub will only take umpteen cycles.
When that debug interrupt fires and the vector in the INA shadow RAM tells it to go to the appropriate hub memory location, it's doing so as a hubexec thing, and it'll use the streamer and have the associated delays for that at startup of the cog. Right?
I still think it would be nice if we could start a cog without doing that, so we can get more precise, deterministic timing on startup of a cog or cogs. Something like a global state setting that you can choose to turn on to enable debugging: when the debugger is running it can set that state, and when it's not, the state can be cleared.
Remember there is some cog ROM code which executes, too, so it's not like it's zero-overhead, to begin with. And, at that time, the user program has not initialized the streamer for any other purpose. So, it is quite innocuous.
I think it will be deterministic from start, anyway, because there are a few hub ops in the tiny cog ROM code which set things up and establish the hub relationship before the debug interrupt and hub exec FIFO begin. The relatively indeterminate delay comes from cog A doing COGINIT on cog B.
I think having a global disable bit wouldn't do anything but make people suppose there's going to be some difference.
It can't possibly be deterministic from start if it's using hubexec, because you don't know where the hub access is in its cycle. You will have to wait for it to get around to that one long at the end of hub memory, which is variable depending on which cog it is and where the egg beater is in its cycle at the time you start the cog.
However, it sounds like starting a cog is not deterministic at all anyway, because it runs code that does the hub copy to cog (if needed) before executing. I guess it's similar on the P1.
So yeah, the global setting thing would be a waste, so forget that.
Sorry. I was thinking about the cog-load case, only. I forgot about hub exec.
But isn't it ALWAYS doing hubexec now? Even with the cog-load variant? Because it's going to that instruction in hub memory before starting the code it loaded, and it's also doing the copy from hub to cog using the same pathway, which is non-deterministic, except when the chip first powers on, I guess, but even then it's different per cog started.
I guess it just doesn't matter; the time between issuing a COGINIT and your first instruction executing is long and variable.
In allcogsblink.spin, if I substitute a constant in the waitx statement, cog8 doesn't blink.
con
        myx = 25_000_000
dat
        orgh    0
'
' launch cogs 15..0 with blink program
' cogs that don't exist won't blink
'
        org
:loop   coginit cognum,#@blink
        djns    cognum,@:loop
cognum  long    15
'
' blink
'
        org
blink   cogid   x               'which cog am I?
        setb    dirb,x          'make that pin an output
        notb    outb,x          'flip its output state
        add     x,#16           'add to my id
        shl     x,#18           'shift up to make it big
        waitx   ##myx           'wait that many clocks
        jmp     @blink          'do it again
x       res     1               'variable at cog register 8
Although these files compile properly now, I'm still having trouble getting Tachyon to work. I even cut out everything except the part that inits the transmit line and waits for a receive signal on rxd to sync the boot messages until I'm ready. For some reason, I've found that the code is falling through this simple wait-for-start-bit test:
wfs testb inb,#rx_pin wz
if_nz jmp #wfs
I have code that sends a "." character, and once it falls through the wait-for-start it transmits a "#" character before looping back to this startup. However, it always transmits the "#" character.
My coginit is simply: coginit #0,#@boot where boot is the code immediately after the org without any of the old registers etc.
Could the debug or event triggers be doing something?
Rjo, not sure here. I will test it when I get home.
When you call setbrk in the debug ISR, do the breakpoint settings persist through subsequent breaks, or do you have to set it every time before you leave the debug ISR? In other words, can you enable single-step mode just once, or do you have to re-enable it every time the debug ISR is called?
Also, how do you detect when you have exited the debug ISR? Must you use RETI0 (or CALLD INB, INB WC, WZ)? I ask because it would be nice to return with CALLD INA, INB WC, WZ to set up a continuation in the debugger.
Be sure to read the updated docs at the top of the thread, concerning the debug interrupt.
You do need to do a new SETBRK every time you exit the debug ISR, if you want another debug interrupt to occur. So, the SETBRK condition is cleared every time a debug interrupt occurs. This is why a single RETI0 is able to end debug interrupts.
What constitutes the end of a debug ISR is 'CALLD anyreg,INB WC,WZ'. So, you could definitely exit via 'CALLD INA,INB WC,WZ' and resume your ISR on the next debug interrupt.
You can always detect if a cog was restarted because it will jump to $FFFC0 + cogid*4, instead of to wherever you had repointed INA.
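A rough sketch of that continuation-style exit (again just an illustration; the SETBRK operand is a placeholder value, not a documented encoding):

debug_isr
        '...examine cog state, talk to the debugger host...
        setbrk  #1                      're-arm a break condition (it is cleared on every debug interrupt)
        calld   ina,inb wc,wz           'end the ISR; the address of the next instruction lands in INA
        '...so the NEXT debug interrupt enters here, continuing where the ISR left off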
Does this Debug action consume a stack level?
i.e. Can you debug a COG that needs 8 levels of stack?
When that debug interrupt fires and the vector in the INA shadow RAM tells it to go to the appropriate hub memory location, it's doing so as a hubexec thing, and it'll use the streamer and have the associated delays for that at startup of the cog. Right?
I still think it would be nice if we could start a cog without doing that, so we can get more precise, deterministic timing on startup of a cog or cogs. Something like a global state setting that you can choose to turn on to enable debugging: when the debugger is running it can set that state, and when it's not, the state can be cleared.
They are just stubs that can contain RETI0's or jumps to your debug code.
I know you're in the middle of getting another image out, but can you provide a quick description of this?
Absolutely. I will get it done tonight.
In allcogsblink.spin, if I substitute a constant in the waitx statement, cog8 doesn't blink.
It works fine for me on the 1-2-3 board (12 LEDs are blinking in unison).
That's all correct, except that ALLOWI/STALLI are under software control, only.
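(Just to illustrate what "software control" means here, assuming the usual no-operand forms of those instructions:)

        stalli                  'software holds off interrupts around a critical section
        '...timing-critical code...
        allowi                  'software allows interrupts again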
Doesn't that put the debug ISR code in HUB only? Doesn't seem to jibe with "cog exec area"...
Maybe I don't understand how org and orgh interact...
Yowza! I meant 'hub exec area'.
I've only been getting four hours of sleep per night all week, and I can feel my brain waning.