My clock rate can be changed at any time and so I use hub location $0014 to store the clock frequency along with other system parameters. Can't Spin2 do this also? Here is my take on getms:
' getms ( --- ms )                      ' Return with millisecond count
_GETms  rdlong  r1,#@_CPUHZ             ' read system clock freq
        qdiv    r1,##1000               ' divide it by 1000
        getqx   r1                      ' r1 = clocks/ms
        getct   y wc                    ' read CT[63:32] (WC selects high word)
        getct   x                       ' read CT[31:0], atomic with the read above
        stalli                          ' no interrupts while the CORDIC is busy
        setq    y                       ' 64-bit dividend = {y,x}
        qdiv    x,r1                    ' ms = count/clocks_per_ms
        getqx   x                       ' fetch the quotient
        allowi                          ' re-enable interrupts
        jmp     #PUSHX                  ' push x as the result
Btw, I don't see the sense in having such a wide spacing for wc/wz/wcz effects, they should really be close to the other operands.
Isn't "postpone" a good alternative, as for describing the attained behaviour, when it comes to the use of REPs, in the context of interrupts operation?
I think that was the first way I coded GETMS ().
I realized, though, that you need to incorporate those lower three decimal places of CLKFREQ to avoid a cumulative error if the clock frequency is not a multiple of 1000. So, I use the entire CLKFREQ value to determine seconds and then I compute milliseconds from that remainder. This way, there is no difference error between GETSEC () and GETMS ().
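To illustrate, here is a minimal PASM2 sketch of that remainder idea (not the actual interpreter code: register names are illustrative, freq is assumed to already hold CLKFREQ, and interrupt protection is omitted for brevity):

        getct   hi wc                   ' CT[63:32] (WC selects the high word)
        getct   lo                      ' CT[31:0], atomic with the read above
        setq    hi                      ' 64-bit dividend = {hi,lo}
        qdiv    lo,freq                 ' ticks / clkfreq
        getqx   secs                    ' quotient  = whole seconds (GETSEC)
        getqy   rem                     ' remainder = leftover ticks, 0..clkfreq-1
        qmul    rem,##1000              ' scale the remainder to milliseconds
        getqx   lo                      ' product low word
        getqy   hi                      ' product high word
        setq    hi
        qdiv    lo,freq                 ' (rem * 1000) / clkfreq
        getqx   frac                    ' millisecond fraction, 0..999
        qmul    secs,##1000             ' seconds to milliseconds
        getqx   ms                      ' low 32 bits of seconds * 1000
        add     ms,frac                 ' final count, no drift against GETSEC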
Wouldn't there be a case for further thought?
What happens if one decides to modify the clock frequency on the fly, say, to service some peripheral device for a while, then return to a previous setting, or even another one, dictated by specific needs?
Couldn't that kind of behaviour be considered "very aggressive" in the sense of keeping a precise time reference, or are there any means available to adjust eventual residues when transitioning from one sysclk to another?
The number of "past CT timer ticks" would always be correct, but their relative weighting needs to be considered for the timeframe in which they were accounted, before the last change of clock frequency occurred.
The ideal solution would be keeping a correction factor, set to one at power-on reset, meant to multiply any CT[63:0] readings in order to adjust their value to the present system clock frequency, but this would cause some trouble, due to the need to resort to the CORDIC mechanism.
Another possibility would be maintaining a 64-bit entity in hub memory, set to zero at power-on reset, whose 63 least significant bits would express a correction value, with a flag at the MSB whose meaning would be "0" for addition and "1" for subtraction.
Every time the system frequency is changed, the correction value and corresponding add/sub flag would be adjusted accordingly, to reflect the decreased or increased weight of each previously accounted CT tick at the new timebase.
The above solution would only add significant delay during the procedure of changing the clock frequency, which needs to account for a delay anyway, so there is not much concern there.
Any other correction would involve only an extra addition/subtraction, apart from a forced access to the hub to grab the correction value, if it can't be kept in cog or LUT memory.
Using the 64-bit count is great for units that do not change the clock rate, but trying to compensate for changing the clock is problematic. Using a timer interrupt instead to maintain ms can also be useful for maintaining timeouts, by using soft count-down-to-zero timers that apps can load and check. Tachyon uses this method in a dedicated cog, and apps can create timers which are automatically added to a linked list of soft timers. If a timer is nonzero then it is decremented every ms. Once it reaches zero, an optional alarm action may be executed, or else the app just checks the value when it's ready. A high-level watchdog timer is maintained the same way. The background timer task also maintains time of day in ms, with a forced resync to a hardware RTC every day.
If apps, and not just drivers, are meant to be coded in Spin2 then surely this level of support should be built in. I have many apps that use this timer cog this way, and it saves having to reinvent the wheel each time.
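As a rough Spin2 sketch of that idea (a fixed-size timer table stands in for Tachyon's linked list, and all names here are illustrative, not Tachyon's actual code):

VAR
  long timers[8]                        ' soft timers, decremented every ms
  long stack[32]                        ' stack for the timer cog

PUB start()
  cogspin(NEWCOG, tick(), @stack)       ' run the ticker in its own cog

PRI tick() | t, i
  t := getct()
  repeat
    waitct(t += clkfreq / 1000)         ' wake once per millisecond
    repeat i from 0 to 7
      if timers[i]                      ' nonzero timers count down to zero
        timers[i]--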
Maybe keeping a 1 kHz timer interrupt going, while updating the interrupt period every time CLKFREQ gets changed, is the best way to track time.
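For instance, a minimal PASM2 sketch of such a tick interrupt, where period holds clkfreq/1000 and would be recomputed on every CLKFREQ change (register names are illustrative):

        getct   t
        addct1  t,period                ' arm the CT1 event one period ahead
        mov     ijmp1,#isr              ' point the INT1 vector at the handler
        setint1 #1                      ' INT1 source = CT1 event
loop    jmp     #loop                   ' main code would run here

isr     addct1  t,period                ' schedule the next tick
        add     ms,#1                   ' count one more millisecond
        reti1                           ' return from INT1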
If you want to track time without interrupts, then on each CLKFREQ change you need to add the time spent at the last frequency into the total prior accumulated time, then record the new rate for use until the next CLKFREQ change.
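In Spin2 that bookkeeping might look something like this sketch, using MULDIV64 for the 64-bit intermediate and ignoring counter rollover; the names are illustrative, and last_ct/last_freq are assumed to be seeded at startup:

VAR
  long total_ms, last_ct, last_freq     ' seed at startup: 0, getct(), clkfreq

PUB set_clock(mode, freq)
  total_ms += muldiv64(getct() - last_ct, 1000, last_freq)  ' close out old rate
  clkset(mode, freq)                                        ' switch clocks
  last_ct := getct()
  last_freq := freq

PUB now_ms() : ms
  ms := total_ms + muldiv64(getct() - last_ct, 1000, last_freq)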
IMHO, any program that changes the clkfreq must take care of the effects on any other programs running. You cannot cater for everything, and this is one of those times when the user just needs to take care of it himself.
One reason for clock speed changes is the same reason phones, tablets, and PCs do it: they want to run cool and efficient, but step up for a burst of high activity that may not be sustainable due to power and heat limitations. In the case of the P2 we can't clock the cogs separately, so it's a system-wide change that timing software needs to take into account.
Just hitching a ride on Peter's comment: if one of the objectives is going towards a self-hosted system, why not enable it to be a little "greener"?
We don't need to keep screen(s), external memory, and many other subsystems forever operating at full throttle, especially when unattended.
So why not also be able to slow down the CPU, if configured to do so?
Well, since we need a real-time clock anyway for setting the time after reboot, we could also read the RTC after switching the clock frequency.
my 2 cents,
Mike
Precisely. I resync from the hardware RTC at reset and every day, but I can see too that if I intend to use the hardware counter then I need to latch the reading as an offset when I resync the RTC. That way I subtract the offset from the 64-bit CNT, knowing that the remaining ticks are based upon the current clock speed. Then I can add that clock-scaled reading to the last hardware RTC reading to come up with the current time or timestamp, without having to access the I2C hardware (slow).
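Something like this Spin2 sketch, where read_rtc_ms() is an assumed helper and a real version would use the full 64-bit counter to survive 32-bit rollover:

VAR
  long rtc_ms, ct_at_sync

PUB resync()
  rtc_ms := read_rtc_ms()               ' assumed helper: slow I2C read, once a day
  ct_at_sync := getct()                 ' latch the counter as the offset

PUB timestamp_ms() : ms
  ms := rtc_ms + muldiv64(getct() - ct_at_sync, 1000, clkfreq)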
Tony,
I'm glad you got me back to the triple-instruction variants. It has rotated the positions in the tables by one place.
PS: I tell you what, I'm a little surprised to see that QMUL then RDLONG has the same optimal hubRAM address as QMUL then WRLONG. It implies the setup for access to a hubRAM slice doesn't change between reads and writes of hubRAM.
The same behaviour showed up in my earlier testing too, but I wasn't examining the results closely, not having completed the development at the time.
PPS: Note that WRFAST as a left hand doesn't give valid times. That's because most of the time WRFAST is naturally a non-blocking instruction. And as a right hand it is always 3 ticks.
Hi guys
I am sorry I have to say this, but would you leave this thread to Chip for his PNut updates?
When I miss the forums for some time and come back, I have to scroll through all these posts to understand if something new is going on with PNut.
Thanks and regards
I discovered a bug in the Spin2 interpreter that was causing Spin2 cog stacks to be indeterminately offset due to an uninitialized variable. It took me two days to figure out what was going on and track down the cause. All fixed in v35d:
https://drive.google.com/file/d/1CqFEQiHXKb3dbuuVH2vke4oIBLyDvcNE/view?usp=sharing
Also, there is room now for 60% more DEBUG statements.
Chip, will this update be available in Propeller Tool as well?
It should be soon.
I'm currently working on integrating Chip's Debug features from PNut v35 into the Propeller Tool, stitching them into the Propeller Tool's existing asynchronous serial port handling techniques; it's proving trickier than I thought, but I'm making progress.
Yes, I'll be able to easily include the v35d bug fix and other updates before I finish this version for release.
Since this is a bug that sounds like a showstopper, please let us know if you've been experiencing this issue in Propeller Tool (or PNut).
Jeff,
There is an old issue where PropTool can't find any Propeller chip when used under Wine. It can be used to compile programs, but another tool must be used to download the binary to the chip.
It's notable these days because PNut works beautifully under Wine. All the new debug features as well.
I know that's not much info to go on, but I've always thought it was likely a small issue with Propeller detection only. If that got sorted then all the rest will likely work perfectly.
I added the symbol DEBUG_BAUD to allow changing from the default baud rate of 2M. Also, the main DEBUG window no longer resizes and relocates each time a DEBUG download occurs.
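If I understand the mechanism correctly, overriding the rate is just a CON declaration, e.g.:

CON
  DEBUG_BAUD = 115_200                  ' override the default 2_000_000 baud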
Could a moderator please move all the posts starting at
http://forums.parallax.com/discussion/comment/1510998/#Comment_1510998
and ending at
http://forums.parallax.com/discussion/comment/1512876/#Comment_1512876
to the following thread?
http://forums.parallax.com/discussion/170955/hub-ram-fifo-read-timing
Comments moved here: http://forums.parallax.com/discussion/170955/hub-ram-fifo-read-timing
PNut_v35e ZIP File:
https://drive.google.com/file/d/1WPHKc0P4qUJv87ZZSE-mf2Q32CYqigBD/view?usp=sharing
Thanks @evanh
I've logged this here: https://github.com/parallaxinc/Propeller-Tool/issues/70
When I can look into that, I may have to quiz you about a few things to get me on the right track.
Mike Green asked about the .bin file format. Here it is.
Thanks Chip. What's the format for PASM? What does Eric use for C and Basic programs. Basically, if a user wants to start a program from a file, what does the LOAD routine need to do?