
PNut/Spin2 Latest Version (v46 - DEBUG gating, clock-setter control, VAR flexibility, C_Z for DEBUG)


Comments

  • evanh Posts: 15,912

    Auto-calibrate, like the ROM loader, is maybe an option. More work for Chip, but it does have the bonus of supporting Debug in RCFAST, which Debug currently doesn't support.

  • Another zany thought... have an override to the debug() command that includes the baud. Aim to make the process user-driven, which removes the need for fancy auto-calibration code or untoward stuff in the PASM download.

    i.e. when the user changes clkfreq in their code, they could issue a debug(mybaud) statement afterwards to update debug with the new (or same) desired debug_baud, which signals to debug that it needs to query clkfreq and re-initialise. Something like that to take away mysterious "behind the scenes" stuff, and keep things simple and under user control.

  • evanh Posts: 15,912
    edited 2022-09-19 11:49

    Ah, it's really the sysclock frequency that's needed. Baud can be preset to, say, 2 Mbit/s, but without an auto-calibrate the sysclock frequency needs to be known to calculate the baud's pacing in sysclock ticks.

    The smartpin can even handle fractional pacing to maintain an accurate baud ... as long as the specified sysclock frequency is relatively close to the actual frequency.
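
    For reference, the usual smartpin async setup already carries a fractional bit period - something like this minimal sketch, where the pin and baud are just picked for illustration:

    CON
      DEBUG_TX   = 62                           ' the usual boot-serial TX pin (assumption for this sketch)
      DEBUG_BAUD = 2_000_000

    PUB start_tx()
      ' clocks-per-bit as 16.16 fixed point, truncated into the smartpin's period field;
      ' the low bits select 8 data bits (8-1 = 7). Note clkfreq is sampled here, which is
      ' why a later sysclock change needs the smartpin re-initialised.
      pinstart(DEBUG_TX, P_ASYNC_TX | P_OE, muldiv64(clkfreq, $1_0000, DEBUG_BAUD) & $FFFF_FC00 | 7, 0)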

  • Indeed, exactly what I was trying to express. The debug routine needs a simple way to know when to "recalibrate" or re-init itself at the current clkfreq. That freq is known at the start when debug first executes so that's fine. After that, clkfreq would only change if the user code deliberately changes it. So that's why I wondered if having the user call debug() or debug(debug_baud) or debug(clkfreq), or even debug_reset().. whatever :) after the user has changed the clkfreq might be a clean way to "re-sync" things. Hopefully a pretty simple way to keep debug ticking through a clk change, without needing anything happening that's hidden behind the scenes, no PASM address shifting, no modification of pin 63, etc..

    Sure, it might be that debug can't be extended that way, but it feels like the code to achieve what I'm wondering would be simpler/smaller than the pin63 trick, and not require any changes on the PC side either. Anyway-- just a thought... best I get back to some things I actually know about :)

  • Couldn't you support changing the baud only if the program somehow declares where it will maintain a current value of the clock frequency?

  • evanh Posts: 15,912
    edited 2022-09-19 14:38

    The discussion has wandered a little. The issue is with pure-PASM builds, where the cold-boot default is RCFAST, and historically it's been up to the developer to explicitly code in any use of a crystal and PLL.

    This was easily done by adding the ASMCLK pseudo instruction in each program where and when desired. Or you could choose to do your own routine. Either way, it's, in effect, a runtime change that something like Debug has no default say over.
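
    For reference, ASMCLK is just shorthand for something along these lines (going from memory of the docs; clkmode_ is the constant the assembler builds from the clock declarations):

        hubset  ##clkmode_ & !%11       ' enable the crystal/PLL but keep running from RCFAST
        waitx   ##20_000_000/100        ' roughly 10 ms at RCFAST's nominal 20+ MHz, letting the oscillator settle
        hubset  ##clkmode_              ' now switch over to the new clock source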

    But, as I said, Debug, as it stands, requires a known sysclock frequency from which to set the baud pacing interval, and RCFAST doesn't provide that. Chip has outlined his solution, but it involves presetting the clock mode for an external fixed-frequency source, like a crystal.

  • Electrodude Posts: 1,657
    edited 2022-09-19 19:03

    Yes, I understand that - I just didn't realize the header isn't there in pure-PASM code. Wouldn't the problem be solved by some sort of directive that's only meaningful in pure-PASM programs that tells the assembler to arrange for the debug code to read CLKFREQ from the address of a particular hubram symbol, instead of from a long-repository smartpin? It would act like long, but the compiler would record its address somewhere in the debug code. It basically lets programs without the Spin header tell the compiler to store CLKFREQ in a custom location.

  • cgracey Posts: 14,151
    edited 2022-09-19 20:13

    The problem with auto-bauding for DEBUG is that it means DEBUG stuff will only work when the host system plays along by first sending out recalibration characters.

    About ASMCLK, it is kind of messy to look at and deal with when you just want to debug a simple PASM program. Also, because we can break on COGINIT, before ASMCLK can even run, the ClkFreq must be established beforehand. The only way ASMCLK can even be compatible with single-step debugging is if I put a REP #n,#1 before it, to protect it from debug interrupts. It's just messy. Much better to deal with before the program runs, in order to be consistent between code you will debug and later release.

    I understand that adding anything is kind of weird, though. I could make a symbol that, if defined, would suppress the clock_setter program from getting in front of your code.

  • @cgracey said:
    About ASMCLK, it is kind of messy to look at and deal with when you just want to debug a simple PASM program. Also, because we can break on COGINIT, before ASMCLK can even run, the ClkFreq must be established beforehand. The only way ASMCLK can even be compatible with single-step debugging is if I put a REP #n,#1 before it, to protect it from debug interrupts. It's just messy. Much better to deal with before the program runs, in order to be consistent between code you will debug and later release.

    I understand that adding anything is kind of weird, though. I could make a symbol that, if defined, would suppress the clock_setter program from getting in front of your code.

    But why does it need to be in separate, hidden code? The debugger is already prepended to the main program, can't it just set the clock on its own at startup? And since asmclk is a compiler thing, it can just replace it with the equivalent number of NOPs when in debug mode to keep the code aligned (or change that to the debug instruction mentioned below).

    Also, why does the debugger need to reconfigure the rx/tx pins each time it is invoked?

    IMHO, the debugger should set up its things at startup only; then, if the code messes with the system clock and/or the rx/tx pins, it is the programmer's responsibility to be aware that this breaks the debugger and to do the appropriate things. A new debug instruction could help here.

  • WARNING: P2 noob chiming in without being invited :-|

    Anyway, my fairy-tale vision for future P2 development is getting my dev boards onto my LAN. So, I'm imagin-eering a future version of the "PropPlug" using e.g. https://www.wiznet.io/product-item/w5100/.

    A possible side benefit is replacing the debug async link with an SPI-to-LAN bridge, so that the P2 can "be the boss" of clocking data in and out for debug purposes. Likewise, the "debug protocol" becomes just another TCP/IP stream to be digested by current and future debugging client apps.

    I'd gladly sacrifice another pin (for SPI) and one of my cogs to have easy debugging between a window on my Dev PC and a P2 on my LAN, along with the reliable comms of TCP/IP and the end of tethering devices to USB ports.

    Best.

  • evanh Posts: 15,912

    It'd still just be a "Virtual Comm Port" over TCP.

  • brianh Posts: 22
    edited 2022-09-20 12:13

    @evanh said:
    It'd still just be a "Virtual Comm Port" over TCP.

    In my view, the P2 would be a network server and the debugger a network client. Inserting any form of "Comm Port", virtual or real, in the path, would be totally optional. So, the simplest "debugger" might be just a netcat command that spews debug text back to my console.

    I understand that this means P2 firmware would need to speak SPI to the Wiznet in some similar manner that it currently speaks to a Flash chip. Remote Reset signal is probably another "complexifier". As I said, just "imagin-eering" here.

  • evanh Posts: 15,912

    That's an extra chip needed too then. It'd have to be a very optional debug feature that has no negative impact on usability of the comport method. So basically, it may as well be an independent solution.

  • @evanh said:
    That's an extra chip needed too then. It'd have to be a very optional debug feature that has no negative impact on usability of the comport method. So basically, it may as well be an independent solution.

    Sure. The Wiznet LAN-bridge chip replaces the FTDI USB-bridge chip in some optional/accessory device, à la the "PropPlug".

    I'm not sure it can be an independent solution because the P2 debug functionality is in the P2 firmware, right? Maybe the source code of the debug functionality could be re-rolled as a Spin2 debug module that is conditionally compiled and speaks SPI to the Wiznet.

  • evanh Posts: 15,912
    edited 2022-09-20 13:50

    No, there is no FTDI chip in many designs. It's just the Prop2 smartpins. Same for the Prop1 but it's even bit-bashed all the way.

    And, no, this Debug is not built into the Prop2. It is compiled into each application. Chip has only just finished it.

  • And, no, this Debug is not built into the Prop2. It is compiled into each application. Chip has only just finished it.

    I see. Seems to me like the Spin2/PASM2 "debug()" statement just works without including any application code.

    Anyway, this appears to be off-topic to this thread.

    I've really enjoyed my remote/network debugging workflow with the P1. Pity to regress.

  • cgracey Posts: 14,151
    edited 2022-09-20 21:50

    @macca said:

    @cgracey said:
    About ASMCLK, it is kind of messy to look at and deal with when you just want to debug a simple PASM program. Also, because we can break on COGINIT, before ASMCLK can even run, the ClkFreq must be established beforehand. The only way ASMCLK can even be compatible with single-step debugging is if I put a REP #n,#1 before it, to protect it from debug interrupts. It's just messy. Much better to deal with before the program runs, in order to be consistent between code you will debug and later release.

    I understand that adding anything is kind of weird, though. I could make a symbol that, if defined, would suppress the clock_setter program from getting in front of your code.

    But why does it need to be in separate, hidden code? The debugger is already prepended to the main program, can't it just set the clock on its own at startup? And since asmclk is a compiler thing, it can just replace it with the equivalent number of NOPs when in debug mode to keep the code aligned (or change that to the debug instruction mentioned below).

    Also, why does the debugger need to reconfigure the rx/tx pins each time it is invoked?

    IMHO, the debugger should set up its things at startup only; then, if the code messes with the system clock and/or the rx/tx pins, it is the programmer's responsibility to be aware that this breaks the debugger and to do the appropriate things. A new debug instruction could help here.

    Yes, the debugger which gets prepended to your code DOES set up the clock. It's in the case of running your PASM-only program WITHOUT the debugger where the clock_setter gets prepended, instead, so that clocking will work the same, with or without the debugger. In the end, your pure-PASM app runs either way, but with the clock mode established. Again, this is only for PASM-only apps. Spin2 apps set up the initial clock and even report subsequent clock-frequency changes to P63's long repository whenever CLKSET(ClkMode, ClkFreq) executes.
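
    Something along these lines, for example (a sketch that assumes the usual 20 MHz crystal; the switch to the raw crystal clock is just for illustration):

    CON
      _clkfreq = 200_000_000                        ' compiled-in clock mode and frequency

    PUB go()
      repeat
        debug(udec(clkfreq))                        ' 200_000_000
        waitms(500)
        clkset((clkmode_ & !%11) | %10, 20_000_000) ' hop to the raw 20 MHz crystal (source bits = %10, no PLL)
        debug(udec(clkfreq))                        ' 20_000_000 - CLKSET has updated ClkFreq and the P63 repository
        waitms(500)
        clkset(clkmode_, clkfreq_)                  ' and back to the compiled PLL mode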

    I could add a symbol sensitivity, where if DEBUG_IGNORE_ASMCLK is defined, the ASMCLK instruction would not emit any code when the debugger is included, since the debugger would establish the clock mode, itself. This way, if the debugger is not included, ASMCLK will generate the expected instructions to establish the clock, and no clock_setter will be prepended. This would add six longs to your PASM code, though, for ASMCLK (instead of the 16-long prepended clock_setter code).

    Or... I could let ASMCLK always be present, but precede it with a REP #6,#1 instruction to protect it from interruption by the debugger. My thinking was, though, that ASMCLK is a mess to look at, right at the start of your PASM-only program that you want to debug. I wanted to just get it out of the debugger picture, or wish it into the corn field.
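
    For illustration, that REP form would be something like:

                rep     #6,#1           ' repeat the next 6 instructions once - the debugger can't interrupt a REP block
                asmclk                  ' expands to hubset/waitx/hubset with their ## prefixes, 6 instructions in all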

  • cgracey Posts: 14,151

    I made a one-minute video of the debugger operation.

  • brianh Posts: 22
    edited 2022-09-21 14:41

    @cgracey You inspired me to try out PNut v36, which I'm running in Arch Linux via Wine. Works well, and I even tested it with "debugger_clock_frequency_change.spin2", which also works.

    Now, if I can just figure out why the Propeller Tool refuses to recognize COM ports in Wine... (I know this is already a documented Issue with PropTool).

  • cgracey Posts: 14,151

    @brianh said:
    @cgracey You inspired me to try out PNut v36, which I'm running in Arch Linux via Wine. Works well, and I even tested it with "debugger_clock_frequency_change.spin2", which also works.

    Now, if I can just figure out why the Propeller Tool refuses to recognize COM ports in Wine... (I know this is already a documented Issue with PropTool).

    We will get PropTool to recognize ports better. Glad you tried it.

    You can place plain DEBUG commands (no parentheses) anywhere in your code and they will drop you into the debugger when executed.
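
    For example, in a PASM-only program (a quick sketch; the pin choice is arbitrary):

    CON
      _clkfreq = 200_000_000            ' declare a clock so the debugger knows the sysclock

    DAT
                org
    start       drvnot  #56             ' toggle P56 (an Eval board LED)
                waitx   ##10_000_000    ' brief delay
                debug                   ' plain DEBUG - drops into the single-step debugger right here
                jmp     #start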

  • brianh Posts: 22
    edited 2022-09-21 15:16

    @cgracey Thank you for that. I tried troubleshooting the COM port issue from the Wine configuration side but haven't found a solution. So maybe Wine is just limited in what types of Win32 port-enumeration APIs it supports. I'd look at the PropTool code, but it doesn't seem the source code is available.

    Speaking of "source code available", I'm very happy to see that the Pnut code drop includes "Spin2_debugger.spin2" and "flash_loader.spin2". As you can probably tell, I'm early in understanding how P2 compilers/toolchains actually work, but it's good to see that toolchains can easily inject their own boot loader and/or debugger support code. So, my idea of using an SPI accessory for the debugger, instead of my current async-to-USB "PropPlug", seems a plausible option for a customized toolchain.

  • evanh Posts: 15,912

    I tested the dynamic sysclock changing with one of my older testers - hacked the send() statements into debug() statements.
    Works nicely, thanks Chip.

  • cgracey Posts: 14,151

    @brianh said:
    @cgracey You inspired me to try out PNut v36, which I'm running in Arch Linux via Wine. Works well, and I even tested it with "debugger_clock_frequency_change.spin2", which also works.

    Now, if I can just figure out why the Propeller Tool refuses to recognize COM ports in Wine... (I know this is already a documented Issue with PropTool).

    Jeff is working on getting the debugger integrated into PropellerTool. I will bring this up to him.

  • cgracey Posts: 14,151
    edited 2022-09-25 02:32

    What would you all think about Spin2 starting to clear local variables upon method entry? It would only take a few extra clocks, but would save the code frequently needed within methods to clear local variables before use. All local variables (except parameters) would start out at zero. Currently, return values start out at zero, but local variables start out undefined. I know this would simplify a lot of my own Spin2 code. I wish I had implemented this clearing in the first place.
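
    For example (just an illustration):

    PUB tally(n) : total | i, sum
      sum := 0                  ' needed today, because locals (unlike return values) start out undefined;
                                ' with the change, sum would already be zero here and this line could go
      repeat i from 1 to n
        sum += i
      total := sum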

  • evanh Posts: 15,912

    Seems reasonable.

  • cgracey Posts: 14,151
    edited 2022-09-25 05:56

    @evanh said:
    Seems reasonable.

    Okay. Cool. It's done. It added 2 longs to the Spin2 interpreter, which brought the usable-register limit down from $124 to $122.

    This...

    callgo          rdfast  #0,x            'a b c d e f g h        return from hub, start new bytecode read
                    rfvar   x               'a b c d e f g h        get locals
            _ret_   add     ptra,x          'a b c d e f g h        point stack past locals
    

    ...became this...

    callgo          rdfast  #0,x            'a b c d e f g h        return from hub, start new bytecode read
                    rfvar   x               'a b c d e f g h        get number of local longs
            _ret_   djnf    x,#.clear       'a b c d e f g h        if zero, continue
    .clear          setq    x               'a b c d e f g h        else, clear locals and point stack past them
            _ret_   wrlong  #0,ptra++       'a b c d e f g h
    

    ...and the compiler now makes the RFVAR value a long count, instead of a byte count, to support the SETQ+WRLONG sequence.

    This will be in v37.

  • Please. stop. changing. the. language. semantics.

    No seriously, STOP. You're messing up everyone else for questionable gain.

  • evanh Posts: 15,912

    An advantage of not having the docs done. :D

  • cgracey Posts: 14,151

    @Wuerfel_21 said:
    Please. stop. changing. the. language. semantics.

    No seriously, STOP. You're messing up everyone else for questionable gain.

    Are you thinking of FlexSpin?

  • Among other things. In this particular case that's the main one*. But I complain in general. Even if the change is kinda good, you shouldn't change the language at this point. It causes everyone a lot of confusion and work. It is done; you are free to do literally anything else. Or do things that people have asked for for years, like adding a preprocessor.

    Every time you change some minor thing...

    • there's the (in this case admittedly small) potential to break someone's extant code. PropTool has no way to select a different compiler version, so one's just screwed.
    • You create extra work for Eric/me/Stephen/whoever else provides language tooling.
    • You mess with the whole documentation effort. Code is documentation, too. People will see explicit variable clears in old code and copy that habit. And now it's getting double-cleared. Poor example, I know.

    * Getting the ASM backend to handle variable clear is probably not too bad and wouldn't have much impact due to the sophistication of the dead code elimination. But there are two other backends (plus C) that don't have that and would just suffer (nu in particular would either have to waste time clearing local space regardless of language (reducing C performance) or add an explicit fill instruction to every Spin2-mode function. P1 bytecode only has the latter option (and as said, neither has enough IR introspection to elide it when unnecessary)).
