
PNut/Spin2 Latest Version (v46 - DEBUG gating, clock-setter control, VAR flexibility, C_Z for DEBUG)


Comments

  • Cluso99 wrote: »
    I’m sure some of that can be useful for SD. But most of the time wasted with SD is in waiting for the SD Card to acknowledge the command (mostly 2.7ms on my SD, but as bad as 4ms and at best 1.6ms).
    [...]
    BTW IIRC there’s no A class rating on my card.
    Oof, 1.6ms best case is _really_ slow. To qualify for A1 class, a card has to perform two and a half 4KB reads in the time it takes yours to acknowledge one command on a good day.
  • evanh Posts: 15,915
    whicker wrote: »
    So there's digital clock delays (measured in clock ticks) plus the time it takes for the pin to rise and fall.

    160 MHz is the first critical spot where this rise and fall time reaches the period of the clock frequency.

    160 MHz is 6.25 ns period.

    Are we thinking that the pin rise time from 0 to 1 logic threshold is about 6.25 ns?
    No, it can definitely go faster than that. Also, the first frequency band threshold is around 80 MHz. The second one is at 160 MHz. And that's the thing that had me going around in circles early on with the FPGA. The FPGA's slew rate was very fast too, yet it was struggling at even lower clock rates. The actual problem is some sort of latency (lag) internal to the prop2 design. I couldn't quite put my finger on it though.

    On that note, Chip can likely narrow the cause down with the very first silicon he got made - the one with just the custom I/O pad-ring only. Getting an accurate measure of the latencies between the M[12:0] interface and the physical pins would help identify if that is the problem area or if it's all Verilog issues.

  • evanh Posts: 15,915
    Cluso99 wrote: »
    Thanks Evan.
    Yes I’m fairly certain I’m using CPOL=CPHA=0 which according to documentation is the preferred SPI mode for SD cards - data out with clk going low, sampling on clock going high.
    0/0 and 1/1 both have the same effective clock edge. The main difference is when data is placed on the data pins:

    - With 0/0, the first data bit is present before any clocking, and the second data bit is presented after the first low-going clock edge.

    - With 1/1, the first data bit is presented only after the first low-going clock edge.
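
    A minimal Spin2 sketch of the 0/0 behaviour above, for illustration only (the pin numbers are hypothetical, not from any code in this thread):

    CON
      SCK  = 0                                    ' hypothetical pin assignments
      MOSI = 1
      MISO = 2

    PUB shift_io(txbyte) : rxbyte | i
    ' SPI mode 0 (CPOL=0, CPHA=0): data is presented while SCK is low,
    ' and both ends sample on the rising edge of SCK.
      rxbyte := 0
      pinl(SCK)                                   ' idle the clock low
      repeat i from 7 to 0
        pinw(MOSI, (txbyte >> i) & 1)             ' present the next bit while the clock is low
        pinh(SCK)                                 ' rising edge - both ends sample
        rxbyte := (rxbyte << 1) | pinr(MISO)      ' read the bit the other end presented
        pinl(SCK)                                 ' falling edge - the next bit goes out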

  • Well, what if the final digital output transistor pair has a quick snap action, but the earlier output stage transistors have a long rise and fall time?
  • evanh Posts: 15,915
    edited 2020-05-26 04:51
    They each must have enough speed to switch at 400 M transitions per second. That's roughly how fast the prop2 can toggle, give or take a little.

    Stick a hundred of them in series, though, and you'll have a long propagation latency between the input and output without slowing any one of them down.

    EDIT: Propagation is the better term to use.

  • jmg Posts: 15,173
    Cluso99 wrote: »
    I thought, maybe wrongly, that clock-gating was the solution to ensuring that the time delay from setting an output instruction to it appearing at the output was a fixed number of clocks after the instruction, and that the time delay from a pin being latched to being received by the instruction was also a fixed number of clocks. So I thought that these could be (and were) specified. I thought that conditions external to the chip such as loading would not affect these internal fixed delays, so it would only be these external delays which could affect the rise and fall times of the signal at the pin. And these fixed delays would be constant over a wide clock range, whatever that range is.
    Not quite - Clock gating allows for power saving.
    Registers at the pins do help to make things a bit more consistent, but as your tests show, the overall delays are many SysCLKs and so that will affect maximum speeds.
    Cluso99 wrote: »
    What I am finding is that the current document is wrong. It needs to be corrected, and notes added giving details of what to expect. I realise we are in the early days but this will need to be precisely spelt out in the documents.
    To put this bluntly, the P2 cannot be taken seriously without this basic information for designers. To tell an engineer (potential source of volume sales) that they will have to work it out for themselves will immediately lose any credibility that they may have to use the P2.
    You would be surprised at the reasons chips get "dumped" by engineers. It's hard enough to get engineers to consider the P1 or P2 for a design in the first place, let alone give them a simple reason to give it a miss.
    The I/O pins are a fundamental part of the P2 design, particularly in light of the fact there are no peripheral blocks in silicon.
    That's a valid point, and P2 is going to need some minimum specs.
    We've not yet seen how much variation there is between batches of P2, but temperature certainly does affect P2 designs.

    Cluso99 wrote: »
    Here are some test results on my RevB chip on the P2EVAL pcb.
    There is nothing attached to test pins 0-53; pins 54 & 55 have the buffer/LEDs IIRC.
    For testb I only tested pins 0-31.
    SysCLK                          TESTP   TestB   (clocks)
    40-140MHz  (20MHz steps)        6       7
    160MHz                          6-7     7-8
    180-300MHz (20MHz steps)        7       8
    320-350MHz (10MHz steps)        7-8     8-9
    360-390MHz (10MHz steps)        8       9
    

    IMHO this does not sit well for bit-bashing as code may have to be tailored for the specific clock used :(

    For bit-bashing, P2 may need a minimum clock count for a given SysCLK and temperature.
    E.g. from your tests above, a define of 8 clocks may be needed for up to 300MHz, but 7 may be OK for up to 140MHz.

    Other MCUs are already a bit like this; some specify wait-states for flash, which means higher SysCLKs get you more clks/opcode.
    The difference there is that you define a count, and you know that is what you get.

    Where this gets really tricky, is when someone has a combination of critical peripheral and clocks that crosses one of the boundaries, in 'normal operation'.
    Their SysCLK may be dictated by the application; they may not have the luxury of picking a mid-point sweet spot for best temperature tolerance.
    The most common symptom would be a failure as something warmed, but there may be aging effects as well.
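
    A hedged Spin2 sketch of that "minimum clock count" idea, turning the TESTP column of Cluso99's table above into a per-frequency value (the band edges are read off those measurements, not a Parallax spec):

    PUB rx_clock_allowance() : n
    ' How many sysclocks to allow for input registration at the current clkfreq.
      if clkfreq <= 140_000_000
        n := 6
      elseif clkfreq <= 300_000_000
        n := 7
      else
        n := 8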
  • evanh Posts: 15,915
    To be clear, unreliable read data is only a risk when the data rate dictates a short valid window.

    If the read data rate is something lower like sysclock/4 then you've got a window of four lag compensations that'll all work below 80 MHz. If the largest lag compensation is used as a hardcoded compensation then, as the sysclock frequency is raised, the window moves closer to the centre then to the other side, but the dreaded frequency bands don't intrude on reliability because the data only transitions every four sysclocks.

    Data rates above sysclock/4 are where calculating the right compensation becomes frequency dependent. And at sysclock/1 it's downright tricky.
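
    A worked reading of that, assuming the true input lag sits between 6 and 8 sysclocks as the table above suggests: at sysclock/4 each bit is stable for four sysclocks, so any compensation from lag to lag+3 samples it correctly, and a fixed value of 8 lands inside every band's window (6..9, 7..10, 8..11). At sysclock/1 the window is a single sysclock, so the compensation must match the lag exactly, and that value changes with the frequency band.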

  • Cluso99 Posts: 18,069
    evanh wrote: »
    Cluso99 wrote: »
    Thanks Evan.
    Yes I’m fairly certain I’m using CPOL=CPHA=0 which according to documentation is the preferred SPI mode for SD cards - data out with clk going low, sampling on clock going high.
    0/0 and 1/1 both have the same effective clock edge. The main difference is when data is placed on the data pins:

    - With 0/0, the first data bit is present before any clocking, and the second data bit is presented after the first low-going clock edge.

    - With 1/1, the first data bit is presented only after the first low-going clock edge.
    Yes, it's 0/0 and yes the first data bit (msb) is output while the clock is 0. And the data is read on the high going edge.

    I was answering on my ipad and half asleep around 5am ;)

    I'm just going to tidy my code up and release it. It's doing 8 clocks per bit and an average of 9 bits over the whole 512+2 bytes.
    I cheat and send the CRC as all $FF so I clock at 4 clocks per bit (OUTL+OUTH) ;)

    Later I'd like to revisit the code and try using the smartpins and SPI. When I get there I might enlist your and Roger's help, as you've been doing marvelous things with the hyperram. But I have other things to do first.
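
    A hedged sketch of that OUTL+OUTH "CRC as all $FF" trick, written as inline PASM2 in a Spin2 DAT block (the pin constants are hypothetical and the cog setup around it is omitted):

    CON
      SCK_PIN  = 0                                ' hypothetical pin numbers
      MOSI_PIN = 1

    DAT
                  org
                  drvl    #SCK_PIN                ' SCK is an output, idling low
                  drvh    #MOSI_PIN               ' hold data high for the dummy $FF,$FF CRC
                  rep     #2, #16                 ' 16 clock pulses, no loop overhead
                  outh    #SCK_PIN                ' rising edge  (2 sysclocks)
                  outl    #SCK_PIN                ' falling edge (2 sysclocks) -> 4 sysclocks per bit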
  • evanh Posts: 15,915
    Cluso99 wrote: »
    Yes, it's 0/0 and yes the first data bit (msb) is output while the clock is 0. And the data is read on the high going edge.
    Those points are the same for both. The difference between 0/0 and 1/1 is small.

  • Admin: Making this thread sticky for now, at the top of Propeller 2 section.
  • Hooray
  • I don't know if this is the right place. I can't find the original thread about the Propeller Tool V2.0 alpha.

    I get a strange error message when I try to compile a spin2 file. It says "Cannot find object "FullDuplexSerial2.spin2" in editor tabs, work folder or library", although it's clearly there in both an editor tab and the working directory (see left window in the picture).

    BTW, how do I set the library path? I can't find anything in the preferences dialog (F5).
  • VonSzarvas Posts: 3,450
    edited 2020-07-09 07:20
    ManAtWork wrote: »
    I don't know if this is the right place. I can't find the original thread about the Propeller Tool V2.0 alpha.

    I get a strange error message when I try to compile a spin2 file. It says "Cannot find object "FullDuplexSerial2.spin2" in editor tabs, work folder or library", although it's clearly there in both an editor tab and the working directory (see left window in the picture).

    BTW, how do I set the library path? I can't find anything in the preferences dialog (F5).

    One situation that happens is when the code file you are working on has not been saved yet, or was accidentally saved with a .spin suffix (as is the Propeller Tool default - it doesn't yet check the code you've written to figure out the correct file suffix).

    Try doing File / Save As, and select spin2 from the file type dropdown.

  • VonSzarvas wrote: »
    One situation that happens is when the code file you are working on has not been saved yet, or was accidentally saved with a .spin suffix (as is the Propeller Tool default - it doesn't yet check the code you've written to figure out the correct file suffix).

    Ah, yes, that was the problem. Fastspin is a bit more tolerant, in this case. Thanks.

    But I still haven't found out how to set the library path. I can't find anything in the preferences dialog (F5). I think the old propeller tool had a fixed subfolder "Library" in the program folder.
  • cgracey Posts: 14,151
    edited 2020-07-15 19:18
    I've got the debugger working! It enables you to watch variables and expressions while your program runs. It's very simple to use. Just Ctrl-F10 to run any Spin2/PASM program with DEBUG built in. It's very low-impact, taking the last 16KB of RAM and operating unseen by your application. It shows you what happens from when your program originally starts, with cog 0 loading from $00000, as if the debugger is not even there.

    Here is a picture that is self-explanatory:

    DEBUG_demo.png

    Here is the latest .zip and Google Doc:

    ZIP File:
    https://drive.google.com/file/d/10QwmwlZQOTLFy0MVyNNgzc71d2-ej8xr/view?usp=sharing

    Documentation:
    https://docs.google.com/document/d/16qVkmA6Co5fUNKJHF6pBfGfDupuRwDtf-wyieh_fbqw/edit?usp=sharing

    I will demonstrate this at 9pm GMT today, in less than two hours from now.
  • Publison Posts: 12,366
    edited 2020-07-15 19:54
    PNUT 34t produces an error on spin2_debugger.spin2 "DEBUG requires at least 10 MHz of clocking". I'm using P2 EVAL board with 20 MHz.

    EDIT: Just found this in the docs:
    To use the debugger, you must configure at least a 10 MHz clock derived from a crystal or external input. You cannot use RCFAST or RCSLOW.
  • cgracey Posts: 14,151
    Publison wrote: »
    PNUT 34t produces an error on spin2_debugger.spin2 "DEBUG requires at least 10 MHz of clocking". I'm using P2 EVAL board with 20 MHz.

    EDIT: Just found this in the docs:
    To use the debugger, you must configure at least a 10 MHz clock derived from a crystal or external input. You cannot use RCFAST or RCSLOW.

    Just put this line at the top of your program:

    CON _clkfreq = 10_000_000
  • cgracey Posts: 14,151
    Oh, sorry, that file spin2_debugger.spin2 is not meant to be run - it's the actual debugger, in case anyone wants to see it.

    Run the debugger_demo.spin2 program, instead. And then add DEBUG statements to the program(s) you are working on.
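
    For anyone trying it, a minimal hypothetical example of adding a DEBUG statement to your own program (not taken from the demo):

    CON
      _clkfreq = 20_000_000                       ' DEBUG needs at least 10 MHz from a crystal or external input

    PUB main() | i
      repeat i from 1 to 5
        debug(udec(i), udec(i * i))               ' show i and i*i in unsigned decimal
        waitms(250)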
  • cgracey wrote: »
    Oh, sorry, that file spin2_debugger.spin2 is not meant to be run - it's the actual debugger, in case anyone wants to see it.

    Run the debugger_demo.spin2 program, instead. And then add DEBUG statements to the program(s) you are working on.

    Gotcha.
  • Rayman Posts: 14,640
    edited 2020-07-16 01:11
    I do like the new debugger; it's more or less an improvement on sending info out on the serial port to see what's going on.

    Kind of like how I used fullduplex serial on P1 to output diagnostic info.

    Great things here are that it works in both Spin and assembly, and that you can turn it off just by compiling with it off.

    Chip built his terminal for speed, but I think I’d rather just use PST.

    Do need that delay so I have time to open that up, though...
  • JonnyMac Posts: 9,102
    edited 2020-07-16 03:45
    Agreed; I have many P1 projects where I did formatted serial output from a background Spin cog for debugging. On more than one occasion I surprised clients when they called to ask if there was any way they could monitor what was going on.

    Long term, I think it would be neat if we could tell Propeller Tool which terminal app we'd like to use (PST is the default, of course).
  • Rayman Posts: 14,640
    Is the baud for the debugger fixed?
  • cgracey Posts: 14,151
    I just posted a new .zip at the top of this thread which has an uncompressed PNut_v34ta.exe file. It seems the .exe compressor I was using was causing alarms to go off.
  • Hi,
    I have seen the video of the debugger.
    It might be useful if there were a possibility to include the source file name and the line number in the debug(...) so that you can see where the information is coming from.
    Just a suggestion.
    Best regards Christof
  • @cgracey Once output data from a debug method scrolls off the PC screen, is it non-recoverable? If true, will there be a pause method (then continue by space bar)?
  • cgracey Posts: 14,151
    Hi,
    I have seen the video of the debugger.
    It might be useful if there were a possibility to include the source file name and the line number in the debug(...) so that you can see where the information is coming from.
    Just a suggestion.
    Best regards Christof

    I think you would only have a few current DEBUG statements in your program at a time, to keep the message count down. I could add filename and line#, but I think it would be superfluous in practice. You'll probably know exactly where your messages should be coming from as you work.
  • cgracey Posts: 14,151
    kg1 wrote: »
    @cgracey Once output data from a debug method scrolls off the PC screen, is it non-recoverable? If true, will there be a pause method (then continue by space bar)?

    Yes, at this time, it's non-recoverable. It's very simplistic. You can read, but not copy, and there's no history. You can always set up a real terminal program to capture all the DEBUG messages, though.
  • cgracey Posts: 14,151
    edited 2020-07-19 12:45
    I just posted v34u at the top of this thread. Documentation is updated, too.

    - Pnut_v34u.exe is not packed, so it should not trigger virus scanners.
    - The long-standing PASM hub-exec addressing bug is fixed.
    - Spin2 expressions can now get register/LUT addresses defined under ORG by using #reg_or_LUT_symbol.
    - DEBUG improved with new CON-symbol sensitivities: DEBUG_DELAY, DEBUG_PIN, and DEBUG_TIMESTAMP
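
    A hedged sketch of how those new CON symbols might be used - the values and exact semantics here are assumptions, so check the linked documentation for the real definitions:

    CON
      _clkfreq        = 20_000_000
      DEBUG_DELAY     = 2_000                     ' assumed: delay before the first DEBUG output, giving you time to open a terminal
      DEBUG_PIN       = 62                        ' assumed: the pin DEBUG transmits on
      DEBUG_TIMESTAMP = 1                         ' assumed: defining this enables the timestamps shown below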

    Here are timestamped DEBUG messages:

    DEBUG_timestamp.png
  • Installs with no problem on WIN7 Pro, but WIN10 Pro still complains about the .exe; I was able to "Run Anyway", though.
  • cgracey Posts: 14,151
    edited 2020-07-19 13:01
    Publison wrote: »
    Installs with no problem on WIN7 Pro, but WIN10 Pro still complains about the .exe; I was able to "Run Anyway", though.

    Thanks for running it. I tried many variations of the .exe on the www.virustotal.com site and Kaspersky had a problem with everything I gave it. I don't know what it would take to make a clean .exe.