Memory drivers for P2 - PSRAM/SRAM/HyperRAM (was HyperRAM driver for P2) - Page 3 — Parallax Forums


Comments

  • rogloh Posts: 5,791
    edited 2020-03-21 10:17
    Here are some of the different data formats this external HyperRAM/Flash driver uses for servicing requests from client COGs. Most of the fields should be reasonably self-explanatory. This extends/replaces the earlier two-mailbox-long proposal I originally posted, and now includes enhanced capabilities for fills, arbitrary-length transfers, graphics/bank-to-bank copying, and request lists.

    You can make requests using the per-COG mailbox, which has up to 3 consecutive longs. The format of the first mailbox long is the same as in the earlier proposal. The top 4 bits still indicate the request type and the next four bits indicate the bank; the bank bits can also effectively be shared with upper address bits, allowing devices larger than 16MB to be supported.

    For individual memory reads (byte/word/long sizes) and list requests you only need to write a single mailbox long to initiate the request, while the other read/write transfer bursts and writes/fills need all 3 mailbox longs written. There are some other control formats not shown here, for accessing HyperRAM registers and other driver configuration settings, that also use bank 15; their specific control values may still be subject to change.
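    As an illustration of the first-long layout described above (the field positions come from this post, but the request-type codes and the helper itself are hypothetical, not the driver's actual values), packing could look like this:

```python
def pack_mailbox_long(req_type, bank, addr):
    """Pack the first mailbox long: top 4 bits = request type, next 4 bits
    = bank (these can effectively be shared with upper address bits for
    devices larger than 16MB), remaining 24 bits = address.
    Request-type codes here are placeholders, not the driver's values."""
    assert 0 <= req_type < 16 and 0 <= bank < 16
    return (req_type << 28) | (bank << 24) | (addr & 0xFFFFFF)

# e.g. a hypothetical request type 1 to bank 2, offset $1000
cmd = pack_mailbox_long(1, 2, 0x1000)   # -> $12001000
```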

    Request lists are stored in Hub RAM and their processing is triggered when the special start of request list command is detected in the first mailbox long, which also points to the start of the list items.

    Each request list item read from Hub RAM can be either normal length (4 longs) or extended length (8 longs), as indicated by the MSB of the 4th long in the item. Both item formats contain a link pointer to the next list item in Hub RAM; this pointer is set to zero to end the list. Though not shown here explicitly, the remaining individual read or write/fill requests can also be included in the list by using the same 3-long mailbox format shown in the table, followed by a next-link field as the 4th long. For my own sanity I have prevented request lists from recursively linking to new lists. :smile:

    The request list will be executed until it finishes or an error is detected (e.g. invalid bank), and completion is then notified back to the requesting COG via the mailbox and/or a COGATN. Transfer bursts are broken up according to the bank and per-COG limits when accessing the HyperRAM bus; this helps meet HyperRAM refresh requirements and limits latency for the prioritized requests.
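    The burst fragmentation described here can be sketched as follows (illustrative only; the real driver applies these limits in PASM, and the limit values are examples):

```python
def fragment_burst(total_bytes, cog_limit, bank_limit):
    """Split one transfer request into fragments no larger than the smaller
    of the per-COG and per-bank burst limits, so CS never stays low longer
    than the bank's refresh budget allows and other COGs can be polled."""
    limit = min(cog_limit, bank_limit)
    offset = 0
    while offset < total_bytes:
        chunk = min(limit, total_bytes - offset)
        yield offset, chunk
        offset += chunk

# e.g. a 1000-byte request with a 256-byte COG limit and 512-byte bank limit
frags = list(fragment_burst(1000, 256, 512))   # 256 + 256 + 256 + 232 bytes
```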

    The extended-length list item format now allows special graphics copies and/or bank-to-bank transfers using Hub RAM as an intermediate buffer. You can optionally set up the scan-line pitch to be applied to the Source and/or Destination external memory/Hub RAM addresses after each scan-line portion is transferred (the scan-line width portion is defined by the Hub Address size field). The two offsets are independent, which allows the greatest flexibility for graphics copy data packing/unpacking.
    [Attachment: requests.png, 860 x 871, 137K]
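    The scan-line copy an extended item describes can be sketched like this (a pure illustration with hypothetical names; the real driver stages each line through Hub RAM using its burst read and write primitives):

```python
def graphics_copy(read_burst, write_burst, src, dst, line_bytes, lines,
                  src_pitch, dst_pitch, hub_buf):
    """Copy 'lines' scan lines of 'line_bytes' each from src to dst in
    external memory, staging each line through a hub buffer. Independent
    source and destination pitches allow packing/unpacking while copying."""
    for _ in range(lines):
        read_burst(src, hub_buf, line_bytes)     # external memory -> hub
        write_burst(dst, hub_buf, line_bytes)    # hub -> external memory
        src += src_pitch                         # advance by source pitch
        dst += dst_pitch                         # advance by destination pitch
```

With the sample list later in this thread (8 bytes per line, 3 lines, 640-byte pitches) this produces three read+write pairs, matching the analyser capture.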
  • TonyB_ wrote: »
    rogloh wrote: »
    Update:
    By the way, the COGRAM usage is up to 489 longs and the HUB RAM is now 491 longs.

    LUT RAM 491?

    Yep, you got me. I just fixed it in the original.
  • rogloh Posts: 5,791
    edited 2020-03-24 09:50
    I've fixed a couple more bugs causing code path lockups and made more testing progress. Nothing majorly exciting to announce yet, but a fair bit more code is running now and I was finally able to start getting my list processing working. Here's a two-item list being processed by my driver. This sample list does two things: first a 3 scan line graphics copy from one external memory address to another, then a separate 40 word fill operation before the list finishes and the COG is notified of completion.
    list        long    mem#REQ_READBURST + address
                long    dumpbuf        ' intermediate hub buffer
                long    8              ' number of bytes to copy (pixels) 
                long    mem#REQ_WRITEBURST + address + 1024
                long    3              ' number of scan lines to copy
                long    640            ' src scan line length
                long    640            ' dest scan line length
                long    entry2
    
    entry2      long    mem#REQ_WRITEWORD + address
                long    $aa55
                long    40             ' number of writes in this fill
                long    0              ' end of list
    

    This is the capture from the logic analyser showing the bus transfers from this list. We first see 3 pairs of read + write bursts (one per scan-line copy). You can see a small gap in the address/data phase for the reads, which comes in handy for identifying them as reads during debug. At the end is the larger 40-word fill operation. Ignore the RWDS level (I'm using it as a debug signal right now on reads) as well as the timing scale, which is very slow because I am underclocking the P2 to capture this on a slow logic analyser.

    [Attachment: copy.png]

    I'm happy this sequence is finally working - it has actually been rather hard work getting to this point for me because of putting in the time over many small sessions instead of solidly for a few days straight. I'll soon be testing register accesses then will be able to add back the real HW again for final testing.

    You can see above that doing small transfers is going to be inefficient on the HyperRAM bus. It works better when you use much larger transfer bursts. Here's another example copying 200 bytes worth of pixels at a time which will improve performance significantly.
    [Attachment: copy2.png, 878 x 132, 32K]
  • jmg Posts: 15,173
    rogloh wrote: »
    You can see above that doing small transfers is going to be inefficient on the HyperRAM bus. It works better when you use much larger transfer bursts. Here's another example copying 200 bytes worth of pixels at a time which will improve performance significantly.
    Is it proving ok in practical use to have CS low for longer than the spec max ?

  • evanh Posts: 15,921
    edited 2020-03-26 03:26
    My understanding is yes it'll be practical and even robust if managed. That means some sort of accounting in the driver to estimate the hardware refresh progress.

    What I think happens is the hardware tracks ideal rate of row refreshing with an ideal row number as well as an actual row to be refreshed. And will double pace the refreshes until the actual has caught up with ideal.

  • rogloh Posts: 5,791
    edited 2020-03-26 06:19
    jmg wrote: »
    Is it proving ok in practical use to have CS low for longer than the spec max ?
    Well, I am not operating this slowly with any HW fitted at this point in time. The capture timing above was only for debugging some complex software sequences with a slow logic analyser, without actual HyperRAM HW and using an underclocked P2. In general I don't intend to exceed the CS low time specs in my own applications with this driver once I speed it back up and re-fit the real HW. However, the driver is flexible: if you choose to, you will be able to configure it to use larger burst sizes that exceed the HyperRAM limit, and set up its internal registers to activate longer refresh intervals to experiment with its behaviour.

    As of now this driver has 2 programmable burst size limits in bytes, which I use together to fragment the bursts:
    1) a per COG burst limit which can be enforced to allow video QoS with priority COG polling etc,
    2) a per bank burst limit to support the HyperRAM refresh limitations. This also scales in the driver depending on Sysclk/1 or Sysclk/2 operation so it relates back to a time interval.

    Up to 65535 bytes can be set up for either limit (due to the 16-bit size limit of $FFFF in the streamer command).

    Each HyperRAM memory bank will probably initially just be set up to keep to the 4us default minus overheads, but this can be adjusted afterwards via an API when desired. I already need this per-bank setting anyway, as HyperFlash is not limited to the same 4us and its burst size can be opened up to improve transfer performance.

    The actual bank burst limit value assumes sysclk/1 operation, but the writes (and optionally reads) are done at sysclk/2 and the burst scales down, so the observed transfer lengths will be half of the configured value when operating at sysclk/2. The COG limit applies differently, and this complicates things because it is applied independently of the transfer speed. Perhaps I will need to scale the COG limit as well, but I'd rather not.
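    A worked example of that scaling (numbers illustrative): the bank limit is configured in bytes assuming sysclk/1, so at sysclk/2 the observed transfer length halves, keeping the CS-low time constant:

```python
def effective_burst_bytes(bank_limit_bytes, sysclk_div):
    """Bank limits are configured assuming sysclk/1; at sysclk/2 each byte
    takes twice as long on the bus, so the byte count is halved to keep the
    CS-low *time* (and thus the refresh budget) the same."""
    return bank_limit_bytes // sysclk_div

# e.g. an 800-byte limit at sysclk/1 becomes 400 observed bytes at sysclk/2
```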

    Yesterday I was testing the register reads and the code path seemed to be working on the analyser. Today I am looking at the register writes and combining some of its code with register reads to share more of the tight LUT space using EXECF. Things are proceeding well now and it still looks like the code should all fit.
  • evanh wrote: »
    My understanding is yes it'll be practical and even robust if managed. That means some sort of accounting in the driver to estimate the hardware refresh progress.

    What I think happens is the hardware tracks ideal rate of row refreshing with an ideal row number as well as an actual row to be refreshed. And will double pace the refreshes until the actual has caught up with ideal.

    At this time this driver does not do any refreshes and relies on self-refresh of the HyperRAMs. Maybe at some stage this can be re-examined to go do some refreshes periodically but it's probably going to be hard to fit in the code space left (unless it can be run from hub exec perhaps). Also with multiple independent banks supported it could get tricky to manage, particularly if the banks are of different sizes which is allowed by this driver.
  • evanh Posts: 15,921
    edited 2020-03-26 08:08
    My description is still entirely using the self-refresh hardware. The idea is just to keep a tally of refresh starvation, from the long burst transfers, with the aim of balancing that with breaks to allow refresh auto-replenishment at double rate.

  • evanh Posts: 15,921
    edited 2020-03-26 09:30
    I base my assumption of the hyperRAM hardware self-refresh capabilities off this statement in Distributed Refresh Interval section of a Cypress datasheet:
    Because tCSM is set to half the required distributed refresh interval, any series of maximum length host accesses that delay refresh operations will catch up on refresh operations at twice the rate required by the refresh interval divided by the number of rows.

    I read that as saying it can catch up even after a large amount of refresh starvation. To me that means it'll catch up on anything that doesn't roll over the 64 ms whole-array interval.

  • evanh Posts: 15,921
    edited 2020-03-26 09:51
    Or it maybe just goes at double the specified rate all the time ... with host bursts longer than tCSM effectively whittling that down.

    However you look at it, the end result is, for each 64 ms interval, you have available 32 ms of uninterrupted bursting plus 8192 individual tCSM short bursts.
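    Those numbers follow directly from the datasheet figures (assuming the usual 64 ms whole-array refresh interval and 8192 rows):

```python
refresh_interval_ms = 64.0                        # whole-array refresh interval
rows = 8192                                       # rows refreshed per interval
per_row_us = refresh_interval_ms * 1000 / rows    # distributed interval: 7.8125 us/row
tcsm_us = per_row_us / 2                          # tCSM is half of that: ~3.9 us (spec'd 4 us)
burst_headroom_ms = refresh_interval_ms / 2       # double-rate catch-up leaves 32 ms for bursting
```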
  • rogloh Posts: 5,791
    edited 2020-03-27 10:01
    evanh wrote: »
    Or it maybe just goes at double the specified rate all the time ... with host bursts longer than tCSM effectively whittling that down.
    That's how I thought it worked but it is all internal to the device so I might be wrong.

    In the case of video, fragmenting the non-video COGs' bursts is pretty much going to be required anyway, so the benefit of bursts longer than 4us really only applies to the video COG: for non-video COGs you can't take advantage of long bursts unless the driver knows in advance that video access will be idle for an extended time (e.g. in vertical blanking intervals). I've already done some work here to reduce the throughput loss from fragmenting video burst requests, by allowing any priority video COG to continue using the bus without yielding back to the poller. It still fragments the transfer bursts for video, but it helps performance quite a lot by avoiding the per-request and polling overhead incurred by each fragment.

    UPDATE:
    Today I observed my register write code path functioning on the Logic Analyzer, as well as the HyperFlash word and burst writes which also used the same zero latency write primitive operation.

    It looks like the Cypress HyperFlash burst writes will need to be limited to 512 bytes or below, and fragmenting them using my normal COG and per-bank limit approach is probably a no-no given how the flash write sequences actually work. Thankfully, the burst write sizes sent in the original requests can also be reduced at the application level to prevent fragmentation occurring.

    I can foresee that if applications choose to write flash using the largest 512-byte burst sizes it may impact video latency when some high-resolution video modes are operating. For now I've hard limited flash burst writes to 512 bytes using a CON-defined constant only; this parameter may need to become configurable at startup if other flash devices use different limits. We'll have to see how well these HyperFlash burst numbers play out in the real world once we get that far. I've not yet tried any HyperFlash access but want to soon.
  • rogloh Posts: 5,791
    edited 2020-03-31 03:33
    Yesterday I fitted the real Parallax HyperRAM/HyperFlash module board to the P2-EVAL and started testing with my code. I've found that the HyperRAM part seems to be entering a deep-sleep mode after bootup at different times. When it doesn't do this I found I can read/write actual data from it; otherwise I only read zeroes. I think the startup reset code is somehow contributing to this issue (and it varies with P2 frequency), but at least the device is returning control register data that looks somewhat meaningful and valid for the ISSI part.
    Reading IR0 Reg result = $0D83
    Reading IR1 Reg result = $0000
    Reading CR0 Reg result = $0F1F
    Reading CR1 Reg result = $0002
    

    I might try forcing the deep-sleep mode off at startup time via register write as well to see if this helps. I don't recall this same issue with my old code, but the reset logic has probably changed with this new driver, so I'll need to check into that as well.
  • evanh Posts: 15,921
    rogloh wrote: »
    Yesterday I fitted the real Parallax HyperRAM/HyperFlash module board to the P2-EVAL ...
    What were you using before?

  • whicker Posts: 749
    edited 2020-03-31 03:56
    I know I'm just stabbing in the dark, but when writing to the config register I recall there being 4 bits that must be set to 1 (bits 11 to 8 in CR0).

    They don't seem to read back as all 1s, so those 1s must be OR'd in, during a read modify write situation.

    Also when writing to CR0, of course bit 15 must be OR'd to 1 as well. (0 is power down)
  • evanh wrote: »
    rogloh wrote: »
    Yesterday I fitted the real Parallax HyperRAM/HyperFlash module board to the P2-EVAL ...
    What were you using before?

    Well, for much of the new driver development, testing basic pin IO and request sequencing, just a logic analyzer. A previous driver of mine used the original HyperRAM board and worked well, but this new software driver uses different optimised control logic which is yet to be fully tested with real HW.

    It looks promising though; it actually managed to read then write a burst yesterday, but the startup seems a bit unreliable. :smile:


  • evanh Posts: 15,921
    From ISSI datasheet:
    5.2.1.6 Deep Power Down
    Deep Power Down Mode is not supported in an 128Mb MCP device.

    But at top of same datasheet under differences with stacked die package:
    Deep Power Down mode by CR write supports for only 1 die only ( either bottom or top die)
    It is selected by CA35 staus when Deep Power Down operation is executed.
  • rogloh Posts: 5,791
    edited 2020-03-31 04:23
    whicker wrote: »
    I know I'm just stabbing at the dark, but when writing to the config register i recall there being 4 bits that must be set to 1. (Bits 11 to 8 in CR0)

    They don't seem to read back as all 1s, so those 1s must be OR'd in, during a read modify write situation.

    Also when writing to CR0, of course bit 15 must be OR'd to 1 as well. (0 is power down)


    Yeah, I want to write this register to see what happens. Interestingly, I was reading the newer data sheet, which doesn't mention the two dies in the part (MCP). When I went back to the original data sheet from ISSI for the actual HyperRAM part fitted on the Parallax board, i.e. this one:
    http://www.issi.com/WW/pdf/66-67WVH16M8ALL-BLL.pdf

    I notice it doesn't talk about deep sleep mode the same way or define bit 15 of CR0 other than saying it is reserved.
  • rogloh Posts: 5,791
    edited 2020-03-31 04:26
    evanh wrote: »
    From ISSI datasheet:
    5.2.1.6 Deep Power Down
    Deep Power Down Mode is not supported in an 128Mb MCP device.

    But at top of same datasheet under differences with stacked die package:
    Deep Power Down mode by CR write supports for only 1 die only ( either bottom or top die)
    It is selected by CA35 staus when Deep Power Down operation is executed.

    Hmm okay. The weird thing is I am still not 100% sure how it is getting into this state, given I drive the reset pin high for 150us (it gets pulled low on the HyperRAM board), then low for 3us, then wait another 150us before any access, all with CS = high. Then I do the reads of the config registers and get the above results. No writes were done in the meantime, so nothing should be clobbered after reset.
  • rogloh Posts: 5,791
    edited 2020-04-02 05:15
    The zero in bit 15 of CR0 seems to be a red herring; it just appears to return 0 in this register from bootup. Maybe someone else with this same HyperRAM board and their own driver can confirm this result. But the RAM still seems to function even with this bit left intact (and I'm not writing anything to this register yet).

    I found an issue with the polling of the RWDS line at higher speeds. I don't think I was giving it enough time between lowering CS and polling the RWDS pin to check for the latency doubling, resulting in some pretty weird behavior. The internal P2 GPIO latency makes this issue worse. On a 200MHz P2, adding an extra NOP eliminated it, but I don't like the NOP and if possible need to find some useful work to do there. Ideally two or more instructions, to add further safety margin and allow higher P2 speeds.
                                drvl    cspin
                                drvl    datapins                'enable the DATA bus
    
                                setbyte addrhi, rdcommand, #0   'setup burst read command
                                movbyts addrhi, #%%1230         'reverse byte order to be sent on pins
                                movbyts addrlo, #%%3201         'reverse byte order to be sent on pins
                                nop                             'extra time to test RWDS pin level after CS goes low
    
                                testp   rwdspin wz              'check RWDS pin for latency
                if_z            shl     latency, #1             'double latency edges if RWDS is high
    
    
  • Actually, prior to dropping the CS pin low on a HyperRAM read transfer I do the work below to reset the CLK pin to transition output mode and set things back to sysclk/2 operation for the address write phase (it could still have been left at sysclk/1 during a prior data read phase). I wonder if there is a safe way to do this work after making CS=low so I can avoid the NOP; I don't want floating clocks with CS=low. I vaguely recall discussing this before in another post and need to dig that up to see what the outcome was. If the CLK pin can safely remain low during this work and not float, I think the code can be moved after the "drvl cspin" instruction to create a larger delay before I poll RWDS. That way I do something useful instead of NOP(s).
                                fltl    clkpin                  'disable Smartpin clock output mode
                                wrpin   #%1_00101_0, clkpin     'set into Smartpin transition output mode
                                wxpin   #2, clkpin              'configure for 2 clocks between transitions
                                drvl    clkpin                  'enable Smartpin   
    
    
  • evanh Posts: 15,921
    edited 2020-04-02 05:46
    Assuming that smartpin is repeatedly being used in same "transition" mode ... you don't need to keep reissuing the WRPIN at all. In fact, as long as any prior WYPIN has completed its steps, then I think even DIRL/H is only useful for synchronising the start of X "base period" ... and, conveniently, changing from X=1 to X=2 should be clean. And going back to X=1 self-corrects as well. So, no need for repeated DIRL/H pairs either.

    Only the WXPIN is needed and that can be any time after the prior WYPIN has completed all its step transitions.

  • rogloh Posts: 5,791
    edited 2020-04-02 06:30
    Ok I already had a feeling some of this setup work could be optimised out. I originally expected I might need to resync the start of the base period so that my actual WYPIN clock re-start and streamer commands that follows later (always some multiple of 2 clocks per intermediate instruction executed) can then be aligned as I need and not ever get out of phase. But maybe you are right there and it is not required if X was 1 and I now want it to be 2. I'll have to experiment a bit more...

    Update: @evanh one fear I still have is that if the clock transition setting X=2 was already active from the prior state and the P2 instruction phase is now out of step with it (due to something like doing a waitx #3 or other wait command that can change the phase to be on another P2 clock boundary), then without the fltl instruction I won't be able to reset its phase back to what I want. Or maybe I need to go through setting X=1 first, then X=2. I wonder if this can be done?
  • evanh Posts: 15,921
    edited 2020-04-02 09:26
    Yeah, that's all part of getting the phasing right. Off-by-ones are annoying when having to accommodate them but the important thing is it stays consistently set even if instructions outside the critical sequence are changed, and it will be consistent as far as I can tell.
  • Did a bit more on this today. I think I have both the sysclk/2 and sysclk/1 read rate configurations actually working now with real HW. You can now assign each bank its own delay and a registered-vs-direct input data bus setting, which allows fine-tuning operation over a frequency range for each device, and which you can change if the P2 frequency is ever altered dynamically after startup, so the driver can continue to operate. This was a bit tricky to get right but it looks more stable now. Still testing it with real HyperRAM.

    This selectability does cost a few more instructions in the setup path, but I found I could also eliminate the clock pin related WRPIN instruction above and make use of the extra NOP time to do something useful, which helped pay for some of this extra overhead. The extra WRPIN I now need to choose between registered/non-registered operation moved later, into a time where I am doing a WAITX, so I can overlap it there and just reduce the WAITX delay time by 2 clocks.

    One thing I found is that the delay needs to change (by one clock cycle) depending on whether we run the data transfer phase at sysclk/2 or sysclk/1. I think this is because the WXPIN timing change required after the address phase is quantised to 2 clocks for sysclk/2, but only one clock for sysclk/1. This makes the data appear one cycle sooner for sysclk/1 operation once the clock is started, shrinking the read delay accordingly. At 200MHz I found the delay value needed once you start the clock output with WYPIN is 3 clock cycles for sysclk/1 (effectively 5 clocks if you exclude the overlapped WRPIN instruction in the code below). For sysclk/2 reading it is 4 clocks, or 6 if you exclude the WRPIN.
                                wxpin   clockdiv, clkpin        'adjust transition delay to # clocks
                                setxfrq xfreqr                  'setup streamer frequency
                                wypin   clks, clkpin            'setup number of transfer clocks
                                wrpin   registered, datapins    'setup data bus inputs as registered or not
                                waitx   delay                   'tuning delay
                                xinit   xrecv, #0               'start data transfer
    
  • evanh Posts: 15,921
    edited 2020-04-06 07:25
    Hurm, Brian's chart doesn't hit 200 MHz with registered data pins. Are you registering the clock pin too? https://forums.parallax.com/discussion/comment/1482076/#Comment_1482076

    EDIT: Hmm, that doesn't work either. I've just found my own testing from the next day after Brian did his:
    Room temperature, about 24 oC
    
    $$$$$$$$$$$$$$$$$$$$--------------------------------------------------------------- 001 to 098 MHz
    -----------------------$$$$$$$$$$$$$$$$$$$$$$-------------------------------------- 112 to 196 MHz
    -----------------------------------------------------$$$$$$$$$$$$$$$$-------------- 233 to 289 MHz
              reg'd HR_Clk          reg'd HR_Dat
    00000000000000000000111111111111111111111111122222222222222222222222223333333333333
    22233444556667788899000112223344455666778889900011222334445566677888990001122233444
    04826048260482604826048260482604826048260482604826048260482604826048260482604826048
              reg'd HR_Clk        unreg'd HR_Dat
    $$$$$$$$$$$$$$$$------------------------------------------------------------------- 001 to 080 MHz
    ------------------$$$$$$$$$$$$$$$$$$----------------------------------------------- 090 to 162 MHz
    ----------------------------------------$$$$$$$$$$$$$$$$--------------------------- 180 to 241 MHz
    ----------------------------------------------------------------$$$$$$$$$$$-------- 276 to 317 MHz
    
    
    
    $$$$$$$$$$$$$$$$$$$---------------------------------------------------------------- 001 to 094 MHz
    ----------------------$$$$$$$$$$$$$$$$$$$$$---------------------------------------- 107 to 188 MHz
    --------------------------------------------------$$$$$$$$$$$$$$$------------------ 221 to 277 MHz
            unreg'd HR_Clk          reg'd HR_Dat
    00000000000000000000111111111111111111111111122222222222222222222222223333333333333
    22233444556667788899000112223344455666778889900011222334445566677888990001122233444
    04826048260482604826048260482604826048260482604826048260482604826048260482604826048
            unreg'd HR_Clk        unreg'd HR_Dat
    $$$$$$$$$$$$$$$-------------------------------------------------------------------- 001 to 078 MHz
    -----------------$$$$$$$$$$$$$$$$$$------------------------------------------------ 087 to 156 MHz
    --------------------------------------$$$$$$$$$$$$$$$$----------------------------- 173 to 233 MHz
    -------------------------------------------------------------$$$$$$$$$$$$---------- 264 to 308 MHz
    
    

    There isn't any combination of registered data pins that fits 200 MHz sysclock.

    EDIT2: The band limits move up a little at lower temperature. 200 MHz would work with registered clock and data pins at 0 degC.

  • Well, the 200MHz data phase operation was without registered pins enabled. The "registered" variable above actually gets set to either 0 or the other value below, by flipping bit 16, and is configurable per bank. We can set this value (and the delay) with a table, for example. I'm not registering the clock output at the moment, but I keep the data pin output registered during the address phase (always sysclk/2) and slip it by 1 clock cycle with a waitx #1 to create the 90 degree clock phase difference that I want. Unfortunately I don't have a high bandwidth scope to measure the exact phase between clock and data pins, so I can't be 100% sure it is perfectly centered when the clock output is not registered and the data output is. It seems to be enough to work at this rate, however; it could need tweaking at other rates.
    registered      long    %100_000_000_00_00000_0 'setup clocked input pins
    
  • rogloh Posts: 5,791
    edited 2020-04-06 07:33
    I'm not 100% sure that registering the output side makes much difference. So far I've not seen it, but there might be something there at higher clocks perhaps, and as I mentioned I don't have a good scope to test this.

    Update: Just saw your update @evanh, ok then so maybe the output clock registering does have a slight impact at higher clocks at some boundaries according to your test results.

    I think the original results ozpropdev came up with only ever adjusted the data bus pins, never the clock, and from what I recall there was full frequency overlap when the input bus alternated registered/non-registered and the delay was varied. But I think you may have gone much further here.
  • evanh Posts: 15,921
    Ah, good. All makes sense with data unregistered.
  • evanh Posts: 15,921
    edited 2020-04-06 08:02
    Here's a couple of read results from February this year when I was retesting HR writes at sysclock/1 with a capacitor. HR writes worked all the way up to 354 MHz with clock pin unregistered and 22 pF capacitor attached to P24 at the accessory header of the Prop2 Eval Board revB.

    Frequency bands for HR Read Data, room temperature, data pins P16-P23, clock pin P24:
    1-96 MHz, 112-193 MHz, 232-288 MHz: All registered pins, no capacitor.
    1-87 MHz, 107-174 MHz, 217-266 MHz: All registered pins, 22 pF capacitor.

    I didn't test HR reads with clock pin unregistered and capacitor attached, but that can roughly be derived as similarly scaled-down bands, i.e. take 20 MHz off the top and 10 MHz off the bottom.

  • evanh Posts: 15,921
    edited 2020-04-06 08:07
    So, unreg'd HR_Dat and unreg'd HR_Clk, with 22 pF cap, will be about: 1-68 MHz, 77 to 136 MHz, 153-213 MHz, 244-288 MHz.