USB/uSD/Hyperram/Flash/Camera test board for P123


Comments

  • Rayman wrote: »
    I doubt I'll be able to get to this anytime soon...

    Maybe Chip can eye-ball the timing notes above, and see what the streamer can do ?
    It's more a streamer question, than a HyperRAM question.
  • jmg
    Rayman wrote: »
    Got the HyperRam chip talking! Took me an hour...
    ....
    Had to give a lot more latency clocks than I thought before data came out...

    Curious if you did any more work on HyperRAM ?
    eg on the working code, did you try removing the NOP packers, to check it still worked ?
    Did you check just burst playback, as in no tCSM fragmenting, and instead relying on read-scan-repeat inside 64ms ?
    If that works, then P1 + HyperRAM may even be practical.

    Did you check manual refresh ?

    I got some info from Cypress that suggests refresh can be as short as tRWR+tACC (40+40ns) and that row-read is done on load of CA23:16 edge.
    (which is why the timing diagrams pivot around that point)

    Actual data to pins is not so important, nor is finer address info, what is needed is the row-read, and then restore refresh. ( CS _/= )
    They were less clear on minimum possible edges, but the CA23:16 point needs 4 edges, so it may be that 4 edges + CS cycle is enough, with Row++ ?

    I make that about 12 lines of DJNZ code, with a timer-generated clock running in parallel. If this works, it means even a P1 can refresh in
    12*8192/20M = 4.9152ms, or ~159 display lines (or proportionally less, for less refresh coverage).
    On the display I'm looking at, up to 255 lines of Frame Flyback are allowed.
    Refresh is only needed on not-presently-displayed frames, so less than the whole 64Mb can be refreshed to save time.

    On another note, I see Mouser lists a good price for S27KS0641DPBHA020: $1.86/1014+
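As an aside, the refresh-budget arithmetic above can be sanity-checked with a few lines of Python (a sketch: the 12-instruction loop, 8192 rows and 20 MIPS come from the post; the ~31 us display line period is my assumption, back-derived from the quoted ~159-line figure):

```python
# Back-of-envelope check of the software-refresh budget quoted above.
INSTRUCTIONS_PER_ROW = 12      # DJNZ loop length, per the post
ROWS = 8192                    # rows in the 64Mbit part
IPS = 20_000_000               # P1 at ~20 MIPS (80 MHz / 4 clocks per instruction)

full_refresh_s = INSTRUCTIONS_PER_ROW * ROWS / IPS
print(f"full-array refresh: {full_refresh_s * 1e3:.4f} ms")  # -> 4.9152 ms

LINE_TIME_S = 31e-6            # assumed display line period (my estimate)
lines = full_refresh_s / LINE_TIME_S
print(f"about {lines:.0f} display lines")                    # -> about 159
```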
  • Hi JMG

    Unless there's a mistake in the part number advertised by Mouser, then according to the Cypress Ordering Information part-number breakdown on page 50 of the Cypress datasheet, the device S27KS0641DPBHA020 stands for a 166MHz @ 1.8V part, thus unsuitable for the P2's 3.3V I/O.

    The equivalent 100MHz @ 3.3V device would be S27KL0641DABH020, which, sadly, isn't available for sale, at least within Google's reach...

    The nearest 3.3V equivalent I could find for sale at Mouser, at $2.85 (min. qty. 2500), is ISSI's IS66WVH8M8BLL-100B1LI, a 24-ball BGA part (the "B1" segment of the part number) that lacks ball #1 (VSS), according to the ISSI datasheet.

    Henrique
  • jmg
    Yanomani wrote: »
    .. the device S27KS0641DPBHA020 stands for a 166MHz @ 1.8V part, thus unsuitable for the P2's 3.3V I/O...
    Oops, yes, everyone else uses "L" to mean the Low(er) voltage part, and I rather skim-read ...

    Checking again, that gives 3V KL parts at a lower price than the faster 1.8V parts:

    Arrow: S27KL0641DABHI020, DRAM DDR SDRAM 64Mbit 8Mx8 3V, Tray, Stock: 288, $1.6040

    Mouser: S27KL0641DABHI020, DRAM HyperRAM 3.0V 64Mb, Stock: 286, $1.6300/1000
  • Hi jmg

    Good finds! Perhaps Erco should start hoarding now! :lol:
    jmg wrote: »

    I got some info from Cypress that suggests refresh can be as short as tRWR+tACC (40+40ns) and that row-read is done on load of CA23:16 edge.
    (which is why the timing diagrams pivot around that point)

    This is true for a Memory Core Space READ/WRITE transaction that hits a Single Initial Latency Count cycle, meaning a cycle in which no extra time is required for a pending refresh.

    In these cycles, RWDS switches from its initial tri-state to LOW during the whole CA transfer portion (CA[47:40] through CA[7:0]) of the READ/WRITE cycle.

    Moreover, that is not the default operating behavior of the 64Mbit devices, and it's unavailable on the 128Mbit (dual-die stacked) parts.

    Both the 128Mbit and 64Mbit parts begin operating under Fixed 2-times Initial Latency Count rules after powering up or receiving a RESET pulse.

    The 64Mbit parts can also be programmed to operate under Variable Latency Count rules, but from a software standpoint this involves sensing the state of RWDS during the CA transfer portion and adjusting the clock count from the CA[23:16] pivot point to the first WORD to be written or read, if you want to reach the Memory Core Space data-transfer portion of the cycle.
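The variable-latency handshake described above can be sketched as a tiny helper: sample RWDS while the CA bytes go out, and double the initial latency count when it reads high (meaning a refresh collided with the access). This is a sketch; the function name and the default latency count of 6 are illustrative, not taken from any specific speed grade's datasheet:

```python
def initial_latency_clocks(rwds_during_ca: bool, latency_count: int = 6) -> int:
    """Number of CK cycles to wait after the CA[23:16] pivot point
    before the first data word, under variable-latency rules.

    rwds_during_ca: level sampled on RWDS while the CA bytes are
    shifted out. HIGH means a refresh is pending, so the device
    inserts a second initial-latency period.
    """
    return 2 * latency_count if rwds_during_ca else latency_count

print(initial_latency_clocks(False))  # no pending refresh -> 6
print(initial_latency_clocks(True))   # refresh collision  -> 12
```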

    There are tradeoffs as well as advantages in choosing to shorten that type of cycle, and you should use some discretion when choosing the one that best fits your needs, because:

    - IF you are doing a program-controlled (or supervised) refresh cycle, and operating under Linear Burst rules, it's wise to read the LAST WORD of a ROW AND the FIRST WORD of the following one in sequence, since this involves only two more CK cycles (up then down) before CS# goes HIGH.

    The net advantage of doing it that way is that both ROWS are automatically READ from the Memory Core Space, in sequence, and also automatically written back, in sequence, thus refreshing both of them at a cost of only two extra clock cycles before setting CS# high.

    Naturally this also applies when you are at the beginning of a ROW; it is wiser to begin at the end of the previously numbered one, reading its LAST WORD, then proceeding to the intended read operation in the next (and intended) row.

    The same holds when you are finishing a ROW: take your time and read the first WORD of the next one, thus refreshing both, for a budget of just one extra clock cycle.

    To use the same trick when writing to the Memory Core Space, you should craft a map/table, e.g. in the HUB, where you keep a copy of each pair of words in each long of the table.

    Then you could use the table's contents to get the right data that should remain in its true position when extending the first (and last) linear write burst into any ROW of interest.

    Henrique
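Henrique's row-pair trick boils down to starting a 2-word linear burst at the last word of a row, so the burst straddles the row boundary and both rows get opened (and therefore refreshed) in a single CS# cycle. A sketch of the address arithmetic, assuming 8192 rows of 512 16-bit words for the 64Mbit parts (the helper name is hypothetical):

```python
WORDS_PER_ROW = 512   # 16-bit words per row, 64Mbit part assumption
ROWS = 8192

def row_pair_burst_start(row: int) -> int:
    """Word address where a 2-word linear burst should begin so that
    it straddles the boundary between row `row` and row `row + 1`,
    refreshing both rows in one CS# cycle."""
    assert 0 <= row < ROWS - 1
    return row * WORDS_PER_ROW + (WORDS_PER_ROW - 1)

start = row_pair_burst_start(0)
# The burst touches the last word of row 0 and the first word of row 1:
print(start, start // WORDS_PER_ROW, (start + 1) // WORDS_PER_ROW)  # -> 511 0 1
```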
  • jmg
    Yanomani wrote: »
    ..
    - IF you are doing a program-controlled (or supervised) refresh cycle, and operating under Linear Burst rules, it's wise to read the LAST WORD of a ROW AND the FIRST WORD of the following one in sequence, since this involves only two more CK cycles (up then down) before CS# goes HIGH.
    The net advantage of doing it that way is that both ROWS are automatically READ from the Memory Core Space, in sequence, and also automatically written back, in sequence, thus refreshing both of them at a cost of only two extra clock cycles before setting CS# high.
    ..
    Yes, I had thought about doing that, should the clock-shortened approach not work.

    For software pin control, I think the number of machine cycles to do the [lastW+firstW] dummy reads will exceed 2x the shorter version, not to mention the larger code size, so that was plan B.

    Even with a P2 streamer, [CS_pulse + 2 x 32b write + X clocks] may take more than twice as long as [CS_pulse + 1 x 32b write + (t > 40ns)].

    The Cypress reply suggested it was OK to skip clocks after the tRWR, so I was keen to see if anyone with a connected HyperRAM can verify that.
