
New Hub Scheme For Next Chip


Comments

  • Tubular Posts: 4,702
    edited 2014-05-18 16:52
    Sorry, Lawson, missed your line saying pretty much the same thing.

    I was thinking more along the lines of keeping the FIFO direction 'the same', but muxing the source (either hub or pins INA) and the output (pins OUTA/DACs or hub, respectively). Maybe that is a 'reverse direction FIFO'; I don't spend enough time in that world to know.

    Good points regarding the pin bandwidth and ESD, and how to get a clock in.
  • potatohead Posts: 10,261
    edited 2014-05-18 16:55
    But, I don't think I'd want to give up ~30 pins for video when 4 pins can give you pretty much the same picture...

    That's where I've always been at on it.
  • tonyp12 Posts: 1,951
    edited 2014-05-18 17:12
    >30 pins for video when 4 pins can give you pretty much the same picture...
    HDMI: only 4-8 pins needed if you can make the P2 do the bit-serializing rather than relying on a 30-pin external serializer IC.

    Could a smart pin serializer work at 371 MHz (probably not at 180 nm)?
    If so, you could do 1280x720 at 30 Hz.
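
    (For what it's worth, the 371 figure checks out if you assume standard CEA-861 720p timing of 1650 x 750 total pixels per frame; a quick sanity check in C:)

    #include <stdio.h>

    /* Back-of-envelope check, assuming 1650 x 750 total pixels per frame
       (standard 720p blanking) at 30 Hz and 10 TMDS bits per pixel per lane. */
    int main(void) {
        double pixel_clock = 1650.0 * 750.0 * 30.0;      /* 37.125 MHz */
        double lane_rate   = pixel_clock * 10.0;         /* TMDS bit rate */
        printf("pixel clock   : %.3f MHz\n", pixel_clock / 1e6);
        printf("TMDS lane rate: %.2f Mb/s\n", lane_rate / 1e6);  /* 371.25 */
        return 0;
    }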
  • Tubular Posts: 4,702
    edited 2014-05-18 17:41
    There are also LCDs, such as the Pixel Qi, that take LVDS at a lower clock rate:
    http://forums.parallax.com/showthread.php/152729-LVDS-8b-10b-etc...?p=1231118&viewfull=1#post1231118

    If Chip implements the 'LUT sequencer' (which would always use 256 elements but step through a sequence of samples depending on the bit depth), we could use the LUT to do the bit encoding, spreading the data across 4 or 5 lanes. The LUT would let you cater for a wide range of encoding schemes.
  • jmg Posts: 15,173
    edited 2014-05-18 17:59
    tonyp12 wrote: »
    I'm not spending $6-$30 for an external chip/box when a few extra flip-flops or a P2 tuned to 250 MHz could give it to me for free.
    I would be upset if it turns out that the P2 gets just 95% of the way to 480p HDMI.

    Interesting idea, but it may be pushing the process?
    I found these figures in a Xilinx app note:
       Mode   Resolution          Pixel Rate (MHz)   TMDS Serial Rate (Mb/s)   Color Depth
       VGA    640x480  @ 60 Hz    25                 250                       24b
       480p   720x480  @ 60 Hz    27                 270                       24b
       SVGA   800x600  @ 60 Hz    40                 400                       24b
       XGA    1024x768 @ 60 Hz    65                 650                       24b
       HD     1366x768 @ 60 Hz    85.5               855                       24b
       WXGA   1280x800 @ 60 Hz    71                 710                       24b
    

    That suggests 4.00 ns and 3.70 ns bit periods, and a clock with 2 ns high / 2 ns low or 1.85 ns high/low, for the two slowest modes on that list. That sounds very quick for a 3.3 V, 180 nm I/O pin.

    Contrast the FIN1215, which lists at $1.562 (500+ quantity).
    Might be a better solution?
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-18 18:54
    I fear that VGA and DVI monitors are soon to be deprecated in favor of HDMI. I hate to see that happen, though, because 1080p is totally inadequate for CAD. (My current DVI LCD monitor is 1600 x 1200, and it's as big as I could get without spending a fortune.) NTSC, OTOH, still has some life in it, due to the simple interface and ubiquitous, cheap CCTV offerings (e.g. rear-view cameras/monitors). For these reasons, I would hate to see the next chip dependent upon VGA alone for quality video.

    -Phil
  • potatohead Posts: 10,261
    edited 2014-05-18 18:59
    It won't be.

    Really, the core difference between this one and the last design is that VGA is the common-denominator target. The other analog formats can be done, with PAL composite still in question. The other difference is no color management. That makes it harder to run the same data on lots of display types, but the current design targets 8-bit palette color; change that per display type, and it will all mostly work.
  • Tubular Posts: 4,702
    edited 2014-05-18 19:03
    jmg wrote: »
    That suggests 4.00 ns and 3.70 ns bit periods, and a clock with 2 ns high / 2 ns low or 1.85 ns high/low, for the two slowest modes on that list. That sounds very quick for a 3.3 V, 180 nm I/O pin.

    Contrast the FIN1215, which lists at $1.562 (500+ quantity).
    Might be a better solution?

    Interesting part. Wonder whether it could be used with the P1, if you accept a few pixels in a row having the same value, or gate the data in from different cogs on different phases.

    Regarding the transition times: if this is done through the DACs, to achieve a lower swing consistent with LVDS, would you expect faster performance than a full 3.3 V swing? Chip did quote a settling time on the DACs somewhere, ages ago.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-18 19:03
    potatohead wrote:
    Really, the core difference between this one and the last design is that VGA is the common-denominator target.

    And that's what I see as a possible issue. It's the same fear I had about a dedicated DRAM scheme. The external technology is changing faster than the expected design-in lifetime of the P2.

    -Phil
  • Tubular Posts: 4,702
    edited 2014-05-18 19:10
    And that's what I see as a possible issue. It's the same fear I had about a dedicated DRAM scheme. The external technology is changing faster than the expected design-in lifetime of the P2.

    -Phil

    Yes, go with generic blocks and hope for the best.

    I note the DIP8 version of the Winbond W25Q series (as plugged into the DE0 and DE2 Parallax boards) has gone end-of-life, with the last buy around July. It's still available in other formats such as SOIC8.
  • potatohead Posts: 10,261
    edited 2014-05-18 19:20
    Analog component video will be around for a long time. And at some point, adding a converter chip may make sense. Not a big deal.
  • jmg Posts: 15,173
    edited 2014-05-18 19:25
    Tubular wrote: »
    Regarding the transition times: if this is done through the DACs, to achieve a lower swing consistent with LVDS, would you expect faster performance than a full 3.3 V swing?
    Unlikely to help.

    Chip could ask OnSemi if they have any hardened IP in this process that could hit the lower set of numbers.
  • Rayman Posts: 14,643
    edited 2014-05-18 19:40
    Keep in mind that DisplayPort, HDMI, and DVI are really almost the same thing...
    The actual differences are the connector and the logo...
  • Tubular Posts: 4,702
    edited 2014-05-18 19:59
    Can anyone explain to me where 7-bit vs 10-bit serialization fits into the various standards? I believe DVI is always 10-bit; is that correct?

    The FIN1215 and at least some of the displays want 7-bit. 7 bits (150~200 MHz) across 3 or 4 lanes (plus a clock lane) seems relatively within reach.
  • mark Posts: 252
    edited 2014-05-18 20:13
    VGA might have some life left in it if monitors continue to support DVI-I. The good thing is, most of the DVI receiver chips stuffed in monitors support that mode, and I don't see that particularly changing until they dump DVI altogether for HDMI and DP.
    And that's what I see as a possible issue. It's the same fear I had about a dedicated DRAM scheme. The external technology is changing faster than the expected design-in lifetime of the P2.

    -Phil

    This is why I think being able to output the video stream in parallel (preferably in a variety of bit widths) is probably the most future-proof you'll be able to get. Sure, you then rely on external components, but at least the option to use them is there. The future of embedded displays is MIPI-DSI and eDP, while external displays will be primarily DVI/HDMI and DP.

    Rayman wrote: »
    Keep in mind that DisplayPort, HDMI, and DVI are really almost the same thing...
    The actual differences are the connector and the logo...

    HDMI also supports audio and DRM. But yeah, minimal HDMI is essentially DVI.
  • tonyp12 Posts: 1,951
    edited 2014-05-18 20:46
    >Can anyone explain to me where 7-bit vs 10-bit serialization
    >The FIN1215 and at least some of the displays want 7-bit.

    HDMI/DVI/DisplayPort wants 10 bits, at speeds of 250 Mbit/s to 1 Gbit/s+, on each of its RGB channels.
    The clock channel is not serialized, so its speed is the original 1/10th; the receiver has a PLL to recover the bit rate.

    The data bits are encoded XORed from bit to bit, so there is NO bit expansion; the payload is still 8 bits. But 2 bits are added to tell the receiver that (A) you used XNOR instead, and (B) you inverted the byte.

    These two are there to help with signal noise and DC balance, but at low resolution it could probably still work if you always kept the defaults.
    As the rules for when to use XNOR and inversion are fixed for the values 0-255, it would not be hard to make a 256-entry table, or maybe just a 32-entry table for 32K colors.
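
    For the curious, the XOR/XNOR/invert selection described above looks roughly like this in C (a minimal sketch of the DVI 1.0 TMDS data-encoding rules; the function names are mine, not from any Parallax code):

    #include <stdint.h>

    /* Count the 1 bits in the low 8 bits of v. */
    static int ones8(unsigned v) {
        int n = 0;
        for (int i = 0; i < 8; i++) n += (v >> i) & 1;
        return n;
    }

    /* TMDS 8b/10b data encoding per the DVI 1.0 spec.  'cnt' carries the
       running disparity between calls; start it at 0.  Returns the 10-bit
       symbol (bit 0 transmitted first). */
    unsigned tmds_encode(uint8_t d, int *cnt) {
        /* Stage 1: XOR or XNOR successive bits to minimize transitions. */
        int n1 = ones8(d);
        int use_xnor = (n1 > 4) || (n1 == 4 && (d & 1) == 0);
        unsigned qm = d & 1;
        for (int i = 1; i < 8; i++) {
            unsigned prev = (qm >> (i - 1)) & 1, bit = (d >> i) & 1;
            qm |= ((use_xnor ? ~(prev ^ bit) : (prev ^ bit)) & 1u) << i;
        }
        if (!use_xnor) qm |= 1u << 8;   /* qm[8]=1 flags that XOR was used */

        /* Stage 2: optionally invert the 8 data bits to keep DC balance. */
        int n1q = ones8(qm), n0q = 8 - n1q;
        unsigned q8 = (qm >> 8) & 1, out;
        if (*cnt == 0 || n1q == n0q) {
            out = ((q8 ^ 1u) << 9) | (q8 << 8) | (q8 ? (qm & 0xFF) : (~qm & 0xFF));
            *cnt += q8 ? (n1q - n0q) : (n0q - n1q);
        } else if ((*cnt > 0 && n1q > n0q) || (*cnt < 0 && n0q > n1q)) {
            out = (1u << 9) | (q8 << 8) | (~qm & 0xFF);     /* inverted */
            *cnt += 2 * (int)q8 + (n0q - n1q);
        } else {
            out = (q8 << 8) | (qm & 0xFF);                  /* as-is */
            *cnt += -2 * (int)(q8 ^ 1u) + (n1q - n0q);
        }
        return out;
    }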


    The FIN1215 will not work. I have not seen a 10-bit serializer; TI has one, but it pads start and stop bits, so it sends 12 bits.

    NXP has some $5 HDMI chips, but no full datasheet has ever been released; I'm not sure they plan to keep selling these ICs.
    The cool part is that they have a 12-bit mode, and H+V sync can be included as EOL codes etc., so the total would be 12 pins from the Prop but still 24-bit color.
  • mark Posts: 252
    edited 2014-05-18 20:54
    Tubular wrote: »
    Can anyone explain to me where 7-bit vs 10-bit serialization fits into the various standards? I believe DVI is always 10-bit; is that correct?

    The FIN1215 and at least some of the displays want 7-bit. 7 bits (150~200 MHz) across 3 or 4 lanes (plus a clock lane) seems relatively within reach.

    DVI/HDMI uses TMDS, which transmits 10-bit frames containing an 8-bit payload.

    The FIN1215 looks to be simply a 21-bit LVDS serializer, which has nothing to do with DVI/HDMI. It wouldn't necessarily even work with all LVDS LCDs, as the output stream encoding needs to match the panel's input (unfortunately, it's not always as simple as each lane streaming a certain bit range from its parallel input). There are several common encoding schemes, but no standards, AFAIK. That's why it's not uncommon for the manufacturer of a given display panel to recommend a specific LVDS TX IC.
  • Tubular Posts: 4,702
    edited 2014-05-18 21:31
    OK, thanks for that information, TonyP and Mark.
  • mark Posts: 252
    edited 2014-05-18 22:41
    I wanted to elaborate on my thoughts for eliminating all those multiplexers in the hub, so I drew up the following image (disclaimer: this is actually an edited image - the original can be found here: http://web.sfc.keio.ac.jp/~rdv/keio/sfc/teaching/architecture/computer-architecture-2012/lec08-vm.html)

    As you can see, the diagram only shows a 4-bit-wide data/address path instead of 32, and a 4-bank bus instead of 16, but it should get the point across. Also, I would imagine the circuit design could be used for both data and addressing.

    I don't know if it would actually work, or if it would take up less die space than the current scheme, but in case it would, then perhaps it's worth considering.


    [Attachment: Square_array_of_mosfet_cells_read.png]
  • pik33 Posts: 2,366
    edited 2014-05-18 23:08
    Tubular wrote: »
    No crazier than doing 1600x1200 on the P1

    1920x1200 is possible with the P1 too, so I cannot imagine a P2 or P1+ which cannot display 1920x1200. It would be a regression.
  • Roy Eltham Posts: 3,000
    edited 2014-05-18 23:19
    In order to drive HDMI's minimum pixel clock frequency (25 MHz pixel clock), you need to be able to drive the red/green/blue lines at 250 MHz. That's simply not going to happen with the current P2 design. You are only able to drive the pins from software at 100 MHz at best. Maybe with one of Chip's direct hub-to-pin modes we could get 200 MHz (assuming the chip does actually run at 200 MHz when it's done).

    We are going to have to have external hardware in order to do HDMI. I know a lot of you know this, but I still see some people thinking we can somehow get it by bit-banging the P2 pins; I'm sorry, but it's just not an option at this time.

    There are adapters available on Amazon and Newegg that take a VGA+audio signal and output an HDMI signal. They are fairly cheap too, so I am betting we can find some chips to pair up with the P2 that will give us a decent HDMI output, given the P2's great VGA output.

    Also, HDMI supports up to 48-bit color (30- and 36-bit too) and 340 MHz signalling, giving resolutions up to 4Kx2K. Pretty soon it'll support even more...
  • potatohead Posts: 10,261
    edited 2014-05-19 00:06
    Yeah, double those rates for the full frame rate 4K display, with support for 8K displays...
  • Tubular Posts: 4,702
    edited 2014-05-19 00:43
    Roy, I don't really mind if we do need external hardware. I'm keen to find out the reasons for these limitations, rather than just the absolute yes/no answer.

    For instance, getting fast clocks past the pins/pads on the P2, as jmg highlighted before, is a good reason why something might not be possible. It's generally worth looking at the problem "just beyond" the comfortable zone in order to flush out what the preventive issues are. Personally, I'm far more interested in connecting an LVDS ADC than HDMI, but there are common issues; what may not be quite good enough for HDMI may be good enough for ADCs that require a slightly lower clock rate.

    I think you mean Mbps rather than MHz in your post? There are certainly LVDS LCD displays (raw panels) that require sub-200 Mbps signalling and seem to have a broader clock tolerance.
  • Brian Fairchild Posts: 549
    edited 2014-05-19 02:29
    Has anyone been keeping track? Do we know what P2 features haven't crept back into the P16X64A?
  • Tubular Posts: 4,702
    edited 2014-05-19 02:46
    I think how the pins get connected is the key remaining feature.

    I look forward to hearing about these smart pins and how they fit in.
  • tonyp12 Posts: 1,951
    edited 2014-05-19 08:05
    HDMI bit rate:
    This 80 MHz serialized to a 960 MHz bit rate was made in 2001, so it's probably 180 nm (a node reached in 1999; 130 nm came in 2002):
    http://pdf.datasheetcatalog.com/datasheet/nationalsemiconductor/SCAN921226.pdf

    So there should not be any technical problem reaching these bit rates with a P2 video/smart-pin serializer.
    Maybe the smart pins need a simple PLL to double their clock (or PLL x32, and the system gets half).

    The rules for when to XOR/XNOR/invert:
    http://www.oocities.org/yehcheang/Images/TMDS_Encoding_Algorithm.jpg
    But as we will probably use a 256-color palette, the palette lookup table could contain the 10b x3 data already encoded.
    256 longs are needed to store the 30 bits for RGB, but with running disparity correction it would be better to store the RGB data as individual 10-bit LUTs, as 80% of the TMDS values have two versions to correct the disparity.
    But 20% of 16 million colors is good enough for me, so as to keep only one set of TMDS values.
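
    A sketch of what building such a pre-encoded palette could look like in C, reusing the tmds_encode() sketch from the earlier post (disparity correction deliberately skipped, per the above; all names are mine):

    #include <stdint.h>

    unsigned tmds_encode(uint8_t d, int *cnt);   /* from the earlier sketch */

    static uint32_t palette_tmds[256];  /* 30 bits of pre-encoded RGB per long */

    /* Pack three 10-bit TMDS symbols per palette entry: blue in bits 0-9,
       green in 10-19, red in 20-29.  The disparity counter is reset for
       every symbol, i.e. only one fixed version of each TMDS value is kept. */
    void build_palette(const uint8_t rgb[256][3]) {
        for (int i = 0; i < 256; i++) {
            const uint8_t chan[3] = { rgb[i][2], rgb[i][1], rgb[i][0] }; /* B,G,R */
            uint32_t entry = 0;
            for (int ch = 0; ch < 3; ch++) {
                int cnt = 0;                     /* no running disparity */
                entry |= (uint32_t)tmds_encode(chan[ch], &cnt) << (10 * ch);
            }
            palette_tmds[i] = entry;
        }
    }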

    Here is a Xilinx TMDS encoder:
    http://read.pudn.com/downloads183/sourcecode/embed/857895/xapp460/dvi_demo/rtl/tx/encode.v__.htm
  • koehler Posts: 598
    edited 2014-05-19 10:26
    Has anyone been keeping track? Do we know what P2 features haven't crept back into the P16X64A?

    That's against the process.
  • cgracey Posts: 14,152
    edited 2014-05-19 16:22
    In working out this read-FIFO, I see it would be possible to have a write-FIFO, as well. This would enable data to be streamed in either direction at one long per clock, never stalling execution. Stalls could be avoided by reading or writing, at any one time, no more than the number of longs held in each FIFO.

    Handling sizes other than longs (bytes and words) gums up a lot of things in the hardware, as well as the documentation. It makes the addressing more difficult to understand.

    I've asked before and the consensus was that byte read and write instructions were absolutely needed, but are you sure about that? For text data that is large, four bytes to a long are certainly needed, and without byte operations, some software parsing of longs would be required. String buffers could use a long per character, though, as they are usually quite limited in size. Byte and word data could still be declared in your code, as now, but PASM would only support long reads and writes. Hub addressing would become a uniform 17 bits for 128K longs. What do you think?
  • Rayman Posts: 14,643
    edited 2014-05-19 16:26
    Maybe living without read/write byte/word would be easier if there were a new instruction that extracted the specific byte or word you were interested in, based on its address?

    Actually, writing a byte or word to the FIFO isn't a problem, I think... It's just reading, when you want, say, an arbitrary byte from HUB...
    If it came as a long with other bytes, you'd have to do math on its address and then shift and then mask...
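
    In C terms, that address math, shift, and mask would look something like this (a sketch only, assuming little-endian byte order within each long; the names are hypothetical):

    #include <stdint.h>

    /* Pull one byte out of a long fetched from long-aligned hub RAM. */
    uint8_t read_hub_byte(const uint32_t *hub, uint32_t byte_addr) {
        uint32_t val = hub[byte_addr >> 2];                      /* containing long */
        return (uint8_t)((val >> ((byte_addr & 3) * 8)) & 0xFF); /* shift + mask */
    }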

    Still, there is so much HUB RAM that maybe you could live without using bytes to save space.

    I do like the idea of read and write buffers to prevent stalling though...

    BTW: How would this FIFO thing work if you just want to write or read one long?