Full-chip integration at On Semi


Comments

  • All this talk of LVDS is interesting.

    I guess at the end of the day raw signalling speed is only part of the problem.

    It might be useful to be able to use that capability to speak to other things than just between Propellers.

    At which point some kind of standard protocol adoption is called for.

    Like TileLink: https://www.sifive.com/documentation/tilelink/tilelink-spec/

    OK, that seems to be for communication between parts on a SoC. But as far as I can tell it is also used for chip-to-chip links.
  • jmg Posts: 14,244
    Heater. wrote: »
    All this talk of LVDS is interesting.

    I guess at the end of the day raw signalling speed is only part of the problem.

    It might be useful to be able to use that capability to speak to other things than just between Propellers.

    Obvious candidates are HS-USB, and HDMI type Displays, plus there are some high performance ADCs with LVDS...
    Then there is M-LVDS, the 'RS-485'-like multipoint variant:
    https://en.wikipedia.org/wiki/Low-voltage_differential_signaling#Multipoint_LVDS

  • LVDS transmitter and receiver as smart pin mode?

    hmm,

    Mike
  • Heater. wrote: »
    Chip,
    Well, I thought I was already achieving that with the Google Docs.
    Indeed you were (are)

    But, you know, Google docs.

    Better to have control of your own stuff.

    Good. Wonder how I missed that? Doc looks good, Heater. That's the bare minimum needed.


  • Yanomani wrote: »
    cgracey wrote: »
    I would be quite happy to just start designing an LVDS transceiver today, but there's a ton of other stuff to do, first.

    Hi Chip

    You don't need to reinvent the wheel!

    OnSemi's ONC-18 process already has LVDS in its standard cell library.

    They had developed an image sensor at their facilities in Belgium, for space applications, under 180nm design rules. LVDS signaling!

    Many of the documents listed as having references to both OnSemi and LVDS are also related to Fairchild, acquired by OnSemi in 2016.

    https://fairchildsemi.com/application-notes/AN/AN-5017.pdf

    imagesensors.org/Past%20Workshops/2017%20Workshop/2017%20Papers/P12_innocent_2.pdf

    onsemi.com/PowerSolutions/content.do?id=16626

    I've yet to find any reference for comms speed!

    I think I saw 450MHz somewhere. Those are interesting links.
  • As for the sensor, sure 450 Mbps could be the maximum designed rate, since its datasheet shows it outputs data at 4 x LVDS @ 384 Mbps.

    I'll keep digging a bit more.
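    For scale, the link budget quoted for that sensor works out as follows (a quick back-of-the-envelope check; the lane count and per-lane rate are taken from the post above):

    ```python
    lanes = 4            # 4 x LVDS outputs, per the sensor datasheet quoted above
    lane_mbps = 384      # per-lane rate in Mbps
    aggregate_mbps = lanes * lane_mbps
    print(aggregate_mbps)  # 1536 Mbps, i.e. ~1.5 Gbps aggregate off the sensor
    ```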
  • msrobots wrote: »
    LVDS transmitter and receiver as smart pin mode?

    hmm,

    Mike

    I think there's a mode for this, where the '1' level is generated by a fast-settling DAC (at an arbitrarily set voltage), and the paired pin can be set to invert that same data input. Kind of pseudo-LVDS.
  • Heater. wrote: »
    This kind of documentation is hard. As we probably all know. Somebody has to extract the definitive meaning of everything from the creator, Chip, and translate and present it as something normal humans will understand and can make use of.

    In the past, I've often found it much easier to write documentation if someone else writes something for me first, even if it's completely wrong. To that end, would it make sense to start with the Propeller 1 Manual, replace P8x32A with P8x64A (or whatever) then start fixing everything that's wrong?
  • Bob Lawrence (VE1RLL) Posts: 1,577
    edited 2018-03-13 - 00:36:26
    Lots of LVDS videos:
    https://www.youtube.com/results?search_query=LVDS

    Datasheets:
    http://www.futureelectronics.com/en/Search.aspx?dsNav=Ntk:PlainTextSearch|LVDS|3|,Ny:True,Nea:True,N:898

    WIKI:
    https://en.wikipedia.org/wiki/Low-voltage_differential_signaling

    Raspberry Pi LVDS Example:

    How to Connect Raspberry Pi to a Laptop LCD Panel via LVDS/LCD Controller Board




  • cgracey wrote: »
    Question:

    Would 16 cogs and 1 MB of Hub be worth a speed reduction from 160 MHz to 120 MHz?

    YES. Think active/passive cooling AND margin (i.e. an 80 MHz Prop 1 runs at 100 MHz just fine). If the same margin is there, surely ~150 MHz is possible with a 120 MHz version.

    I say Yes. Yes. Yes. You will blow up the paradigm.
  • Never mind. I see we already dropped back to 8. Boo.
  • cgracey Posts: 12,440
    edited 2018-03-15 - 03:38:22
    I ran it by OnSemi and they said it would reset the whole contract back to the quoting phase. That was a "banker's NO".

    On the other hand, we have a minimum Fmax requirement of 160 MHz, with a stretch goal of 200 MHz. We may be able to pursue the 200 MHz if size permits. That's a 10ns instruction cycle, for anyone in Rio Linda.
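    The 10ns figure follows from the P2's two-clocks-per-instruction timing implied above; a trivial sketch:

    ```python
    def instruction_ns(fmax_mhz, clocks_per_instruction=2):
        """Instruction time in ns for a given system clock.
        Assumes the P2's two-clock instruction timing implied by the post above."""
        return clocks_per_instruction * 1000.0 / fmax_mhz

    print(instruction_ns(200))  # 10.0 ns at the 200 MHz stretch goal
    print(instruction_ns(160))  # 12.5 ns at the 160 MHz minimum
    ```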
  • I wouldn't be much help initially, but if you need anything edited for proper English, I will volunteer to be a "grammar hound" at the end. :cool:
  • DrPop wrote: »
    I wouldn't be much help initially, but if you need anything edited for proper English, I will volunteer to be a "grammar hound" at the end. :cool:

    Okay. Thanks!
  • I'm not sure how much help I'll be, but I do have some experience writing technical documentation, tutorials, etc. and would love to help out with the Prop 2 docs.

    I also vote for using GitHub. Being able to review changes in a pull request alone is worth it in my opinion, especially if we're hoping to get lots of help from the community.
  • cgracey Posts: 12,440
    edited 2018-03-24 - 10:56:39
    Here is how the new VCO mux got implemented. It was my first time actually building circuits, though I've moved lots of metal around. This was fun.

    Here is the old PLL_MATH circuit:

    PLL_MATH_old.png

    The problem here is that the VCO runs too fast for the VCO dividers to track. It was going 630MHz on the test chip, while the counters lost track at 420MHz. I slowed down the VCO inverters by lengthening their gates from 300nm to 500nm. The VCO tops out at 285MHz, worst case, now, which is always trackable by the VCO dividers.

    We can no longer rely on using the VCO post-divider output to toggle a flop at 400MHz, since that's near the limit of the digital divider's ability and the VCO no longer goes that fast (thankfully). So we need a mode where we can use the VCO output directly for 100MHz..200MHz frequencies.

    Here's the new PLL math circuit, which allows direct use of the VCO signal when P[3:0] = %1111:

    PLL_MATH_new.png
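    In behavioral terms, the new mux selection can be sketched like this (a hedged model: the %1111 bypass is from the post above, but the 2*(P+1) divider ratio is my assumption about the post-divider, not something stated here):

    ```python
    def pll_output_mhz(vco_mhz, p):
        """Model of the VCO post-divider mux: P[3:0] = %1111 passes the raw VCO
        signal through directly; any other value takes the divided-down output
        (assumed here to divide by 2*(P+1))."""
        p &= 0b1111
        if p == 0b1111:
            return vco_mhz                   # direct VCO path (the new mux setting)
        return vco_mhz / (2 * (p + 1))      # divider path (assumed ratio)

    print(pll_output_mhz(200, 0b1111))  # 200 MHz, straight from the VCO
    print(pll_output_mhz(200, 0b0000))  # 100 MHz via the divider
    ```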

    I made the change, myself. First, I got all the standard cells that I needed:

    PLL_MATH_MUX_parts.png

    Then I wired them together:

    PLL_MATH_MUX.png

    ...And wired the new mux circuit into the existing circuit, replacing the old flop and two inverters:

    VCO_mux.png

    It only took 1/2 hour on the phone, working with Nathan at OnSemi, to have him stream the new GDS in from the cell I had modified, and then make the top-level edits to wire in the new PLL output. It passed DRC and LVS without any more edits. Done!!!
  • WTG Chip. Thanks for the insight to how it's done.
  • jmg Posts: 14,244
    edited 2018-03-24 - 19:28:04
    cgracey wrote: »
    The problem here is that the VCO runs too fast for the VCO dividers to track. It was going 630MHz on the test chip, while the counters lost track at 420MHz. I slowed down the VCO inverters by lengthening their gates from 300nm to 500nm. The VCO tops out at 285MHz, worst case, now, which is always trackable by the VCO dividers.

    It only took 1/2 hour on the phone, working with Nathan at OnSemi, to have him stream the new GDS in from the cell I had modified, and then make the top-level edits to wire in the new PLL output. It passed DRC and LVS without any more edits. Done!!!
    Great :) Lucky there was that tram-line space! Have you had someone else double-check all the connections/netlist?

    How closely do the measurements line up with the SPICE predictions?
    What is the process/temperature minimum MHz for the VCO max value?

  • jmg wrote: »
    cgracey wrote: »
    The problem here is that the VCO runs too fast for the VCO dividers to track. It was going 630MHz on the test chip, while the counters lost track at 420MHz. I slowed down the VCO inverters by lengthening their gates from 300nm to 500nm. The VCO tops out at 285MHz, worst case, now, which is always trackable by the VCO dividers.

    It only took 1/2 hour on the phone, working with Nathan at OnSemi, to have him stream the new GDS in from the cell I had modified, and then make the top-level edits to wire in the new PLL output. It passed DRC and LVS without any more edits. Done!!!
    Great :) Lucky there was that tram-line space! Have you had someone else double-check all the connections/netlist?

    How closely do the measurements line up with the SPICE predictions?
    What is the process/temperature minimum MHz for the VCO max value?

    The SPICE predictions showed that the VCO should run up to ~1GHz, but I only saw it going 630MHz. It also showed the dividers working up to ~600MHz, but they were outpaced at ~420MHz. Not sure why such great discrepancies.

    The new VCO tops out at 285MHz, worst-case (90% voltages, 125C, slow process corner). It will never outpace the dividers now. If it suffers a big derating, it should still cover 160MHz. I'm going to do some more tests on that now, because I'm worried it may wind up being too slow, in reality.

    We know that the layout matches my schematic because it passes LVS (layout-vs-schematic). And I simulated it to make sure it was okay, even though the change was very simple.
  • cgracey Posts: 12,440
    edited 2018-03-25 - 08:13:58
    Tonight I'm going over the I/O pad, making sure it's doing what it's supposed to do and that it also works properly with the smart pin modes.

    I was testing out the ADCs and DACs by running a signal into one I/O pad configured as a 33mV full-scale (100x mag) ADC, then taking the ADC sample and running it through another I/O configured as 16-bit dithered DAC.

    In 33mV full-scale ADC mode, the signal to be quantized needs to be capacitively coupled into the I/O pad, as the pad must be allowed to set the average level to somewhere around VIO/2. Then, it quantizes changes spanning up to 33mV, resulting in ADC values from near 0% to near 100%.

    You can see in the picture that the function generator is coming in on channel 1 of the scope, outputting a 30mV peak-to-peak sine wave at 50 kHz. It's getting converted to a 5-bit ADC sample at an 80/32 = 2.5MHz rate and then output from a DAC onto channel 2. I noticed there was a 10mV ~500 kHz signal superimposed on the conversion. It turns out this is VIO power-supply noise, causing the ADC reference to move around against the incoming signal. It's being faithfully quantized by the delta-sigma ADC in the I/O pin, which is what ought to be expected. The remedy is to filter the VIO supply that goes to I/O pins performing accurate A/D conversions.

    Anyway, the ADC resolving power is quite good on small AC signals.


    ADC_30mV.jpg


    Here's the code:
    con		n = 5				'number of conversion bits
    
    dat		org
    
    		hubset	#$FF			'run at 80MHz
    
    		wrpin	adc_mode,#4		'adc mode for P4
    		wxpin	##1<<n,#4		'n-bit conversion
    		wypin	#0,#4			'count a-highs
    		dirh	#4			'adc enable
    
    		wrpin	dac_mode,#5		'dac mode for P5
    		wxpin	##1<<n,#5		'n-bit conversion
    		dirh	#5			'dac enable
    
    		setse1	#%110<<6+4		'se1 triggers on adc sample
    
    loop		waitse1				'wait adc sample
    		rdpin	x,#4			'get adc sample
    		shl	x,#16-n			'shift up and set dac
    		wypin	x,#5
    		jmp	#loop			'loop
    
    
    adc_mode	long	%0000_0000_000_100111_0000000_00_01111_0
    dac_mode	long	%0000_0000_000_10100_00000000_01_00011_0
    
    x		res	1
    
  • Chip,
    I'm interested in running that. Does that work on the A123 board? I'm assuming those pin numbers are somehow mapped to the ADC and DAC connectors.
  • Cluso99 Posts: 15,762
    edited 2018-03-25 - 12:26:05
    Chip,
    I don't like to be the one to say this, but you have only just received the test chip back from OnSemi, so there has not been much time for you to test this, amongst doing other things with the P2 Verilog and OnSemi.

    Add to this the fact that there has hardly been ANY decent testing of the Verilog/FPGA for some considerable time, and there have been numerous changes recently which have uncovered a number of bugs, found by reviewing the code, not by real testing.

    Are you really in a position to get the Production Test Chip underway now, with minimal risk ???

    Personally, I have no confidence that adequate testing has been done at this point in time. But it's not my money.

    However, the timescales you have given indicate that if you go ahead now and an error is discovered, a respin will cost ~26 weeks. What would be the delay if you wait for the next shuttle run to allow more testing time?

    I know we all wanted the P2 years ago, me included. But I also don't want the sample chip to be where we find the bugs.
  • jmg Posts: 14,244
    evanh wrote: »
    Chip,
    I'm interested in running that. Does that work on the A123 board? I'm assuming those pin numbers are someway mapped to the ADC and DAC connectors.
    To test this, you would need the test-silicon chip.
    Not sure how many of those Parallax got?
    It certainly seems a good idea to expand the testing.
    "The SPICE predictions showed that the VCO should run up to ~1GHz, but I only saw it going 630MHz. It also showed the dividers working up to ~600MHz, but they were outpaced at ~420MHz. Not sure why such great discrepancies." .... This makes perfect sense to me. There are parasitic considerations in layout that need to be addressed that the SPICE model cannot completely simulate.

    "We know that the layout matches my schematic because it passes LVS (layout-vs-schematic). And I simulated it to make sure it was okay, even though the change was very simple." .... Just because LVS passes does not mean you are out of the woods. There are many other things to consider that must be empirically tested. This means physical test structures that represent your process and the layout characteristics of nearby structures. Looking at the layout for the "new PLL math circuit" you posted earlier scares me quite a bit. Not because I might be a little biased, of course I am, but from a professional level of quality layout work. When I worked at National Semiconductor, I mentored the students from Georgia Tech in a classroom/lab environment on what to do vs. what not to do in terms of layout, and I see several issues with the PLL math circuit layout you posted. I can't stress enough the need for empirical testing. While at NSC, we were constantly updating and tweaking the SPICE models used for TSMC's process. This was an iterative process: through empirical testing, our data from NSC was actually defining the SPICE models other companies would use. One of my specialties at NSC was to create a standard cell library, characterize each cell through timing and parasitic analysis, and update the SPICE model to reflect the physical layout results. That's just how it is done. You cannot solely rely on what the SPICE model tells you is supposed to happen. There are just too many other variables at play when it comes to timing and getting it just right.
  • Beau, everything but the PLL has worked exactly as simulated. I need to delve more into why the PLL simulates faster than it runs.

    So, what is wrong with the layout, anyway?
  • jmg Posts: 14,244
    cgracey wrote: »
    I need to delve more into why the PLL simulates faster than it runs.

    Does the simulation include layout delays & supply-inductance effects, or is it just a circuit simulation?

    "So, what is wrong with the layout, anyway?" ... The first thing I see is how that cell block is powered: TOP LEFT, and then BOTTOM RIGHT. You would be much better off if you mirrored the X axis of just that block and strapped power in a more direct approach to the adjacent cells. If the power is done like that there, then most likely it is done that way in several other places. There are some other common-sense errors that just come with experience. That layout looks typical of the Georgia Tech students' attempts at layout. It might get the job done, but it's not as organized as it should be for that kind of structure and the frequencies you are expecting. I didn't see any guard rings except maybe in the lower-right structure partially off screen, and one contact width for a guard ring is usually not adequate.
  • cgracey Posts: 12,440
    edited 2018-03-25 - 20:52:37
    I see. Thanks for the elaboration. I'm pretty much the best layout guy here on the farm, and maybe even in Red Bluff.

    My only concern is understanding why the VCO was so off. I can understand that with long wire runs the digital counters could slow, but I don't know how something as compact as the VCO could be so far off. I wish there were a way to know what the regulated voltage was for the VCO supply. I high-Z filter the 1.8V power, then replicate it from the 3.3V supply. If that voltage wound up being really low, that would explain the speed difference.
  • jmg Posts: 14,244
    cgracey wrote: »
    My only concern is that I understand why the VCO was so off. I can understand that with long wire runs, the digital counters could slow, but I don't know how something as compact as the VCO could be so far off.

    A simple litmus test is to add some fF of parasitic C to every inverter, and adjust that to drop the SPICE 1GHz down to the measured 630MHz.
    You only need to add ~42ps of delay to each inverter.
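    jmg's ~42ps figure can be reproduced from the ring-oscillator relation f = 1/(2*N*t_stage) if you assume a 7-stage ring (the stage count is my assumption; the thread doesn't state it):

    ```python
    f_sim_hz  = 1.0e9    # SPICE-predicted VCO maximum, per the posts above
    f_meas_hz = 630e6    # measured on the test chip
    n_stages  = 7        # assumed ring-oscillator length (not stated in the thread)

    # Ring oscillator: f = 1 / (2 * N * t_stage), so the unmodeled extra
    # delay per stage that explains the discrepancy is:
    extra_ps = (1 / f_meas_hz - 1 / f_sim_hz) / (2 * n_stages) * 1e12
    print(round(extra_ps, 1))  # ~42 ps of added delay per inverter
    ```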

    "I see. Thanks for the elaboration. I'm pretty much the best layout guy here on the farm, and maybe even in Red Bluff." ... As I said, I might be a little biased, only for a good reason though. I didn't realize you were doing the layout. Hindsight is 20/20.

    What is the process limitation for frequency? While at NSC, the TSMC rule of thumb for 180nm was about 350MHz, mostly due to substrate leakage. That rule of thumb is usually derated, so double that would not be out of the question. However, with appropriate use of NWELL isolation you could greatly reduce the substrate leakage, and higher frequencies would be achievable.