
The New 16-Cog, 512KB, 64 analog I/O Propeller Chip


Comments

  • evanh Posts: 16,039
    Was there ever much portability between fabs?

    Also, OnSemi's highest density is 180nm. They ain't going to be retiring that any time soon. It might end up boxing Parallax into one provider, is all. But it'll always be an effort to jump ship, I suspect.
  • jmg Posts: 15,175
    evanh wrote: »
    Was there ever much portability between fabs?

    There is, if you are called intel... ;)


  • Cluso99 Posts: 18,069
    According to wiki OnSemi has a 130nm plant in the USA
  • evanh Posts: 16,039
    edited 2015-08-15 04:33
    Cluso99 wrote: »
    According to wiki OnSemi has a 130nm plant in the USA

    Maybe it's been upgraded. That would be a positive indicator for longevity of that specific fab. I didn't look it up at Wikipedia but I'm pretty sure I had read 180nm spec at the time. It was just a single fab plant in USA.


    EDIT: Prop3 @ 130nm then? :)
  • ErNa Posts: 1,752
    edited 2015-08-15 12:24
    I hope Chip ultimately decided to have both 16 cogs and the extra memory, so my vote for 16 is no longer needed. But to all the others I direct this statement: we all know what more memory is good for, but we have no clue what to do with 16 cogs. And so: on to new frontiers. Let us do things differently. Better. https://en.wikipedia.org/wiki/Where_no_man_has_gone_before
  • Heater. Posts: 21,230
    I'm not sure that catch phrases from fictional space-faring TV stories make the engineering challenges any easier. :)

    May the Force be with you.

  • Heater. Posts: 21,230
    Actually, on reflection, I think we know a lot about what to do with 16 COGs.

    1) In terms of raw processing power, we know that two processors working on a problem will always be slower than one processor of twice the speed, and that the more processors you add, the less benefit is gained from each additional one. See Amdahl's Law (worked example below).

    2) We know that parallelizing algorithms for use on multi-processor systems is hard. Some of the brightest minds have been tackling this problem for decades.

    3) Of course the way COGs are used in the Prop is not all about raw processing power applied to some algorithm but rather the flexibility of implementing interfaces in software rather than having dedicated hardware. This we know a lot about because people here have been doing that with 8 COGs for years already.

    So, meh, 16 cogs is just like 8 cogs but more so.
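
    A quick worked example of point 1 (my illustrative numbers, nothing measured): Amdahl's Law gives the best-case speedup on N processors as

        S(N) = \frac{1}{(1 - p) + p/N}

    where p is the fraction of the work that can be parallelized. Even with p = 0.9, sixteen COGs give at most S(16) = 1/(0.1 + 0.9/16) = 6.4, nowhere near a 16x speedup.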

  • cgracey Posts: 14,208
    Heater. wrote: »
    @ Chip,

    Some questions:

    1) Am I right in saying COG code can run as events/interrupts whilst HUB execution is in progress?

    2) Can the interrupt/event mechanism work with HUB exec code?

    3) Can I use the interrupt/event mechanism as a purely event driven programming model?

    What I mean by the last question is:

    a) We have events: pin changes, time outs, signals from other COGs, etc.

    b) Each of those events will have some handler code attached to them.

    c) When events fire the appropriate event handler is run. That event handler does what it has to do, as fast as possible, and then terminates.

    d) When the event handler is done the COG drops back to "do nothing", HALT, low power mode.

    In the event driven model, there are no priorities or preemption. Nothing ever gets interrupted. It is not an "interrupt" model. There is no "background" loop endlessly running that needs to be interrupted. All processing is done in response to events only.

    Event driven programming removes all kinds of hassle with sharing data between tasks or interrupt handlers that could fire off at any time. Only one thing runs at a time and it runs to completion.

    4) I would like to see such an event driven programming model also work with HUB exec code. Is that possible?

    Affirmative, in all cases.
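
    To make that model concrete, here is a minimal C-style sketch of the event loop described above (a hedged illustration only: the event names, the wait_any_event() primitive, and the handler wiring are all hypothetical, not actual P2 instructions):

        #include <stdint.h>

        /* Hypothetical event sources, per Heater's list above. */
        typedef enum { EV_PIN_CHANGE, EV_TIMEOUT, EV_COG_SIGNAL, EV_COUNT } event_t;

        typedef void (*handler_t)(void);
        static handler_t handlers[EV_COUNT];     /* filled in at startup */

        /* Hypothetical primitive: halt the COG in low-power mode until an
           event fires, then report which one. On real hardware this would
           be a WAITxxx-style instruction, not a C call. */
        extern event_t wait_any_event(void);

        void event_loop(void)
        {
            for (;;) {
                event_t ev = wait_any_event();   /* COG sleeps here      */
                handlers[ev]();                  /* runs to completion,  */
            }                                    /* never preempted      */
        }

    Note there is no shared-state locking anywhere: exactly as described, only one handler runs at a time.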
  • Heater. Posts: 21,230
    Thanks Chip. That sounds great!

  • cgracey Posts: 14,208
    Cluso99 wrote: »
    Chip,
    Can the internal oscillator be connected to the PLL to generate the P2 clock?
    From what I recall, the PLL can basically be any multiple. So what I am thinking is that it may be possible to calibrate the internal oscillator using an external source (USB or Serial) and then adjust the PLL multiple to get a rounded frequency source, without requiring an external xtal. I understand the RC oscillator may not be the most stable/accurate, but perhaps this might make such use possible in some areas.

    PS I hope you haven't forgotten the special instruction(s) for USB. I'd rather wait for the Smart Pins first as that may resolve some of the things required for USB FS.

    The internal oscillator cannot run the PLL. I could make it do so, but the timing would be all over the place. You could calibrate for things like serial, but not video or USB. And using some other signal that is not a clock for syncing would be difficult. At least, it's not designed to try to do that. If you can hook a crystal up or feed a clock signal in, you can use the PLL to multiply it for the internal clock.

    The USB stuff is perfect for the smart pins to handle.
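
    For what it's worth, a hedged sketch of the calibrate-against-serial idea (every name below is made up for illustration; nothing here is a real P2 API):

        #include <stdint.h>

        /* Hypothetical: count RC clocks across one bit time of a known-baud
           reference byte (e.g. an autobaud 0x55 pattern on the serial pin). */
        extern uint32_t count_clocks_per_ref_bit(void);

        /* Estimate the actual RC frequency from the reference baud rate. */
        uint32_t calibrated_rc_hz(uint32_t ref_baud)
        {
            return count_clocks_per_ref_bit() * ref_baud;
        }

        /* Rounded divisor for a desired baud rate at the measured clock. */
        uint32_t baud_divisor(uint32_t rc_hz, uint32_t want_baud)
        {
            return (rc_hz + want_baud / 2) / want_baud;
        }

    That is good enough for serial, as Chip says, but RC drift makes it useless for anything phase-critical like USB or video.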

  • cgracey Posts: 14,208
    edited 2015-08-17 15:53
    Heater - "Yes, but nobody has VGA monitors any more. The whole video thing is some kind of historical artifact."

    I'm concerned that the same fate might be in store for the 180nm technology. Since it was introduced 15 years ago, in 2000, many improvements have been made. By the end of this year we are expected to have 22nm commercial technologies available.

    The analogy is that 180nm is like an old car... if something goes wrong, even something slight, it might be hard to find replacement parts, whereas something designed with 22nm might be more readily supported. While there are still chip foundries capable of 180nm right now, that could change in just a couple of years, and you can bet they will get harder and harder to find.

    Although 22nm is more expensive, it is also 67 times more dense than the 180nm technology ((180/22)^2 ≈ 67), and I certainly bet that the price is NOT 67 times the current 180nm price.

    As far as Smart Pins being done: the pins were completed and tested during the November 2011 test-die tapeout and targeted a specific process. What happened? Well, the target process changed, and it changed several times. Why? Cost? I'm not sure exactly. As a result, these "Smart Pins" needed to be completely redesigned and retargeted for the current (whatever it might be) process.

    Here is an analogy as far as different processes vs. technology

    technology = PIE
    process = "The flavor of your PIE"

    So while the technology has been constant at 180nm, the guy who wants the Cherry PIE keeps getting Lemon Meringue PIE, and the merchant claims that there is no difference between the two.

    -- Good luck

    I heard about a year ago that 180nm processes are still the most commonly taped-out. 350nm is still hugely popular, as well. The reason is cost. For about $150k, you can make a 180nm chip with dedicated masks. A 350nm chip costs about $60k. If you want to make a 28nm chip, you'd better have a few million dollars to spend. I don't think these older technologies are going away for a long time. 350nm was the last low-leakage process, on the way down. Stepping from 350nm to 180nm, you get 80x the leakage. It just goes up exponentially from there. These older processes have intrinsic value. What HAS gone away is the really old and cheap 2um process for which you used to be able to get prototypes from MOSIS for only $500! Right now, 350nm is a good replacement for that, as it is low-leakage and the currently-cheapest to use. For these leakage dynamics to change, some fundamental developments must be made in how semiconductors work.

    The smart pins we are talking about here are not the analog/digital pins that you laid out, but synthesized logic blocks that go in-between the IN/OUT/DIR signals that attach to the chip logic and the I/O pins. Smart pins will handle things like PWM, ADC, DAC, and (hopefully) USB, without the need of a COG babysitting the operation. They are configured by clock-to-clock transitions on the DIR lines. In this architecture, software cannot toggle the DIR lines more often than every other clock. Each cog has a messaging circuit that can serially toggle DIR lines every clock. A smart pin sees this activity and configures itself, while holding its DIR output to the actual pin circuit steady, in the original state, while it receives the message. Smart pins can also output messages via IN signals.
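
    A rough behavioral model of that DIR-line messaging, in C (my own sketch; the real framing, message length, and start-detection logic are not spelled out in the thread):

        #include <stdint.h>
        #include <stdbool.h>

        typedef struct {
            bool     dir_to_pad;    /* DIR actually driven to the pad    */
            bool     last_dir;      /* DIR seen on the previous clock    */
            bool     prev_toggled;  /* did DIR toggle on the last clock? */
            bool     in_message;    /* currently receiving a config word */
            uint32_t shifter;       /* config bits assemble here         */
            int      bits;
        } smartpin_t;

        void smartpin_clock(smartpin_t *sp, bool dir_in)
        {
            bool toggled = (dir_in != sp->last_dir);

            if (sp->in_message) {
                /* While a message is in flight, DIR transitions are data
                   bits; the pad's DIR stays frozen at its prior state.  */
                sp->shifter = (sp->shifter << 1) | (uint32_t)dir_in;
                if (++sp->bits == 32) {          /* assumed 32-bit word  */
                    /* apply_config(sp->shifter);   hypothetical         */
                    sp->in_message = false;
                }
            } else if (toggled && sp->prev_toggled) {
                /* Software can only toggle DIR every other clock, so
                   toggles on back-to-back clocks can only come from the
                   cog's messaging circuit: treat as start-of-message.   */
                sp->in_message = true;
                sp->bits = 0;
                sp->shifter = 0;
            } else {
                sp->dir_to_pad = dir_in;         /* normal pass-through  */
            }
            sp->prev_toggled = toggled;
            sp->last_dir = dir_in;
        }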
  • Hi Chip

    Reading the earlier posts about the total gate count used by the actual design versus the 300k maximum allowed by the synthesis tools' licensing terms: is there any chance that the gate count allocated to the smart pin logic, added to what is already in use, will exceed that limit?

    Henrique
  • Leon Posts: 7,620
    FWIW, XMOS started out at 90 nm. They then moved to 65 nm, and the latest chips probably use 40 nm.
  • cgracey wrote: »

    I heard about a year ago that 180nm processes are still the most commonly taped-out. 350nm is still hugely popular, as well. The reason is cost. For about $150k, you can make a 180nm chip with dedicated masks. A 350nm chip costs about $60k. If you want to make a 28nm chip, you'd better have a few million dollars to spend. I don't think these older technologies are going away for a long time. 350nm was the last low-leakage process, on the way down. Stepping from 350nm to 180nm, you get 80x the leakage. It just goes up exponentially from there. These older processes have intrinsic value. What HAS gone away is the really old and cheap 2um process for which you used to be able to get prototypes from MOSIS for only $500! Right now, 350nm is a good replacement for that, as it is low-leakage and the currently-cheapest to use. For these leakage dynamics to change, some fundamental developments must be made in how semiconductors work.

    Interesting info! I'm somewhat surprised the price increase from 350nm to 180nm is over 100%, yet it's still less than I expected! Is it a similar jump in price to 130nm? This really doesn't seem like an unreasonable amount for a Kickstarter fund-raiser, as long as it's not a costly effort all the way up the chain.

    I hope that one day a company with the necessary technology emerges to create fairly large geometry but cheap die, sort of like how there's all the cheap PCB houses now. Maybe replace the physical masks with digital image projection? I bet there's plenty of DIYers who would love to have their own low density IC produced just for fun. It totally blows my mind that at one point you could have a 2um prototype made for only $500!

  • Chip said: " I don't think these older technologies are going away for a long time."

    A neighbor who works for a power semiconductor company (that everyone has heard of) told me that virtually all of their devices are implemented with 350nm rules. There is not even a discussion about changing that.
  • Heater. Posts: 21,230
    mark,
    I bet there's plenty of DIYers who would love to have their own low density IC produced just for fun.
    Ha! Over on The Amp Hour podcast http://www.theamphour.com/ Chris Gammell and Dave Jones have argued the toss over the possibility of a chip printer for hobbyists for ages.

    Back in university we did one lab exercise where we made our own germanium diode and measured its characteristics. So my feeling is that the idea is not totally crazy. You might get a feature size of 1mm, never mind 1um, but hey, it's a home-made integrated circuit, right?

    That $500 chip thing grabbed my attention too. You mean there was a time when I could have financed my own personal IC prototype? Wish I'd known about that!
  • User Name wrote: »
    Chip said: " I don't think these older technologies are going away for a long time."

    A neighbor who works for a power semiconductor company (that everyone has heard of) told me that virtually all of their devices are implemented with 350nm rules. There is not even a discussion about changing that.

    That makes sense as it seems that a small process would be useless for relatively high current/voltage devices. I just looked at the MOSIS site, and it states that ON's largest process is 700nm.
  • Heater. wrote: »
    mark,
    I bet there's plenty of DIYers who would love to have their own low density IC produced just for fun.
    Ha! Over on The Amp Hour podcast http://www.theamphour.com/ Chris Gammell and Dave Jones have argued the toss over the possibility of a chip printer for hobbyists for ages.

    Back in university we did one lab exercise where we made our own germanium diode and measured its characteristics. So my feeling is that the idea is not totally crazy. You might get a feature size of 1mm, never mind 1um, but hey, it's a home-made integrated circuit, right?

    That $500 chip thing grabbed my attention too. You mean there was a time when I could have financed my own personal IC prototype? Wish I'd known about that!


    I believe it was mentioned on this forum a while back that DARPA(?) was looking to fund low-cost IC fabrication where the masks were replaced by controlled laser projection, or something along those lines. I get the feeling that the technology necessary to make such a thing a reality exists, but it probably wouldn't be a particularly big money maker. So how many people would want to run a high-tech business that isn't making millions of $ a year?

    I know that some of the better funded universities get to have their own ICs prototyped. Although making something as "simple" as a diode is cool too! Heck, Jeri Ellsworth made a rough transistor right in her kitchen!

  • Heater. Posts: 21,230
    The diode lab was very crude. If you happen to have some pre-made N and P type germanium, a vacuum chamber, and a means of heating the thing to fuse the germanium together, it's almost impossible not to make a diode.

    Diodes occur in nature. We used to have "crystal set" radios that involved poking bits of wire at crystals of whatever it was until you hit a point that formed a diode rectifier. It's even possible to get diode action from the zinc plating on metal buckets and such like. Heck, light emitting diodes have been found in some naturally occurring crystals.

    Transistors are a bit more tricky. Jeri did an amazing job there.

    I liken the whole idea to the hobbyist 3D printer phenomenon. Not exactly high tech, and the resulting product is usually a lot more crappy than a factory-made item. But hey, we can do it ourselves and customize as we like.
  • jmg Posts: 15,175
    cgracey wrote: »
    The internal oscillator cannot run the PLL. I could make it do so, but the timing would be all over the place. You could calibrate for things like serial, but not video or USB. And using some other signal that is not a clock for syncing would be difficult. At least, it's not designed to try to do that. If you can hook a crystal up or feed a clock signal in, you can use the PLL to multiply it for the internal clock.
    Common small-size xtal values (example: XRCGB27M000F2P00R0) look to be 26MHz and 27MHz - used in many clock synthesizers.

    i.e. the Xtal osc should be able to support the 26~27MHz range of crystals.

    Oscillator Modules can come in almost any frequency, with ever improving ppm/$,
    eg
    http://www.digikey.com/product-detail/en/TG-5035CJ-12N 26.0000M3/TG-5035CJ-12N 26.0000M3-ND/5261216
    (but that model is a clipped sine, which may need the XtalOsc spec'd for such AC-coupled drive?)

    - and there are some small MEMS RTC modules that can output a temperature-compensated 32.768kHz.
    A PLL signal of 32.768kHz may be pushing things?
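
    As a sanity check on those crystal choices, a small C sketch that searches for an integer multiply/divide pair, assuming a PLL of the form f_out = f_xtal * M / D (an assumption; the real P2 PLL fields may differ):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double xtal = 26e6, target = 160e6; /* 160MHz per Chip */
            double best_err = 1e18;
            int best_m = 0, best_d = 0;

            for (int d = 1; d <= 64; d++)
                for (int m = 1; m <= 1024; m++) {
                    double err = fabs(xtal * m / d - target);
                    if (err < best_err) {
                        best_err = err; best_m = m; best_d = d;
                    }
                }

            printf("26MHz * %d / %d = %.4fMHz (err %.0f Hz)\n",
                   best_m, best_d, xtal * best_m / best_d / 1e6, best_err);
            return 0;
        }

    For a 26MHz crystal it finds 26MHz * 80 / 13 = 160MHz exactly; for 27MHz, 160/27 would likewise be exact.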
  • cgracey Posts: 14,208
    I met with a friend yesterday who works for a huge American company that makes 40M+ gate chips in the newest technologies. They are now designing in processes under 20nm! Their dies are 1"x1", and burn hundreds of watts.

    He told me a lot of very interesting things. One thing he's noted is that these chip designers are typically in their 50's today and are burned out, lacking passion for their work. Chinese and Indian nationals completely dominate the workforce. In one recent hiring effort, only foreigners on work visas even applied. He said that everybody has difficult accents, as English is nobody's native language. I think I know why this is. Chip design has become so complex, expensive, and risk-management-oriented that only deep pockets are employing teams of workers, and they ask them not so much to invent things, but to realize specifications driven by existing/evolving product goals and incremental possibilities afforded by process shrinkage. These workers might as well be miners working underground for years at a time. They do it not for the glory or excitement, but for the paycheck to support their families. All the Westerners, according to my friend, are in software now, as it's more appealing, given the shorter cycles and relative openness. Chip design is a very closed-model business, and will probably continue to be, given the ever-increasing money required for projects. The way my friend was describing things, I almost get the picture that the IC-design workforce will eventually peter out, amid extreme complexities of future process technologies, as work becomes even more demanding and they tire of it all. He said many engineers are averse to even using newer methodologies, which would save them tons of time, yet they still get these monster chips done.

    If some revolution were to occur in IC fabrication that made it all cheaper and more accessible, many new people would get involved and tools would come down in price and knowledge would spread. The way things are headed now, though, the opposite looks to be happening.
  • cgracey Posts: 14,208
    jmg wrote: »
    ...and there are some small MEMS RTC modules that can output a temperature-compensated 32.768kHz.
    A PLL signal of 32.768kHz may be pushing things?

    Winding 32KHz up to 160MHz seems like a very jittery proposition.

    You might go mad trying to catch a signal on a scope that was triggered ~30us earlier.
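
    The arithmetic behind that (my numbers):

        \frac{160\,\mathrm{MHz}}{32.768\,\mathrm{kHz}} \approx 4883, \qquad \frac{1}{32.768\,\mathrm{kHz}} \approx 30.5\,\mu\mathrm{s}

    So the PLL would have to multiply by roughly 4883, and it only gets a phase correction once per reference period, which is where the "triggered ~30us earlier" scope problem comes from.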
  • rod1963 Posts: 752
    edited 2015-08-15 21:30
    Speaking of fab processes, I see that Freescale is still making chips with 180 and 130nm processes. And these are their automotive PPC monsters. BTW they also run at 5V.

    To me at least it says 180 and 130 nm processes aren't going away anytime soon.

  • jmg Posts: 15,175
    edited 2015-08-15 22:12
    cgracey wrote: »
    jmg wrote: »
    ...and there are some small MEMS RTC modules that can output a temperature-compensated 32.768kHz.
    A PLL signal of 32.768kHz may be pushing things?

    Winding 32KHz up to 160MHz seems like a very jittery proposition.

    You might go mad trying to catch a signal on a scope that was triggered ~30us earlier.
    Well, yes, the 'phase lock' part of PLL is somewhat loose in these cases.
    Mostly I've seen it done using a trim-osc and a DPLL which seeks to average the right number of cycles over time, so it chatters away between the two nearest trim values.

    The USB chips that do this manage to lock 1ms sample rates to a 48MHz osc.
    I'll check their ppm values...

    Results: 3 brands of USB DPLL (i.e. UARTs) all come in just under 200ppm fast, which I recall is the USB frame precision of this PC - i.e. they have locked as well as the PC's precision allows.
    The actual trim steps will be in the 0.1~0.2% region, or better.
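
    A minimal sketch of the trim-dithering DPLL jmg describes (my own model; the trim codes and the 1ms reference interval are assumptions):

        #include <stdint.h>

        typedef struct {
            int32_t phase_acc;   /* accumulated cycles behind reference */
            uint8_t trim_lo;     /* trim code just below target freq    */
            uint8_t trim_hi;     /* trim code just above target freq    */
        } dpll_t;

        /* Call once per reference interval (e.g. each 1ms USB frame).
           'measured' = RC cycles counted this interval; 'expected' = the
           ideal count. Returns the trim code to use next interval.     */
        uint8_t dpll_update(dpll_t *d, uint32_t measured, uint32_t expected)
        {
            d->phase_acc += (int32_t)(expected - measured);
            /* Behind the reference: run fast next time; ahead: run slow.
               The output chatters between the two codes, so the average
               frequency converges on the reference.                    */
            return (d->phase_acc > 0) ? d->trim_hi : d->trim_lo;
        }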
  • Heater. Posts: 21,230
    Chip,

    Interesting insight on the chip industry.

    It does not surprise me that most of their workers are from Asia. Way back in 1976 when I was at university I was amazed to find myself in math, EE, computer science lectures together with hundreds of students from China and wherever. At that time only 10 percent of the British population ever got to university. We nationals were the minority in the classes. The universities were making money from all these visiting students.

    Wind the clock forward some decades and we have a huge population outside of Europe and the USA who are very well educated, as smart as anyone, and willing to work for a lot less. Supply and demand and all that.

    I don't worry too much. There are still enthusiastic and imaginative logic designers around. You and the Propeller, Andreas Olofsson and his Epiphany chip. The XMOS crew and the xCOREs.

    What I want to know is, where are the guys driving the actual progress here? The semiconductor physics and process guys.

  • jmg Posts: 15,175
    Heater. wrote: »
    What I want to know is, where are the guys driving the actual progress here? The semiconductor physics and process guys.
    Yup, the real secret sauce is in the New Process and Process control.
    Chip design is only really software on top of that, these days...
    (as the P2 flow changes illustrate)
  • cgracey Posts: 14,208
    edited 2015-08-15 22:23
    Heater. wrote: »
    ...Way back in 1976 when I was at university I was amazed to find myself in math, EE, computer science lectures together with hundreds of students from China and wherever. At that time only 10 percent of the British population ever got to university. We nationals were the minority in the classes. The universities were making money from all these visiting students.

    My brother took his oldest son to visit Northwestern University a few weeks ago and they had an interesting experience. Northwestern was putting on a dog and pony show to lure potential engineering students. There were all these young men in the audience. At the front of the room there were several female students putting on the presentation who just talked about how Northwestern empowered them to become future engineering managers, and how great it was, yada yada yada. It was all about diversity and not much about engineering. One of the males in the audience (almost everyone was male) raised his hand and asked why he ought to go to school there. The answer, from what I remember hearing from Ken, was more PC blather. It seems to me that political correctness has actually broken the requisite feedback loop needed to get more paying (borrowing) students into their engineering program.

    About IC process engineers, I wonder, too.
  • potatohead Posts: 10,261
    edited 2015-08-15 22:26
    I know a process guy at Intel. It is brutal work. Very intense as a new process is being brought up. Many burn out, have a ton of education that must continue while they are working, and failure isn't really an option.

    Well it is, but it is career ending.

    Many of them work very hard for 3 or so years, then take a long sabbatical to recharge and do it again. If a person can hack it, they can retire at 40 something. Or burn out.

    I get the impression there are not too many top process engineers. They are hard to come by and the better ones are known and in constant demand.

    Under them, there are big teams of domain specialists, all fairly narrow focused.

    One woman I met told me her career is designing registers. Just registers that are optimized for various processes and in tandem with the development of new ones.

    She also worked for Intel after about 8 years of college. It is comparable to medicine in terms of the investment and career path.

    Process engineers need very strong domain expertise in physics, engineering, and science, and must be able to manage tight, highly technical teams consistently over long, intense periods.

    It is a closed club as a lot of the research and relevant new science is kept behind closed doors.
  • cgracey wrote: »
    I met with a friend yesterday who works for a huge American company that makes 40M+ gate chips in the newest technologies. They are now designing in processes under 20nm! Their dies are 1"x1", and burn hundreds of watts.

    He told me a lot of very interesting things. One thing he's noted is that these chip designers are typically in their 50's today and are burned out, lacking passion for their work. Chinese and Indian nationals completely dominate the workforce. In one recent hiring effort, only foreigners on work visas even applied. He said that everybody has difficult accents, as English is nobody's native language. I think I know why this is. Chip design has become so complex, expensive, and risk-management-oriented that only deep pockets are employing teams of workers, and they ask them not so much to invent things, but to realize specifications driven by existing/evolving product goals and incremental possibilities afforded by process shrinkage. These workers might as well be miners working underground for years at a time. They do it not for the glory or excitement, but for the paycheck to support their families. All the Westerners, according to my friend, are in software now, as it's more appealing, given the shorter cycles and relative openness. Chip design is a very closed-model business, and will probably continue to be, given the ever-increasing money required for projects. The way my friend was describing things, I almost get the picture that the IC-design workforce will eventually peter out, amid extreme complexities of future process technologies, as work becomes even more demanding and they fatigue of it all. He said many engineers are averse to even using newer methodologies, which would save them tons of time, yet they still get these monster chips done.

    If some revolution were to occur in IC fabrication that made it all cheaper and more accessible, many new people would get involved and tools would come down in price and knowledge would spread. The way things are headed now, though, the opposite looks to be happening.

    I'm guessing that "40M+" is actually 40B+, and given the die size and power consumption, you must be talking about GPUs. Quite amazing pieces of technology.

    While I have no problems with bright and capable foreigners taking these jobs, it is somewhat unfortunate that nationals show little or no interest in them. I suppose I'm not surprised about people preferring software development, as there's this impression that it's a lot more glamorous to work for the Googles rather than the Intels, to say nothing of the notion of software "startup culture" in which a few guys and their laptops can turn a small company that required very little capital into something that ends up being worth billions in a few years. Of course that last bit is rarely true, but when was the last time an IC developer did that? The insane costs of bringing cutting-edge ICs to market, tough competition, and the reliance on software tools to make them even useful mean they inherently have to be conservative in their design. I'm not surprised the old timers have gotten burned out. Not just because they've been doing it for so long, but it seems that modern CPU design has gotten boring. SoCs have managed to do the same in a very short amount of time. The latest phone and tablet has an 8-core SoC and 4GB of RAM? Neat, but meh. I guess that's good for lazy programmers. Of course, that's just my opinion. Microcontrollers, OTOH, never cease to amaze me. Their performance, capabilities and price are often incredible. They enable a lot to be done with a little.

    As for IC fabrication, there seems to be little effort in directly reducing the costs of existing cell sizes, with all the focus being on enabling smaller geometries, and prices eventually falling as the processes grow long in the tooth and the fabs have long amortized the cost of their equipment. There are enough incumbents to support the industry as-is, so there's no need to make it cheap enough for the nobodies of the world to waltz into a fab and make their own chips. Then again, there's probably no better way of making equipment that's capable of producing ICs in a process equivalent to 20-year-old tech. But what about 30-year-old tech? Is there value in making, say, a 1um process available to the average nerdy Joe? Could the availability of modern technology make the necessary equipment "cheap" to manufacture? It's great that the hobbyist can now get a PCB produced and assembled on the cheap, with access to plenty of free software development tools, but unfortunately producing ICs is still out of their reach despite the tech having existed for 40+ years. It really is one of the final frontiers for the electronics hobbyist. Hopefully that changes soon.
  • Cluso99 Posts: 18,069
    Chip,
    Very interesting info, thanks.

    I expect 180nm & 350nm are used for the small standard chip parts, which is why there is no need to shrink those processes.

    This brings up an interesting option. I wonder what could be achieved in 350nm....
    a) A P1V ?
    b) A P2 without ADC and Hub RAM - use an external RAM ?

    I wonder if the second option (8 cogs) could be used to build a new P2 variant to prove the design while still giving us a usable mini P2.