Just in time for the Prop3 :D
evanh
https://www.anandtech.com/show/14254/globalfoundries-to-sell-300mm-new-york-fab-to-on-semiconductor
GlobalFoundries and ON Semiconductor on Monday signed a definitive agreement for the latter to buy GlobalFoundries’ 300-mm fab in East Fishkill, New York.
GlobalFoundries first received Fab 10 as part of its acquisition of IBM's microelectronics assets in 2015. The fab is used to process 300-mm wafers using various technologies, including the 45nm and 65nm nodes (as well as their 40nm and 55nm versions).
P3 here we come
IMHO, 90nm would be the next logical step (see the rough scaling sketch below). We could fit 16 cogs and smart pins in a smaller die, or that plus larger memory in an equivalently sized die. In a few years, 90nm could be as affordable as 180nm is now; it is no longer considered leading edge.
Kind regards, Samuel Lourenço
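A rough back-of-the-envelope check of that claim, assuming ideal area scaling with feature size (in practice the I/O pads and analog blocks shrink far less):

$$\frac{A_{90\,\mathrm{nm}}}{A_{180\,\mathrm{nm}}} \approx \left(\frac{90}{180}\right)^2 = 0.25$$

So even doubling the digital logic (16 cogs instead of 8) would, in the ideal case, take only about half the area it does at 180nm, leaving room for more hub RAM or a smaller die.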
That would be more of a P2+ than a P3; even so, it would be a huge step forward.
If OnSemi can do that, then you don't need the cost of processes below 90nm.
This is currently being done by the SkyWater foundry.
I posted an article about this before.
Discussion link:
http://forums.parallax.com/discussion/comment/1446896#Comment_1446896
Article link:
https://spectrum.ieee.org/nanoclast/semiconductors/processors/the-foundry-at-the-heart-of-darpas-plan-to-let-old-fabs-beat-new-ones
That's why 'RAM-based' MCUs are becoming more common, and why Analog Devices dropped their Flash DSPs some years back.
A P2 executing from Flash would be much slower (and draw more power).
Kind regards, Samuel Lourenço
My comment was in regard to the many calls, not including me, to have Flash added to the Prop2 as a designed-in feature. I was explaining the most likely reason why Chip never even entertained having Flash. I'm not sure if Chip has ever said anything on the matter.
It just always seemed better to keep the eeprom/flash complexity out of the chip design. Commodity memory is super-cheap and low-pin-count, anyway.
Makes sense, as P2 is cheaper and faster as a result.
Did you ever discuss a two-die package option with OnSemi, where they bond the SPI memory internally?
On the topic of packages, are OnSemi able to do a BGA version (flip-chip?) of the P2 using the current die, or does BGA need a different pad scheme?
They build two-metal-layer interposer dies in their 500nm process for this purpose. They are really cheap, like 5 cents. These interposers can be used to connect multiple dies together or for BGA packages, too, I believe.
Unlike Flash, MRAM is a RAM, so it can directly be main memory. But it comes with more masks, more processing steps, and more complexity in handling what will likely be higher cell latency too. The big win over SRAM is capacity: MRAM is a lot denser than SRAM. It's worth the extra design effort to get more hubRAM on the same size die.
Doesn't MRAM have a read limitation, not just a write limitation?
Cluso,
That can be adjusted as needed. Obviously needs a change to instructions. We're talking Prop3 territory.
When you say limitation, do you mean a wear lifetime? You may be thinking of FRAM. It wears out on both reads and writes. Its reads are destructive, so every read requires a rewrite. It can handle a lot more cycles than Flash, but eventually it will die from use. Not so for MRAM.
On datasheets you will see an estimated cycle-lifetime figure given for MRAM, but you'll also find the same for SRAM and DRAM now too. As far as I can tell, this figure is just an ass-covering exercise for all RAM types. Or at least it's no more than slapping an expected life on CMOS, based on all the various ways any chip can be damaged mechanically or electrically.
Newer SRAMs (reduced feature size) with internal ECC are becoming more common. This of course adds to the actual size, as 12 bits are used to support 8 real bits (see the sketch after this post).
Many/most servers are now sporting ECC DRAM. And while most PC workstations do not support ECC DRAM, there is growing evidence that workstations should really be using ECC DRAM too.
IMHO the rowhammer effect is confirmation that reduced feature size is now compromising the integrity of DRAM.
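To make that 12-bits-per-8 figure concrete, here is a minimal sketch of a single-error-correcting Hamming(12,8) code in C. It's one common way to arrive at that ratio, not the specific ECC any given SRAM macro uses (real parts often add a 13th bit for SECDED):

```c
#include <stdint.h>
#include <stdio.h>

/* Codeword positions (1-based) that hold the 8 data bits;
 * parity bits live at positions 1, 2, 4 and 8. */
static const int data_pos[8] = {3, 5, 6, 7, 9, 10, 11, 12};

/* Encode 8 data bits into a 12-bit codeword. */
static uint16_t hamming12_encode(uint8_t data)
{
    uint16_t cw = 0;

    for (int i = 0; i < 8; i++)                 /* place data bits */
        if (data & (1 << i))
            cw |= 1u << (data_pos[i] - 1);

    /* Parity bit at position p covers every position whose index
     * has bit p set; set it so each group has even parity. */
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && (cw & (1u << (pos - 1))))
                parity ^= 1;
        if (parity)
            cw |= 1u << (p - 1);
    }
    return cw;
}

/* Correct a single flipped bit (if any) and return the data byte. */
static uint8_t hamming12_decode(uint16_t cw)
{
    int syndrome = 0;
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && (cw & (1u << (pos - 1))))
                parity ^= 1;
        if (parity)
            syndrome |= p;
    }
    if (syndrome)                               /* syndrome = bad bit position */
        cw ^= 1u << (syndrome - 1);

    uint8_t data = 0;
    for (int i = 0; i < 8; i++)
        if (cw & (1u << (data_pos[i] - 1)))
            data |= 1 << i;
    return data;
}

int main(void)
{
    uint16_t cw = hamming12_encode(0xA5);
    cw ^= 1u << 6;                              /* flip one stored bit */
    printf("recovered: 0x%02X\n", hamming12_decode(cw));   /* prints 0xA5 */
    return 0;
}
```

Flip any single one of the 12 stored bits and the syndrome points straight at it, which is why 4 check bits are enough to protect 8 data bits.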
Wear factor, aka endurance, is permanent damage. It would be cause for alarm if ECC were having to prop that up. Wear levelling in SSDs is a mitigation for wear factor.
Endurance gets used for both, I've noticed. It's certainly annoying, when searching the term, to find one article talking about cell-damage rate while another talks about bit-error rate, and then a Wikipedia article mixing the two together with no apparent distinction.
If we went to a 28nm process, that would afford an easy 16MB of hub SRAM. Cog SRAM could quadruple, too, if we went from 32 to 36 bits. 64 bits would seem the next logical step, but something just a bit bigger than 32 would allow a nice fit for everything.
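A rough check of that figure, assuming ideal area scaling from the P2's 180nm process and its 512 KB of hub RAM (real SRAM cells and the analog/pad ring don't scale anywhere near this well):

$$\left(\frac{180}{28}\right)^2 \approx 41, \qquad 512\,\mathrm{KB} \times 41 \approx 20\,\mathrm{MB}$$

so 16 MB in roughly the same hub-RAM area looks plausible on paper, with margin left for the parts that don't shrink.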