What sort of single chip/IC P2 version could we make using the whole 8" wafer?
Cluso99
Posts: 18,069
in Propeller 2
What could we make if the IC took the whole wafer?
The following info is taken from various recent threads...
The P2 (original 16 cogs, 512KB Hub and 64 SmartPins) logic+memories area is looking to be 72 mm2.
We only have 58 mm2 of space in the middle of our huge 8.5 x 8.5 mm die.
The wafer size is 8" (200mm) dia.
350 P2 dies fit, but 75% yield gives 262 usable P2s.
A P2 die is 8.5mm x 8.5mm = 72.25mm2
The outer ring frame (Analog+Digital I/O) = 72.25mm2 - 58mm2 = ~14mm2
The synthesis guy just came back and said that the logic+memories area is looking to be 72 mm2.
We have 16 instances of 8192x32 SP RAM at 1.57mm2 = ~25mm2.
... That also halves the main RAM, for now, but I believe they have a 16kx32 instance we could use to keep the hub RAM at 512KB.
We have 32 instances of 512x32 DP RAM at 0.292mm2 = ~9.3mm2.
Those RAMs total to ~34mm2.
Each smart pin is 1/9th the logic of a cog, so 64 of them are equivalent to 64/9 = 7 cogs.
The CORDIC is equivalent to 2 cogs.
So, we have 16 + 7 + 2 = 25 cogs' equivalent of logic here.
This means a cog's worth of logic is about 1.5 mm2 (72mm2 total less ~34mm2 of RAM leaves ~38mm2 of logic; 38 / 25 = ~1.5).
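To sanity-check those area figures, here is a quick back-of-envelope script (plain Python reproducing the numbers quoted above; the RAM instance areas come straight from the synthesis quotes, nothing here is measured silicon):

    # All areas in mm^2, straight from the figures quoted above.
    hub_ram_area = 16 * 1.57       # 16x 8192x32 SP RAM instances -> ~25.1
    cog_ram_area = 32 * 0.292      # 32x 512x32 DP RAM instances  -> ~9.3
    total_ram    = hub_ram_area + cog_ram_area    # ~34.5
    logic_area   = 72.0 - total_ram               # ~37.5, i.e. the "~38" above

    cog_equivalents = 16 + 64 / 9 + 2               # cogs + smart pins (1/9 cog each) + CORDIC -> ~25
    area_per_cog    = logic_area / cog_equivalents  # ~1.5 mm^2 of logic per cog

    print(f"RAM {total_ram:.1f} mm2, logic {logic_area:.1f} mm2, "
          f"{cog_equivalents:.1f} cog-equivalents, {area_per_cog:.2f} mm2 per cog")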
So let's play with some numbers...
P2 (16 cog, CORDIC, 512KB HUB RAM, 64 SmartPins, without ring frame) = 72mm2
Plus 512KB SP HUB RAM (1MB Total) = 72mm2 + 25mm2 = 97mm2
Less 64 SmartPins (~7 cogs = 7 x 1.5mm2 = 10.5mm2): 97 - 10.5 = ~86.5mm2
Therefore, a P2 with 16* 4KB DP RAM COGs, 1MB SP HUB RAM, CORDIC without SmartPins and without the ring frame = ~86.5mm2
A wafer can fit 350 x 8.5mm x 8.5mm = 350 x 72.25mm2 = ~25,287mm2 usable per wafer
25,287mm2 / 86.5mm2 = 292 new P2s x 75% yield = 219 usable new P2s.
So let's presume we get 200 usable new P2's (16* 4KB DP RAM COGs, 1MB SP HUB RAM, CORDIC, no SmartPins/IO/ring frame).
Plus 4* P2's (16* 4KB DP RAM COGs, 1MB SP HUB RAM, CORDIC, plus 32* Smart I/O Pins and the ring frame for 32 I/O) placed around the four sides/edges of the (round) 8" wafer.
Use larger pin pads for easy soldering.
So what do we get...
An 8" (200mm) dia IC with...
I/O: 128 Smart I/O Pins with 64 x 32-bit cores with CORDIC and 4KB DP Private RAM and 4MB shared SP HUB RAM
CPU: 3,200 x 32-bit cores with CORDIC and 4KB DP Private RAM and 200MB shared SP HUB RAM (~4,600 cores if 100% work)
At 200MHz, with 2-clock average instructions, that's 320,000 MIPS plus the CORDIC.
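The same assumptions can be strung together as a rough sanity check (a sketch only; the 200 compute + 4 I/O block split and the 75% yield are guesses from this thread, not data):

    # Rough totals for the wafer-scale part, all figures from the estimates above.
    die_area       = 86.5                        # mm^2 per stripped-down P2 block
    usable_wafer   = 350 * 72.25                 # ~25,287 mm^2, same packing as the current die
    gross_blocks   = usable_wafer / die_area     # ~292 blocks
    yielded_blocks = int(gross_blocks * 0.75)    # ~219; rounded down to 200 compute + 4 I/O blocks

    compute_blocks, io_blocks = 200, 4
    cores     = (compute_blocks + io_blocks) * 16        # 3,264 cogs total (3,200 compute + 64 I/O)
    hub_ram   = compute_blocks + io_blocks                # 204 MB of shared SP hub RAM
    smartpins = io_blocks * 32                             # 128 Smart I/O Pins on the edge blocks
    mips      = compute_blocks * 16 * (200e6 / 2) / 1e6    # 200 MHz, 2-clock average -> 320,000 MIPS

    print(cores, "cores,", hub_ram, "MB hub RAM,", smartpins, "smart pins,", int(mips), "MIPS")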
And yes, we will require some form of interconnect between these new P2 Blocks.
Comments
I recall there once was a company called WaferScaleIntegration (WSI), but they did not actually make wafer-scale parts...
There is the first clue of a logistics problem, even before you consider the bonding wires and who would actually want to buy an 8" device!!
At 75% yield, you need some means to tag and map those 'bad' parts, so they are skipped/avoided.
Reminds me of the stories of those Russian chips that came with their individual 'yield sheets' showing what actually worked, and the end user had to customize for each one...
In 1972 Ivor Catt patented some ideas on wafer-scale integration. There was even a company, Anamartic, with backers like Clive Sinclair and Tandem Computers, formed to commercialize it. I remember reading Catt's articles on the idea in Wireless World magazine as a kid.
Basically he suggests there is no need to test and mark all the bad parts. Just build enough communications fabric onto the wafer such that when powered up all the individual "nodes" can test themselves and each other and make a network of all the working ones.
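As a toy illustration of that idea (not Catt's actual scheme, just a minimal sketch: a grid of P2-sized blocks, a random 75% of which pass self-test, and a flood-fill from a seed block that only enlists working neighbours and routes around the rest):

    import random

    ROWS, COLS, YIELD = 18, 18, 0.75   # made-up wafer grid and yield
    random.seed(1)
    good = [[random.random() < YIELD for _ in range(COLS)] for _ in range(ROWS)]

    def build_network(seed=(ROWS // 2, COLS // 2)):
        """Flood-fill from the seed block, enlisting only neighbours that pass self-test."""
        if not good[seed[0]][seed[1]]:
            return set()
        network, frontier = {seed}, [seed]
        while frontier:
            r, c = frontier.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < ROWS and 0 <= nc < COLS and good[nr][nc] and (nr, nc) not in network:
                    network.add((nr, nc))
                    frontier.append((nr, nc))
        return network

    net = build_network()
    print(f"{len(net)} of {ROWS * COLS} blocks enlisted; bad or unreachable blocks are simply skipped")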
Needless to say the idea did not catch on and Catt is regarded as a bit of a nut job.
https://en.wikipedia.org/wiki/Ivor_Catt
I suspect the idea might still be viable for those wanting to build massively parallel processors for use as neural networks. Like the Google TPU.
Don't they use 12" wafers too?
And 3+ GHz!
Could get some really smart AI or video processing, or just plain "cloud servers".
We might have to dunk this thing into a fluid cooling system.
I can't help thinking that bonding out the million I/O pins is going to prove tricky.
It's not going to compete with Google's TPU for AI applications.
1. We have the coal.
2. We have the oil.
3. If we can prove a top shelf application, Ken will wire it up by hand.
4. AI?... you guys really believe all that? :) God help us.
What, you don't think AI is a real thing?
Consider the humble light switch in my living room.
When I press it up the light comes on, When I press it down the light goes off. Frikin magic, isn't it?
But wait, there is more.
It has a memory. When I go away it remembers if the light should be on or off and keeps it that way. Amazing.
And even more.
My light switch has a language. In its way it has symbols, a lexicon: press up and press down.
Not only that, it has a grammar, or syntax. Press up means put the light on. But press up followed by press up does nothing more than the first press up. It's a grammatical error. And so on. Me and my light switch have a conversation going on.
If you are impressed by the light switch in my living room the ones in my hallway will blow your socks off. I can press up or down at the entrance and up or down at the exit and the light knows what to do. The syntax is a bit more complicated.
Now, put a few billion of such switches together and you have AI.
Ah, you mean AI is not intelligent like a human? Well, fair enough. Early days yet.
I notice that the guys that do this kind of thing don't call it "AI" anymore. No doubt because it does not live up to what people like you expect. They like to talk about "Deep Learning". Which, as an extension of my light switches seems accurate enough.
No one except a fanboi will waste time on an 8" monster. It's an oddity, like a lava lamp or a dwarf as a manservant.
My bet is it cooks itself after powering up.
FWIW, the ARM chips use way less power than the Intel chips for similar densities. One has to improve the power-down design for sections that are not in use. Apple seem to be way ahead here.
-Phil
I live in a home once owned by a gay couple who sold furniture out of the top floor. The very strange chandelier (in what is now our entrance) sometimes comes on and then goes off for no apparent reason. I can turn the thing on and nothing happens. I can turn it off ... and nothing happens. But when I leave it on... sometimes it comes on and sometimes it goes off. Sometimes it comes on and I look at it... and it suddenly goes off... almost as if the lamp knows that I am watching. It happened again tonight... as I was reading your post:) No kidding.
So... while you might have perfectly deterministic lighting, mine is more of a paranormal variety or of a slightly undetermined nature:)
Unless a problem is parameterized correctly, AI is useless... but if a problem is parameterized correctly, AI is unnecessary.
The very definition of AI has been and will continue to shift... sort of how "global warming" has become "climate change." I don't discourage the work... Governments and private organizations should fund it. They should support anyone who becomes an expert... just don't expect any miracles.
If the definition of AI is reduced to a framework for establishing formal relationships between parameters, and it frees the user from doing a bunch of "if thens," it would be useful. But at that point the intelligence isn't artificial... and it isn't in the computer.
If you have a doctor implement AI for medical purposes... what you will get is the innate understanding and prejudices of that doctor cloaked in the imprimatur of AI supported by Big Blue.
They can just f...k off.
Regards
Rich
Of course this presupposes one has a definition of "intelligence". As far as I know there is no rigorous definition of intelligence, in a logical/mathematical sense.
I present my light switch as an extreme example in one direction of the definition. Like I say, it has memory, it has language, so it has some traits of intelligence. Of course most people would discount it from their definition of AI due to its extreme simplicity.
One definition I like is: "Artificial Intelligence is that which is required to solve any problem that we have not solved with computers yet". The canonical example of this is that it was suggested that the ability to beat humans at chess would be a demonstration of AI. Now that we have done that most people would discount it as such.
Seems to me there is one common trait of so called "AI" systems. That is that they come up with solutions to problems without being specifically programmed to solve those problems. There is no mathematical model that they are programmed with to solve the problem. No specific algorithm or program definition. The upshot of which is that having come up with a solution, nobody can explain how they arrived at it.
For example, when YouTube selects a bunch of video suggestions for me, there is nobody in Google who can tell me why those particular videos were suggested.
This "feature" of AI gets darn right scary in other situations. For example, what happens when your bank refuses you a loan on the basis of the output of some neural net? There is nobody who can tell you why your application was declined. Nobody knows. Nobody to talk to and nothing you can do. Changing banks won't help of course as all their AI's are sharing their data on you.
It's probably an X10 automated light switch and one of your neighbors' kids has guessed the house code. I ended up having to take all the X10 stuff out of my house for that same reason. Changing the house code would only fix it for a couple of days, until he tried them all and found the new one.
Not AI yet, but super useful and powerful.
Nothing created yet has agency. That's the line, IMHO.
Full agency is like us. I can see limited agency too, like kids, animals.
Specialized. Take a cat, for example. An AI with that level of agency, basic ability to plan, reason, predict, detect. Would be darn tough to get past, if it were on watch.
A little kid type, with a big memory? Extremely useful for mining data, performing tasks, parsing, computing, extrapolating.
Us? Well, maybe we shouldn't do that.
I guess you don't mean "intelligence" else you would have said so.
You don't mean "sentience" else you would have said that.
"Free will" perhaps?
I know, I'll ask Google:
Agency is the capacity of an actor to act in a given environment.
From:
https://en.wikipedia.org/wiki/Agency_(philosophy)
Hmm... no idea what that is about. Cats, dogs, mushrooms, radioactive nuclei all have agency then.
You imply that a state machine, even a big one, cannot have "agency".
But there is more: What is "Full agency" ?
Seems it is something you think you have that children and animals do not have.
Interestingly, by saying "like us" you suggest that I might have "full agency".
I don't know what either of them actually is, nor if they are emergent, or intrinsic.
I do know what they can do.
Watson knows stuff, and can process via very complex state, but my cat has and will demonstrate agency. Watson does not.
Kids and animals have it. Just vary by degree. "Full" is us, adult humans. I am completely open to dropping that distinction. May be binary. To refine that, I need to know what intelligence is. I don't.
I see one running in an inert bath. Get this cylinder, with radiator... lol
Q2: what do we seal the die with after soldering? Presume this is like the die bonded directly to the PCB?
Presuming the above, I wonder how big a die we could get from one of those shuttle runs?
There has to be some space between the chips on a wafer to allow cutting them into pieces, so - technically - it should be possible to connect wires even if they are not cut into pieces.
But connect to what exactly? Even your 4-prop board does not connect any props to each other, it's just exposing the pins.
Doing that on a wafer seems - hmm - useless, even impossible, to me.
Besides the big brain of humanoido(?) and having a nice conversation piece, I cannot see any way to use a wafer as a running project.
As for the AI thing: besides other jobs in my life, I raised, trained and sold dogs to the German police.
My take on AI would be that if it develops its own personality, it could be valued as an intelligence. But what a personality is depends on the viewer. For me a dog or cat has a personality; a frog or fly, say, does not.
But for frogs and flies as viewers, other frogs and flies MAY have a personality also.
Hmm
Mike
In any case, it's just pattern matching, whatever way it's performed. I see the same input/output behaviour as from IRC bots, which I programmed for some time some years ago. It was for our company internal IRC. The bots would listen and pattern match and come up with some output where they 'decided' it would be helpful. Same stuff as was done on many of the public IRC channels at the time, and still is, I guess (haven't been there for a long time). A casual glance could fool you into thinking there's intelligence there. It isn't, of course.
Whatever it is that Google programs to come up with their inane suggestions (not to mention their targeted ads), there's not an atom of intelligence there.
Neither on the supposed customer side:
"What Friedman revealed - in brief - was the following: "we've found out that 98% of our business was coming from 22 words. So, wait, we're buying 3,200 words and 98% of the business is coming from 22 words. What are the 22 words? And they said, well, it's the word Restoration Hardware and the 21 ways to spell it wrong, okay?"
http://www.zerohedge.com/news/2017-09-11/startling-anecdote-about-online-advertising-restoration-hardware
IC you've got some burgers there