Intel rumored to be introducing a 50-core processor in 2012
Mike Huselton
Posts: 746
I just received this item on the Next Big Future newsfeed:
http://nextbigfuture.com/2011/06/intel-will-introduce-50-core-processor.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2Fadvancednano+%28nextbigfuture%29
I just wanted to share this news...
Comments
Graphics and stuff can use more cores, but office workers are the largest market for Intel to sell to. If it doesn't speed up Word, it's not really a great selling point. Hmm, luckily it seems in this day and age that you have to keep upgrading your computer with the newest technology to be able to run the most advanced piece of virus protection to stay ahead...
Seriously, techniques for programming hundreds of cores have been around for many years.
Could not have said it better myself. If you have not had the pleasure, uninstall Norton once, then go into the registry and look for Symantec or Norton, and see if it is really gone. WHO IS THE VIRUS NOW?!
On a serious note, so many cores can be used for quantum chemical calculations; the algorithms exist. Gimme gimme gimme.
My brother-in-law is finishing up his PhD in physics at Stanford. He is specializing in numerical simulations. At the job interview for the organization where he now works he asked how many processors he'd typically get to use for a run. The interviewer said anywhere from thirty to fifty thousand. Now that's what I call massively parallel.
TI are releasing a dual-core DSP+M3, rather similar to the NXP asymmetric pairing of M0+M4:
http://focus.ti.com/mcu/docs/mcuproductcontentnp.tsp?sectionId=95&familyId=2049&tabId=2743
claims 10ku price points of:

Part        MHz (DSP / M3)       Memory                       Peripherals                                       10ku price
F28M35Ex    60 / 60              up to 1MB Flash, 132KB RAM   Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP    $6.71
F28M35Mx    75 / 75              up to 1MB Flash, 132KB RAM   Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP    $9.12
F28M35Hx    150/75 or 100/100    up to 1MB Flash, 132KB RAM   Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP    $11.76
They mention "Expected Shipment on July 15th" for the $99 development system card.
A better table of parts, with price indicators, can be found here:
http://focus.ti.com/lit/ml/sprb203/sprb203.pdf
and I see Freescale have expanded their DSCs to 32 bits, and claim it "starts under $2 (USD) in 10,000-piece quantities."
That $2 is likely for the smallest part, at 48 pins, 64K flash, 60 MHz.
It claims some high-precision timers, but it is not as clear whether they are 32-bit-capable timers.
http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MC56F84xx&tid=vanMC56F84xx
Jim
It looks like we have no choice but to move to a large number of
cores. The single core computer is so simple, so easy to program.
Imagine a single core at 500 GHz: you could have an interrupt
happening 100,000 times each second and still have enough time to
execute many thousands of lines of code inside that interrupt.
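Just to put rough numbers on that imaginary core (the 500 GHz and the
100,000 interrupts per second are the hypothetical figures above, not
real hardware), a quick back-of-the-envelope sketch in C:

/* Cycle budget per interrupt for the imaginary 500 GHz single core. */
#include <stdio.h>

int main(void)
{
    double clock_hz           = 500e9;  /* hypothetical 500 GHz core     */
    double interrupts_per_sec = 1e5;    /* 100,000 interrupts per second */

    double cycles_per_interrupt = clock_hz / interrupts_per_sec;
    printf("cycles available per interrupt: %.0f\n", cycles_per_interrupt);
    /* Prints 5000000 - about five million cycles per interrupt, which is
       plenty of room for thousands of lines of code. */
    return 0;
}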
But it looks like 5 GHz or so is about as fast as a single core can
be pushed. So unless we are rescued by some kind of quantum
computing breakthrough, the old simple single core methods are
doomed.
Good compilers for multi-core systems are going to be really
hard to create. The complexity is staggering.
When you have a single core, or a handful of cores then you can
still get your head around the hardware and have an intuitive grasp
of exactly what is going on. But with hundreds or thousands of cores
the hardware is just too complex.
It seems like we will soon have massively parallel computers that
we will create code for using some kind of intelligent program generators.
But will anyone really be able to understand exactly what the machine
is doing inside anymore? What kind of debugger could try to find an error
in a system so complex?
In a decade our cellphone, or whatever that device has morphed into, will
probably have at least hundreds of cores stacked up in a layered array.
Even these cheap and common devices will become impossible to program
using the techniques of today. Of course the cellphones we have now are
very powerful; the ARM CPU inside is able to do things like real-time language
translation, voice recognition, and turning a printed page into audible speech
for the blind using the internal camera. But people will come to expect apps
like augmented reality to be running on these devices, and that will take many
cores.
In a few years, salvaging thrown-out cellphones and hacking them into other
devices will be a fun hobby. It can be fun even now, since the cast-off phone
is free and the CPU inside is pretty powerful. I wish I had the time to look into
it as a hobby.
E.g.:
Do this,
Then this,
Etc.
Algorithms exist to program multiple cores, Leon - THERE IS A LOT OF STUFF that can be done using parallel programming. But most of the big-money consumer applications need only a few threads and don't benefit linearly from SIMD instructions and such - very diminishing returns...
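A minimal sketch of the diminishing-returns point, using Amdahl's law (the
serial fractions below are made-up illustrative numbers, not measurements of
any real application):

/* Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the fraction
   of the program that stays serial and N is the number of cores. */
#include <stdio.h>

int main(void)
{
    double serial_fraction[] = { 0.05, 0.25, 0.50 };
    int    cores[]           = { 2, 4, 8, 50 };

    for (int i = 0; i < 3; i++) {
        printf("serial fraction %2.0f%%:", serial_fraction[i] * 100.0);
        for (int j = 0; j < 4; j++) {
            double s = serial_fraction[i];
            double speedup = 1.0 / (s + (1.0 - s) / cores[j]);
            printf("  %2d cores -> %4.1fx", cores[j], speedup);
        }
        printf("\n");
    }
    /* Even at only 25% serial work, 50 cores give roughly a 3.8x speedup,
       and adding cores past that barely moves the number. */
    return 0;
}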
-phar
Looks like we are gonna get some seriously great technical computing workstations! Life sciences, mechanical simulation, electrical simulation, fluids, aeroelasticity, etc... All will rock on this CPU hard, probably replacing a cluster.
Remember the i432, anyone? No, didn't think so.
What about the i860 then? No.
OK, surely you have heard of the Itanium? Good, see what I mean?
The problem always seemed to be that on paper these things had cutting-edge performance as hardware, but it was impossible for the software to realize it. For example, it was just too hard for the compiler writers.
The only thing they had that took off was the x86. Mostly people thought/think that this is not so cool. It just happened that IBM selected it for their brain-dead PC.
So I'm not going to get too excited over their 50-core dreams just yet.
The next real break will be ICs made with a different material like diamond or graphene. That will get the speed up with less dense packing and more reasonable insulator thicknesses for the power dissipation, and nobody except us interruptphobes and some people working on very narrow specific problems will care about multi-core architecture.
The software side of these massively parallel devices is in its infancy... actually not even quite born yet.
The more I ponder it the more complex it all seems. Creating efficient software for
these devices will probably be more like running a supercomputer array to solve
complex problems like climate modeling. The more MIPS you can throw at the problem
the better your solution will be. This will give the quality edge to big operators like
Microsoft. They could create a very complex application that would run on a multi-core
device that was a lot more efficient than some small shop could ever build. They could
have a world class supercomputer chew on the problem for weeks to get good efficiency.
It would be kinda like a render farm creating a sequence of video frames for a big
Hollywood film. The big farm at ILM could do world class work but a guy with just a
small computer array in his basement would turn out a mediocre series of frames.
One problem is that the systems will be so complex that every added innovation,
or perhaps even an expansion of the core count on end-user devices, will require rebuilding the
complex software that runs on a supercomputer array to generate application
software. A kind of return to the days when every new CPU required you to learn
a new asm variant in order to program it.
I see augmented reality software being the first class of consumer applications that will make
full use of a large number of cores efficiently. An augmented reality system will be hundreds of
applications running in real-time and delivering a smooth virtual world to the user. It will
blend this 3D virtual world into a representation of the real world that will be rendered
to varying degrees. People will soon become addicted to using augmented reality gear and will
demand faster and better. It will become unthinkable to go out in public without wearing your
augmentation gear (probably some type of device worn like a visor or goggles).
No, word processors don't need to run a million times as fast as they do now.
I for one will miss being able to really understand the hardware that will run my programs.
Extending Moore's Law out over the next few decades means we will have to accept some
changes. Just as we can't beat a supercomputer at chess anymore we soon won't be able
to directly write our own software any more...we will just be describing what we need done
and the rest will be a sort of magic.
(And I'm not exactly sure grandpa doing word-processing is the biggest market for the chip manufacturer...Dell's using the lowest budget cut-rate throttled down processor for the PC they sold him, so how much money actually makes it into Intel's pocket?)
As the average number of cores has increased, so has the software adapted to take advantage of it. I can now run viruses full-time on one of my 4 cores without taxing the other 3. Not to mention I do a lot of computation-type programming... multi-core is the norm, and if you give me more I'll use more... libraries like OpenCL will even hide the complexity from me, at least once I switched away from a single-core-only mindset.
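A toy illustration of that "give me more cores and I'll use more" style,
using OpenMP rather than OpenCL purely to keep the sketch short (the file
name and compile line are my own, not from any post here):

/* sumsq.c - the runtime spreads the loop across whatever cores exist;
   the source does not change whether the machine has 2 cores or 50.
   Build with something like: gcc -fopenmp sumsq.c -o sumsq */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double x[N];

int main(void)
{
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        x[i] = (double)i;

    /* The reduction clause gives each thread a private partial sum and
       combines them at the end, so there is no manual locking to write. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i] * x[i];

    printf("sum of squares = %.6e (max threads: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}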
One =(difficult jump)=> two =(easy jump)=> many [8^)
Jonathan
A 50 core CPU is pretty exciting for that computing niche. What I think is really interesting is GPU compute is getting very, very good. A simulation solve, on say a Dual Xeon, vs a few nVidia cards is almost no contest for many problems. Having GPU code in the solver is a big deal. Compute speed is very high, putting high end CPUs at a disadvantage, precisely because they do not offer anywhere near the cores, and it's possible to stuff several graphics cards into a machine for insane short solve times, on well distributed problems.
I think Intel is feeling the pressure from the GPU manufacturers in these kinds of niches. This CPU is a response to that. GPU manufacturers are repurposing their stuff to do non-graphics, compute only tasks too.
Multi-core with a lot of math is a hot spot right now. One example I experienced recently had to do with a plastics part mold-flow simulation. Multi-discipline: flow, thermal, etc... On a high-end i7 3+ GHz CPU it took many hours, actually the better part of a day, to do a solve. The same solution on a GPU was a small fraction of the time, a couple of hours. With a few graphics boards it could be under one hour, where the same scaling with CPUs is very expensive right now, meaning dollars per compute unit times power consumption isn't favorable at all.
You can buy Parallax Propeller chips and put together several partitions of 50 props each. Again, no need to wait.
This shows why Intel is now releasing a commercial chip for this sector.
NVidia was getting commercial traction with their offering, delivering better performance, and companies like Intel cannot ignore that.
Package and power envelopes could be interesting, as would the on-chip resources, and what they had to 'throw overboard'.