Intel rumored to be introducing a 50-core processor in 2012

Comments

  • Kye Posts: 2,200
    edited 2011-06-21 07:22
    Heh, the thing is programmers have yet to figure out how to use so many processors... Until then, many cores offer very diminishing returns.

    Graphics and such can use more cores, but office workers are the largest market for Intel to sell to. If it doesn't speed up Word, it's not really a great selling point. Hmm, luckily it seems in this day and age that you have to keep upgrading your computer with the newest technology to be able to run the most advanced piece of virus protection to stay ahead...
  • Mike Huselton Posts: 746
    edited 2011-06-21 07:57
    I agree. The Intel announcement was directed at a very specialized market, such as grid computing or communications switching. Imagine the peak current requirements at the highest switching speeds...
  • Leon Posts: 7,620
    edited 2011-06-21 08:03
    Humanoido claims to be able to do something useful with hundreds of Propellers and a multi-core graphics card. :)

    Seriously, techniques for programming hundreds of cores have been around for many years.
  • Sariel Posts: 182
    edited 2011-06-21 09:18

    luckily it seems in this day and age that you have to keep upgrading your
    computer with the newest technology to be able to run the most advanced piece of
    virus protection to stay ahead...

    Could not have said it better myself. If you have not had the pleasure, uninstall Norton once, then go into the registry and look for Symantec or Norton, and see if it is really gone. WHO IS THE VIRUS NOW?!
  • Ale Posts: 2,363
    edited 2011-06-21 10:18
    Speeding up word processing is just a waste of time. People type maybe 100 words per minute... that puts a limit on how fast it has to be.

    On a serious note, that many cores can be used for quantum-chemical calculations. The algorithms exist. Gimme gimme gimme :)
  • vettezr1 Posts: 77
    edited 2011-06-21 17:58
    I should keep my mouth shut, but I just cannot resist. 50 cores, huh? Maybe that will make my Intel Extreme, with 12 GB of RAM, 6 TB of hard disk, a 256 GB SSD boot drive, 3 ATI 5870 cards in SLI, and Windows 7 with Office 2007, boot almost as fast as my S-100 Z80 computer with dual 8-inch drives, a 5 MB hard drive, and a whopping 64K (not meg, K) running MP/M with dBase, WordStar, and Lotus 1-2-3. It boots in about 8 seconds, runs 5 other terminals and boots them in seconds as well, along with a word processor and dBase. It blows my mind that it can be so darn fast, while the super Windows setup I paid over $7K for is, in plain English, a piece of sh1t.

    I am so sick of waiting for Windows to load or update or do some other Smile. And viruses: every single day I have to run updates. I do not do anything but get email and go to Parallax and other tech documents, yet Norton finds 30-45 serious threats every time it runs. This is just pathetic. Oh, and I can't forget how my Explorer 9 blows up so often that I switched to Firefox. Really, Intel can build a zillion cores, but as long as it runs Windows it will need that many just to keep up with a Commodore.
  • SSteve Posts: 808
    edited 2011-06-21 18:51
    Those processors would sure speed up our physics simulation software. Right now we're using an 8-core Xeon Mac Pro.

    My brother-in-law is finishing up his PhD in physics at Stanford. He is specializing in numerical simulations. At the job interview for the organization where he now works, he asked how many processors he'd typically get to use for a run. The interviewer said anywhere from thirty to fifty thousand. Now that's what I call massively parallel.
  • jmg Posts: 15,149
    edited 2011-06-21 21:28
    Since this topic is about 'other devices', here are some more:

    TI is releasing a dual-core DSP+M3, rather similar to NXP's asymmetric M0+M4 pairing.

    http://focus.ti.com/mcu/docs/mcuproductcontentnp.tsp?sectionId=95&familyId=2049&tabId=2743
    claims 10ku price points of:
    F28M35Ex | 60/60 MHz             | up to 1MB Flash, 132KB RAM | Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP | $6.71
    F28M35Mx | 75/75 MHz             | up to 1MB Flash, 132KB RAM | Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP | $9.12
    F28M35Hx | 150/75 or 100/100 MHz | up to 1MB Flash, 132KB RAM | Ethernet, USB (OTG), SPI, SCI, CAN, I2C, McBSP | $11.76

    They mention "Expected Shipment on July 15th" for the $99 development system card.

    A better table of parts, with price indicators, can be found here:
    http://focus.ti.com/lit/ml/sprb203/sprb203.pdf


    and I see Freescale has expanded their DSC line to 32 bits, and claims it "starts under $2 (USD) in 10,000-piece quantities."
    That $2 is likely for the smallest part, at 48 pins, 64KB flash, 60MHz.
    They claim some high-precision timers, but it is less clear whether those are 32-bit capable timers.
    http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MC56F84xx&tid=vanMC56F84xx
  • Clive Wakeham Posts: 152
    edited 2011-06-22 02:57
    Just what I need. After all, my 8-core i7 is having trouble running Windows 7...
  • RS_Jim Posts: 1,755
    edited 2011-06-22 05:31
    And we thought the OSes were software gluttons now!
    Jim
  • HollyMinkowski Posts: 1,398
    edited 2011-06-22 05:32
    It's a shame that getting better performance will take a large
    number of cores. The single-core computer is so simple, so easy to
    program.

    Imagine a single core at 500 GHz: you could have an interrupt
    firing 100,000 times each second and still have enough time to
    execute many thousands of lines of code inside each interrupt.
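    That budget is easy to check with a little arithmetic. A minimal sketch; the 500 GHz clock and the 100 kHz interrupt rate are the hypothetical numbers from the post above, not real hardware:

    ```c
    /* Cycle budget per interrupt on a hypothetical 500 GHz single core
       handling 100,000 interrupts per second. */
    #include <stdio.h>

    int main(void) {
        const double clock_hz = 500e9;      /* hypothetical 500 GHz core */
        const double irq_rate_hz = 100e3;   /* 100,000 interrupts per second */
        printf("cycles per interrupt: %.0f\n", clock_hz / irq_rate_hz);
        /* Prints 5000000: five million cycles of headroom per interrupt,
           i.e. thousands of lines of code even at several cycles each. */
        return 0;
    }
    ```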

    But it looks like 5 GHz or so is about as fast as a single core can
    be pushed. So unless we are rescued by some kind of quantum
    computing breakthrough, the old simple single-core methods are
    doomed.

    Good compilers for multi-core systems are going to be really
    hard to create. The complexity is staggering.

    When you have a single core, or a handful of cores then you can
    still get your head around the hardware and have an intuitive grasp
    of exactly what is going on. But with hundreds or thousands of cores
    the hardware is just too complex.

    It seems like we will soon have massively parallel computers that
    we will create code for using some kind of intelligent program generators.
    But will anyone really be able to understand exactly what the machine
    is doing inside anymore? What kind of debugger could try to find an error
    in a system so complex?

    In a decade our cellphone, or whatever that device has morphed into, will
    probably have at least hundreds of cores stacked up in a layered array.
    Even these cheap and common devices will become impossible to program
    using the techniques of today. Of course the cellphones we have now are
    very powerful; the ARM CPU inside is able to do things like real-time language
    translation, voice recognition, and turning a printed page into audible speech
    for the blind using the internal camera. But people will come to expect apps
    like augmented reality to be running on these devices, and that will take many
    cores.

    In a few years salvaging thrown out cellphones and hacking them into other
    devices will be a fun hobby. It can be fun even now since the cast off phone
    is free and the cpu inside is pretty powerful. I wish I had the time to look into
    it as a hobby.
  • Kye Posts: 2,200
    edited 2011-06-22 08:37
    Programming is done in single-core logic.

    Eg...

    Do this,
    Then this,
    Etc,

    Algorithms exist to program multiple cores, Leon - THERE IS A LOT OF STUFF that can be done using parallel programming. But most of the big-money consumer applications need only a few threads and don't benefit linearly from SIMD instructions and such - very diminishing returns...
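    That diminishing-returns point is usually quantified with Amdahl's law: if only a fraction p of a program can run in parallel, n cores give a speedup of 1/((1-p)+p/n). A minimal sketch; the p values are illustrative assumptions, not measurements from any real application:

    ```c
    /* Amdahl's law: speedup on n cores when only a fraction p of the
       work is parallelizable. The p values below are illustrative. */
    #include <stdio.h>

    static double amdahl_speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        const int cores[] = {2, 4, 8, 50};
        for (int i = 0; i < 4; i++) {
            int n = cores[i];
            /* p=0.50: roughly a typical desktop application;
               p=0.95: roughly a well-structured simulation kernel. */
            printf("n=%2d  p=0.50 -> %.2fx   p=0.95 -> %.2fx\n",
                   n, amdahl_speedup(0.50, n), amdahl_speedup(0.95, n));
        }
        /* At n=50, p=0.50 gives only about 1.96x: very diminishing returns. */
        return 0;
    }
    ```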
  • Leon Posts: 7,620
    edited 2011-06-22 08:50
    I agree, SIMD or MIMD isn't suitable for ordinary applications. However, viable software techniques and programming languages for such parallel systems have been around for many years, and are well-understood.
  • Pharseid380 Posts: 26
    edited 2011-06-22 09:25
    Of course, the other big application market for the typical PC is running games, where parallel techniques are pretty standard. I would guess that drives the demand for processing power a lot more than faster word processing would.

    -phar
  • potatohead Posts: 10,255
    edited 2011-06-22 12:58
    A 50-core CPU coupled with multiple graphics engines would be very attractive to the simulation markets. Things like mold flow analysis for plastic parts need serious compute and can easily be parallelized. The same goes for FEA and CFD applications. The recent trend was to put much of the problem onto a GPU, or a cluster of GPUs. This will be interesting because some problems are hard to fit into a GPU core, yet can still be parallelized.

    Looks like we are gonna get some seriously great technical computing workstations! Life sciences, mechanical simulation, electrical simulation, fluids, aeroelasticity, etc... All will rock on this CPU hard, probably replacing a cluster.
  • Heater. Posts: 21,230
    edited 2011-06-22 13:53
    Well, Intel have lots of bright ideas.
    Remember the i432, anyone? No, didn't think so.
    What about the i860 then? No.
    OK, surely you have heard of the Itanium? Good, see what I mean?

    The problem always seemed to be that on paper these things had cutting-edge performance as hardware, but it was impossible for the software to realize it. For example, it was just too hard for the compiler writers.

    The only thing they had that took off was the x86. Mostly people thought, and think, that this is not so cool. It just happened that IBM selected it for their brain-dead PC.

    So I'm not going to get too excited over their 50-core dreams just yet.
  • Leon Posts: 7,620
    edited 2011-06-22 14:13
    Meiko and Parsytec built massively parallel systems based on the i860.
  • potatohead Posts: 10,255
    edited 2011-06-22 15:57
    In the fields I mentioned, Itanium, for example, has been very successful.
  • localroger Posts: 3,451
    edited 2011-06-22 16:16
    All this interest in multiple cores by people like Intel is only happening because they've figured out how to put more transistors on the wafer than they can use, but they can't figure out how to get the speed above 5 GHz or so, and even that takes water cooling.

    The next real break will be ICs made from a different material like diamond or graphene. That will get the speed up with less dense packing and more reasonable insulator thicknesses for the power dissipation, and nobody except us interrupt-phobes and some people working on very narrow specific problems will care about multicore architecture.
  • HollyMinkowski Posts: 1,398
    edited 2011-06-22 16:17
    Creating optimized application software for CPUs with a massive number of cores
    is in its infancy... actually, not even quite born yet.

    The more I ponder it the more complex it all seems. Creating efficient software for
    these devices will probably be more like running a supercomputer array to solve
    complex problems like climate modeling. The more MIPS you can throw at the problem
    the better your solution will be. This will give the quality edge to big operators like
    Microsoft. They could create a very complex application for a multi-core
    device that was a lot more efficient than anything a small shop could ever build. They could
    have a world class supercomputer chew on the problem for weeks to get good efficiency.
    It would be kinda like a render farm creating a sequence of video frames for a big
    Hollywood film. The big farm at ILM could do world class work but a guy with just a
    small computer array in his basement would turn out a mediocre series of frames.

    One problem is that the systems will be so complex that every innovation added
    or perhaps even core number expansion on end user devices will require rebuilding of the
    complex software that runs on a supercomputer array to generate application
    software. A kind of return to the days where every new CPU required you to learn
    a new asm variant in order to program it.

    I see augmented reality software being the first class of consumer applications that will make
    full use of a large number of cores efficiently. An augmented reality system will be hundreds of
    applications running in real-time and delivering a smooth virtual world to the user. It will
    blend this 3D virtual world into a representation of the real world that will be rendered
    to varying degrees. People will soon become addicted to using augmented reality gear and will
    demand faster and better. It will become unthinkable to go out in public without wearing your
    augmentation gear (probably some type of device worn like a visor or goggles).

    No, word processors don't need to run a million times as fast as they do now :lol:

    I for one will miss being able to really understand the hardware that will run my programs.
    Extending Moore's Law out over the next few decades means we will have to accept some
    changes. Just as we can't beat a supercomputer at chess anymore we soon won't be able
    to directly write our own software any more...we will just be describing what we need done
    and the rest will be a sort of magic.
  • lonesock Posts: 917
    edited 2011-06-22 16:17
    If we had 50 cores we'd use 50 cores. Sure, it's not useful for how we currently do word processing, with a manual keyboard for data entry. Now, say you have 50 cores available...in my case the computer would probably end up using 1 for the actual text book-keeping, 20 for the uber-accurate speech recognition algorithms, 10 to real-time decode my brain-wave input device, 5 to render my streaming (DRM'd, no doubt) video, and the remaining 14 trying to catch my hideous spelling errors [8^)

    (And I'm not exactly sure grandpa doing word-processing is the biggest market for the chip manufacturer...Dell's using the lowest budget cut-rate throttled down processor for the PC they sold him, so how much money actually makes it into Intel's pocket?)

    As the average number of cores has increased, the software has adapted to take advantage of it. I can now run viruses full time on one of my 4 cores without taxing the other 3. Not to mention I do a lot of computation-type programming... multi-core is the norm, and if you give me more I'll use more... libraries like OpenCL will even hide the complexity from me, at least once I switched away from a single-core-only mindset.
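    The "give me more and I'll use more" pattern is easy to sketch for data-parallel work. This is not OpenCL, just a minimal POSIX-threads version of the same idea, with a made-up workload: each thread sums its own slice of an array, and using more cores just means cutting more slices:

    ```c
    /* Minimal data-parallel sketch with POSIX threads. The array
       contents and thread count are made up for illustration.
       Compile with: cc demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4            /* bump this as cores become available */

    static double data[N];

    typedef struct { int lo, hi; double sum; } Slice;

    static void *sum_slice(void *arg) {
        Slice *s = arg;
        s->sum = 0.0;
        for (int i = s->lo; i < s->hi; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        Slice slice[NTHREADS];

        for (int i = 0; i < N; i++)
            data[i] = 1.0;                         /* made-up workload */

        int chunk = N / NTHREADS;
        for (int t = 0; t < NTHREADS; t++) {
            slice[t].lo = t * chunk;
            slice[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &slice[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += slice[t].sum;
        }
        printf("total = %.0f\n", total);           /* expect 1000000 */
        return 0;
    }
    ```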

    One =(difficult jump)=> two =(easy jump)=> many [8^)

    Jonathan
  • Martin_H Posts: 4,051
    edited 2011-06-22 16:48
    For servers, 16 cores isn't all that unusual right now, so I imagine a 50-core CPU will fit right in. For the desktop you need the right application to make it useful, mathematical modeling being a likely candidate.
  • potatohead Posts: 10,255
    edited 2011-06-22 20:42
    Exactly!

    A 50-core CPU is pretty exciting for that computing niche. What I think is really interesting is that GPU compute is getting very, very good. A simulation solve on, say, a dual Xeon vs. a few NVidia cards is almost no contest for many problems. Having GPU code in the solver is a big deal. Compute speed is very high, putting high-end CPUs at a disadvantage precisely because they do not offer anywhere near as many cores, and it's possible to stuff several graphics cards into a machine for insanely short solve times on well-distributed problems.

    I think Intel is feeling the pressure from the GPU manufacturers in these kinds of niches. This CPU is a response to that. GPU manufacturers are repurposing their stuff to do non-graphics, compute only tasks too.

    Multi-core with a lot of math is a hot spot right now. One example I experienced recently had to do with a plastic part mold flow simulation. Multi-discipline: flow, thermal, etc... On a high-end i7 3+ GHz CPU it took many hours, actually the better part of a day, to do a solve. The same solution on a GPU took a small fraction of the time, a couple of hours. With a few graphics boards it could be under one hour, where the same scaling with CPUs is very expensive right now, meaning dollars per compute unit * power consumption isn't favorable at all.
  • Humanoido Posts: 5,770
    edited 2011-06-23 06:23
    You can have 724 cores (or more) right now if you buy Apple with AMD. There's no need to wait.

    You can buy Parallax Propeller chips and put together several partitions of 50 props each. Again, no need to wait.
  • Martin_H Posts: 4,051
    edited 2011-06-23 07:01
    Quality of the cores counts as much as quantity. Each core introduces contention for shared resources like the system bus or mass storage. For example, fifty cores without enough local cache RAM will end up in wait states and lose any advantage as the bottleneck shifts to RAM access. High-performance computer design requires balancing advances in each sub-system so that all bottlenecks are raised equally.
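    One way to see that bottleneck shift is to compare a loop that mostly moves memory against one that mostly does arithmetic. A hedged OpenMP sketch; the array size is arbitrary and actual scaling depends entirely on the machine's cache sizes and RAM bandwidth:

    ```c
    /* Compute-bound vs memory-bound scaling sketch (OpenMP).
       Compile with: cc demo.c -fopenmp -lm */
    #include <math.h>
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)   /* ~16M doubles, far larger than any cache */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        if (!a || !b) return 1;
        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* Memory-bound: one add per two loads and a store. More threads
           mostly means more contention for the same RAM bus. */
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = a[i] + b[i];
        printf("memory-bound:  %.3f s\n", omp_get_wtime() - t0);

        /* Compute-bound: many math ops per element fetched. This is the
           kind of loop that keeps scaling as cores are added. */
        double t1 = omp_get_wtime();
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += sin(a[i]) * cos(b[i]);
        printf("compute-bound: %.3f s (sum=%g)\n", omp_get_wtime() - t1, sum);

        free(a); free(b);
        return 0;
    }
    ```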
  • jmg Posts: 15,149
    edited 2011-06-23 13:57
    potatohead wrote: »
    One example I experienced recently had to do with a plastic part mold flow simulation. Multi-discipline: flow, thermal, etc... On a high-end i7 3+ GHz CPU it took many hours, actually the better part of a day, to do a solve. The same solution on a GPU took a small fraction of the time, a couple of hours. With a few graphics boards it could be under one hour, where the same scaling with CPUs is very expensive right now, meaning dollars per compute unit * power consumption isn't favorable at all.

    This shows why Intel is now releasing a commercial chip for this sector.
    NVidia was getting commercial traction with their offering, and delivering better performance, and companies like Intel cannot ignore that.

    Package and power envelopes could be interesting, as would on-chip resources, and what they had to 'throw overboard'.