
The Cores are Coming!

Humanoido Posts: 5,770
edited 2010-08-16 05:21 in Propeller 1
http://www.wired.com/gadgetlab/2009/10/tilera-100-cores/

I think future Props will break the 8-cog barrier. Even if per-core speed is lower, there's considerable power in having numerous clustered processors operating in unison. Maybe we won't see more cores in the next Prop, but the one after will be a monster of a chip!

It is interesting that microprocessor clock speeds have reached a plateau and that the next level of evolution is the move to multiple processors.

We are beginning to see some of the power of multiple cores. Eight cores can do fantastic things. Just wait until they morph into 800 or more.

Humanoido

Comments

  • Baggers Posts: 3,019
    edited 2010-08-15 16:03
    Interesting read.
    Also interesting how all external RAM has to go through one core though :)
    I wonder if they'll have a go at taking on nVidia by using it in a graphics card to do ray casting, as it's not locked to drawing polys.
  • jmg Posts: 15,185
    edited 2010-08-15 16:12
    RAM may not be as 'sexy' as core counts, but it is vitally important to practical projects.
    So the ultimate test of these fringe devices is how well they run real software.
  • Chuckz
    edited 2010-08-15 16:16
    I went through this discussion with people who actually build computers, and the problem with having many cores is that each has to wait its turn. Suppose the chip is used in a computer and you want one of the cores to access a hard drive. That core has to wait its turn, which could create a bottleneck and add latency. It may not be an efficient design. (A rough sketch of how the waiting scales is at the end of this post.)

    The next problem will be heat and having to cool the chip. Another will be die size: imagine producing a die on which one of the cores doesn't work. If too many chips come out partly defective, the problem becomes an economic one.

    It isn't an area I specialize in, so I can't tell you how to handle it or whether chip makers will find it affordable. The other problem is that they will market it toward the high end of the computer range, which will be expensive and leave most hobbyists out.
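
    Back to the waiting-their-turn point, a rough back-of-the-envelope in C (the figures are made up; only the scaling trend matters):

        /* Sketch: N cores all needing one disk, strictly one at a time.
         * The 5 ms figure is invented; only the trend matters. */
        #include <stdio.h>

        int main(void)
        {
            const double io_ms = 5.0;   /* time one core holds the disk */
            for (int cores = 1; cores <= 64; cores *= 2) {
                /* if every core requests at once, the last waits for all the rest */
                double worst_wait_ms = (cores - 1) * io_ms;
                printf("%2d cores: worst-case wait %6.1f ms\n", cores, worst_wait_ms);
            }
            return 0;
        }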
  • BradC Posts: 2,601
    edited 2010-08-15 16:39
    Baggers wrote: »
    Interesting read.
    Also interesting how all external RAM has to go through one core though

    Not familiar with NUMA architectures then?

    Also, have a look at the architecture used in recent AMD processors, where the memory controller sits on a high-speed internal bus as a peer to the cores.
  • Kevin Wood Posts: 1,266
    edited 2010-08-15 17:28
    Chuckz wrote:
    I went through this discussion with people who actually build computers, and the problem with having many cores is that each has to wait its turn.

    This is an area where the Propeller doesn't scale very well. The more cogs you add, the longer each waits for hub access.
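
    A minimal sketch of that scaling in C, assuming a Propeller-style round-robin hub where each of n cogs gets one slot every 2*n system clocks (the Prop 1 rotation for 8 cogs is 16 clocks):

        /* Sketch: worst-case hub wait for a round-robin hub, assuming
         * each of n cogs is granted one hub slot every 2*n clocks. */
        #include <stdio.h>

        int main(void)
        {
            for (int cogs = 8; cogs <= 128; cogs *= 2) {
                int rotation   = 2 * cogs;      /* clocks per full hub turn  */
                int worst_wait = rotation - 1;  /* you just missed your slot */
                printf("%3d cogs: hub slot every %3d clocks, worst-case wait %3d clocks\n",
                       cogs, rotation, worst_wait);
            }
            return 0;
        }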
  • edited 2010-08-15 17:42
    Kevin Wood wrote: »
    This is an area where the Propeller doesn't scale very well. The more cogs you add, the longer each waits for hub access.

    Multitasking is probably the better word for what the Prop can do, and multitasking is sometimes better than single-tasking. Imagine if you wanted to run Microsoft Word, Excel, and another program, but kept having to close one to run the other. You would waste your time closing and loading programs.
  • Cluso99 Posts: 18,069
    edited 2010-08-15 21:10
    The Prop II will have hub access that is 2x faster and 4x wider, and the chip clock is 2x faster. So, 2x * 4x = 8x faster hub access, at twice the core speed. I am keeping the core speed separate (the arithmetic is sketched at the end of this post).

    The cog itself is also 8x faster: 4x from executing 1 instruction per clock instead of 1 per 4, and 2x from the clock.

    On top of this, there is a block transfer instruction that will move a lot of memory at 4x32 bits per transfer from/to hub.

    The Prop II is touted to have extra FIFO memory per cog - 256 or 512 longs???

    Two things I advocated were more cogs and a mechanism to allow cogs additional hub access slots on a priority basis. So you could have one cog getting 4x access and the other cogs getting reduced access unless the priority cog did not require its slot (that kills determinism, though).

    Another suggestion was to allow each cog to pair up with an adjacent cog and take over all its resources, meaning bank-switching its extra 512 longs of code space and taking its hub access cycle.

    We may see these things on the Prop III. Remember, this is not targeting an Intel/Atom or other high-powered PC/laptop/iPad replacement chip.
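
    For anyone who wants the multiplication laid out, a minimal sketch in C (every figure is an assumption from this post, not a measured number):

        /* Back-of-the-envelope only; all multipliers are assumptions. */
        #include <stdio.h>

        int main(void)
        {
            double hub_rate  = 2.0;  /* hub slots assumed to come around 2x as often */
            double hub_width = 4.0;  /* each slot assumed to move 4x the data (4x32) */
            double clock     = 2.0;  /* assumed system clock multiplier              */
            double ipc       = 4.0;  /* 1 instruction/clock vs 1 per 4 on the Prop 1 */

            printf("hub access: %.0fx faster\n", hub_rate * hub_width); /* 2x4 = 8x */
            printf("cog speed:  %.0fx faster\n", clock * ipc);          /* 2x4 = 8x */
            return 0;
        }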
  • Humanoido Posts: 5,770
    edited 2010-08-15 23:14
    Chuckz wrote: »
    I went through this discussion with people who actually build computers, and the problem with having many cores is that each has to wait its turn. Suppose the chip is used in a computer and you want one of the cores to access a hard drive. That core has to wait its turn, which could create a bottleneck and add latency. It may not be an efficient design.
    This is true; however, the power of these processor clusters does not lie in accessing hard drives at the same time. The true gains come from unleashing their simultaneous computational power.
    Chuckz wrote: »
    The next problem will be heat and having to cool the chip. Another will be die size: imagine producing a die on which one of the cores doesn't work. If too many chips come out partly defective, the problem becomes an economic one.
    There are many ways to cool a chip. Small Peltier devices, essentially tiny electric refrigerators, have been used for decades in astronomical CCD imaging sensors, cooling them to around -107 deg. F. The technology is available for multi-core chips when needed.

    Dies can be produced with an "extra quantity" of cores to compensate for defective ones. A chip rated for 100 cores but fabricated with 110 can have up to 10 defective cores disabled and still perform to spec.
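
    To put rough numbers on the spare-core idea, a quick sketch in C (the per-core yield values are made up; it only shows how 10 spares lift the odds of shipping a working 100-core part):

        /* Sketch: effective yield with spare cores. Assumes each of the
         * 110 cores works independently with probability p (invented values). */
        #include <stdio.h>
        #include <math.h>

        /* P(at least `need` of `total` cores are good) for per-core yield p */
        static double yield_with_spares(int total, int need, double p)
        {
            double sum = 0.0;
            for (int k = need; k <= total; k++) {
                double log_term = lgamma(total + 1) - lgamma(k + 1)
                                - lgamma(total - k + 1)
                                + k * log(p) + (total - k) * log(1.0 - p);
                sum += exp(log_term);   /* binomial term C(total,k) p^k (1-p)^(total-k) */
            }
            return sum;
        }

        int main(void)   /* compile with -lm */
        {
            for (double p = 0.90; p <= 0.991; p += 0.03) {
                printf("per-core yield %.2f: 110-with-spares %.3f vs 100-no-spares %.3f\n",
                       p, yield_with_spares(110, 100, p), yield_with_spares(100, 100, p));
            }
            return 0;
        }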
    Chuckz wrote: »
    It isn't an area I specialize in, so I can't tell you how to handle it or whether chip makers will find it affordable. The other problem is that they will market it toward the high end of the computer range, which will be expensive and leave most hobbyists out.
    New technology often starts out at a premium, but with competition and demand, the price will go down.

    High-end technology filters down in cost and availability, sometimes rather quickly. Hobbyists will have an exciting future with these new multi-core devices.

    Humanoido
  • Toby Seckshund Posts: 2,027
    edited 2010-08-16 01:05
    With the Prop's multi-core way of life, we use the cores for hardware reasons, and the Hub forces us to accept the bottleneck on resources.

    With huge, bloated OSes, there becomes a necessity for one core to handle software overheads such as virus scanning on a continuous basis, one core to "phone home to the mothership" to check continuously that the program or content is really yours and that you can still be given permission to use it, even if it is. Then there is a core for .... and a core for ....

    100 cores need to be fed and watered, and the fruits of their labours gathered back in; that will occupy a few more.

    I'm just an old cynic who is having to get his brain away from the "something else has to happen now, where's that interrupt?" way of thought.
  • Humanoido Posts: 5,770
    edited 2010-08-16 05:21
    Toby Seckshund wrote: »
    With the Prop's multi-core way of life, we use the cores for hardware reasons, and the Hub forces us to accept the bottleneck on resources.
    It's a kind of resource sharing that will be experienced not just by the Prop but by other chips as well, and it is a process that has been going on since the beginning of computing. For example, the IBM 360/50 was time-shared academically from university to university, and there were TSO (Time Sharing Option) terminals in the early days of computing.
    Toby Seckshund wrote: »
    With huge, bloated OSes, there becomes a necessity for one core to handle software overheads such as virus scanning on a continuous basis, one core to "phone home to the mothership" to check continuously that the program or content is really yours and that you can still be given permission to use it, even if it is. Then there is a core for .... and a core for ....
    The idea is either to be a clever programmer and avoid huge bloated OSes, or, with MCS (multiple core systems), to distribute that OS from processor to processor.
    Toby Seckshund wrote: »
    100 cores need to be fed and watered, and the fruits of their labours gathered back in; that will occupy a few more. I'm just an old cynic who is having to get his brain away from the "something else has to happen now, where's that interrupt?" way of thought.
    It's possible that each core will handle the transfer of its own information and deliver the fruit at harvest time, so you don't need one core hand-picking the entire garden and getting all worn out.
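
    One common shape for that, sketched in C with pthreads standing in for cores (all the names here are hypothetical): each worker delivers its result into its own mailbox slot, so no single core does the gathering.

        /* Sketch: per-core result mailboxes in shared memory. Each worker
         * writes only its own slot; the harvest is just reading them back. */
        #include <stdio.h>
        #include <pthread.h>

        #define NCORES 8

        static long results[NCORES];   /* one mailbox slot per "core" */

        static void *worker(void *arg)
        {
            long id = (long)arg;
            long sum = 0;
            for (long i = 1; i <= 1000 * (id + 1); i++)   /* stand-in work */
                sum += i;
            results[id] = sum;   /* deliver into this core's own slot */
            return NULL;
        }

        int main(void)   /* compile with -pthread */
        {
            pthread_t t[NCORES];
            for (long id = 0; id < NCORES; id++)
                pthread_create(&t[id], NULL, worker, (void *)id);
            for (long id = 0; id < NCORES; id++)
                pthread_join(t[id], NULL);                /* harvest time */
            for (long id = 0; id < NCORES; id++)
                printf("core %ld delivered %ld\n", id, results[id]);
            return 0;
        }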

    Humanoido