
Death notice for Moore's Law

Interesting article in The Register. https://www.theregister.co.uk/2018/01/24/death_notice_for_moores_law/

I wonder if that will result in a shift back to coding with more concern for the size and efficiency of the resulting software.

Comments

  • Heater. Posts: 21,230
    Indeed.

    They are a bit late to the funeral. I recall a video showing a bunch of chip designers at a chip conference drinking a toast to the end of Moore's Law a few years ago now.

    From the article: "In lieu of flowers, donations should be lavished on Intel shares."

    Not so fast. There is a reason why Intel bought Altera.

    One fundamental problem here is: why do we have CPUs, with their sequential execution of instruction sets?

    Well, the major reason is that we were never sure quite what we wanted all that digital logic and compute power to do. So it needed to be easily adapted to different tasks. The easy way to do that is to create a CPU and let software engineers program it.

    That's great and all but it is horribly inefficient.

    Nowadays we know more about what we want to do. Over the decades we have come up with all kinds of super-fast algorithms to do useful things: graphics rendering, video codecs, the Fourier transform, neural networks...

    It's time to freeze those into hardware and get them running even faster. Hence GPUs, Google's Tensor neural-net chips and so on. Like we did with floating point maths a long time ago.

    But wait, freezing such things into silicon is still too rigid. We need a halfway house. What about the FPGA idea? Microsoft and others are already offering FPGAs on their cloud services.

    Hence Intel clutches at Altera for future progress.

    But as for good old general-purpose code, you are right, kwinn. There is no point in writing any more of it unless you have the skills to make it performant.

  • Clock Loop Posts: 2,069
    edited 2018-01-29 04:47
    All this regression is good progress, since optical processors still need good design, even when we get back on track.

    Or should I say, get us so far ahead on track that we do a tailspin... ;)

    I'm not complaining. Even if my PC takes a 30% hit, the SSD tech that arrived not so long ago changed the feel of computers so drastically that even a few extra gigahertz can't compare with what SSDs did for us...
    I'm still recovering from that speed burst....

    Optical processing will do the same.


    https://phys.org/news/2015-05-breakthrough-heralds-super-efficient-light-based.html


    https://phys.org/news/2015-05-team-big-faster.html#nRlv

    So I must place this here.

    So what happens when we jump decades and then have sudden innovation?
    Do the low points in between bring the average down so much that we call it dead?

    Geez. It's almost like some want a socially believable excuse to NOT release optical processors to public use... or get them popular.
    Military-sector secrets, tsk, tsk? Well, well, who didn't account for the "bitcoin" lust for CPU cycles? There's something curious about money that chases CPU cycles.

  • Moore certainly made a ton of $$$$ at Intel. Still going at 89.
  • Cluso99 Posts: 18,069
    Moore's death is premature!

    While the feature-size shrink is almost over at around 6 nm, there is layer stacking. This has already proved extremely successful in flash parts, with 64 layers now common.

    Even just stacking the RAM (DRAM?) on its own layer will give a huge boost to Moore's Law.

    Because feature shrinkage has been so successful, it has been the fixation of next-gen dies. Now that it is near the end, alternatives need to be found, and stacking is one of them (not die stacking using separate dies, but stacking a full chip layer set by further wafer processing). It may even prove more successful to back off the feature size a bit.
    It has become time to think outside the box of the previous generations.

    So IMHO Moore is not dead, just taking a little breather before the next ride comes!
  • kwinn Posts: 8,697
    Cluso99 wrote: »
    Moore's death is premature!
    [...]
    So IMHO Moore is not dead, just taking a little breather before the next ride comes!

    I hope you're right, but I think the next ride will be a bit rougher. Power dissipation will be a bigger problem for stacking layers on a microprocessor chip than it is for memory chips. A memory chip can have much smaller portions of the chip powered at any one time than a microprocessor can.
  • Heater. Posts: 21,230
    edited 2018-01-30 06:18
    Hmm...let me check the back of my envelope....

    If we apply the progress that we have seen in chip density to stacking, then we would be doubling the number of layers in the stack every 2 years or so.

    After 16 years we would have had 8 doublings: 2, 4, 8, 16, 32, 64, 128, 256.

    Given that we are stacking because we are stuck with the technology we have, we have now squeezed 256 times the thermal power into a tiny chip-sized box. Or about 25,000 watts of power consumption!

    I see a problem here...
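
    To make the envelope arithmetic concrete, here is a minimal sketch in Python (assuming, as the ~25,000 W figure implies, a single-layer chip dissipating roughly 100 W; both numbers are illustrative, not process data):

        # Back-of-envelope: layer count doubles every ~2 years, and each
        # layer (hypothetically) dissipates what a whole chip does today.
        BASE_POWER_W = 100            # assumed single-layer chip power
        YEARS = 16
        DOUBLING_PERIOD_YEARS = 2

        layers = 2 ** (YEARS // DOUBLING_PERIOD_YEARS)  # 2^8 = 256 layers
        power_w = layers * BASE_POWER_W                 # ~25,600 W in one package
        print(f"{layers} layers -> ~{power_w:,} W")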

    Moore's Law has a subtlety that is often overlooked. It talks about the fact that at any particular time, given the technological possibilities of the day, there is an optimum number of transistors you can integrate on a chip for minimal cost. It might be possible to do more, but the cost goes up. Doing less is a waste. It is that minimum in cost per transistor per chip that has been moving up the transistor axis.

    I suspect integrating bazillions of transistors vertically will be exponentially harder to do. With every layer your yield goes down dramatically. So that cost minimum does not move in the direction you want. It's not economically viable, even if it is possible to do.
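
    A toy model of that cost minimum, with purely illustrative numbers (an assumed fixed die cost and a hypothetical per-transistor defect rate; neither is real process data):

        # Moore's original point: for a given process there is a transistor
        # count per chip that minimizes cost per transistor. Too few wastes
        # the die; too many and yield collapses.
        DIE_COST = 10.0        # assumed fixed cost to fabricate one die
        DEFECT_RATE = 1e-9     # assumed chance any one transistor is bad

        def cost_per_transistor(n):
            chip_yield = (1 - DEFECT_RATE) ** n  # chip works only if all n do
            return DIE_COST / (n * chip_yield)   # die cost spread over n devices

        counts = [10 ** k for k in range(6, 12)]
        for n in counts:
            print(f"{n:>15,} transistors: {cost_per_transistor(n):.3e} each")
        print("Cheapest at about", f"{min(counts, key=cost_per_transistor):,}")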

  • kwinn Posts: 8,697
    @Heater

    Exactly, although there may be ways to mitigate those problems to an extent. No doubt some folks will find clever ways to increase density and decrease power requirements but probably not at the rate we have seen so far. IOW the easy advances have been harvested.
  • Cluso99 Posts: 18,069
    IMHO stacking will not be the problem it was once thought to be.

    But, there will be other ways to get performance boosts. As others have said in many journals, engineers have had access to more and more transistors for way too long. It's time they looked at smarter ways to use those transistors!!!
  • Heater. Posts: 21,230
    Cluso99,

    I don't see your first point. Stacking is a problem. See below.

    I do agree with your second point. A smarter way to make use of the available transistors is to get away from the horribly inefficient idea of a CPU with an instruction set.

    As to the first point: let's forget about heat generation and other pesky physics problems for a moment.

    I get the idea that a big issue in making chips economically is yield. You make some thousands of devices on a wafer and it's certain that some percentage of them do not work. If you make those devices bigger there is more chance they do not work. Your yield goes down. As does your profit. There is an economically optimal size for those devices.

    That is what Moore's law was originally about.

    So now, as an example we take Intel. They make chips, in two dimensions, at the optimal size for the technology available at the time. They have been the best in the world at doing that.

    If there is no progress to be made in two dimensions that means stacking. As you say.

    But, for every extra layer in the stack you have increased the probability of device failure. Yield goes down. Profitability goes down.

    It's not economically viable.
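
    A toy model of that yield argument, assuming (purely for illustration) a 90% yield per layer, independent across layers:

        # If each stacked layer independently works with probability
        # PER_LAYER_YIELD, an n-layer stack works only if every layer does,
        # so yield falls geometrically and cost per good chip climbs.
        PER_LAYER_YIELD = 0.9        # assumed, for illustration only
        COST_PER_LAYER = 1.0         # normalized processing cost per layer

        for n in (1, 2, 4, 8, 16):
            stack_yield = PER_LAYER_YIELD ** n
            cost_per_good_chip = n * COST_PER_LAYER / stack_yield
            print(f"{n:2d} layers: yield {stack_yield:6.1%}, "
                  f"relative cost per good chip {cost_per_good_chip:5.1f}")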

  • Tor Posts: 2,010
    Many years ago there was an article in Scientific American about using an FPGA as a dynamically changing chameleon CPU that would constantly adjust its own instruction set for the target application it was running. I don't know how long it takes to reprogram an FPGA, but I have a feeling they're a bit too slow for what was proposed. But this was a long time ago - I had barely heard about FPGAs at the time. Maybe they meant to design an FPGA optimized for fast reprogramming.
  • Cluso99 Posts: 18,069
    Heater,
    I am not saying stacking is the only method that will result in the basic premise of Moore's Law being achieved.
    IMHO the real premise of Moore's Law is the doubling of processing performance on chip. This is certainly not the original premise, but it has been tweaked over time.

    What I believe is way overdue is a radical rethink of the computer design.

    If you notice, I suggested going back a few generations to something like 22-40 nm. The first stacking would be to put a whole layer of RAM (DRAM or SRAM?).

    If it were me, I would get rid of all those caching levels :) and just put the main external DRAM inside. I think at least 8 GB would currently be easily achievable, maybe more.

    The next thing I would do is make the external memory interface SSD & DRAM; the HDD interface would be optional for really big storage, plus of course fast off-chip link(s).

    As for on-chip power dissipation, have you noticed that current chips dissipate much less power than the older chips of generations ago? That progress is also being retro-applied to older-generation feature sizes. That's why we are seeing micro-power ARM, AVR etc. coming out. They are just tweaking older lines with newer discoveries. That is also why the P2 is using a newer 160 nm OnSemi process. The main point is there is less leakage per transistor - IIRC it is a magnitude or more improvement. Add to that power-down features for unused portions, and we get orders-of-magnitude improvements.

    Of course, much of future progress will likely be predicated on new computer-architecture designs. Things like true parallel processing of code sections, where the compiler has worked out which sections of code can be done in parallel and fires up additional cores to perform them when possible and practical (something like the sketch below).
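
    As a rough illustration of that explicit model (independent sections identified ahead of time and handed to spare cores, rather than speculated on), a minimal Python sketch; the section functions are made-up stand-ins:

        # Two independent "sections" of a program run on separate cores,
        # then join - the parallelism is declared up front, nothing speculative.
        from concurrent.futures import ProcessPoolExecutor

        def section_a(n):
            return sum(i * i for i in range(n))      # independent work A

        def section_b(n):
            return sum(i * i * i for i in range(n))  # independent work B

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                fa = pool.submit(section_a, 1_000_000)  # one core
                fb = pool.submit(section_b, 1_000_000)  # another core
                print(fa.result() + fb.result())        # join at the end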

    IMHO, the CPU performing code in parallel just in case, and then discarding the unused pieces, is a total waste of silicon and power. It was just the easiest route to follow.

    Anyway, I don't see the end of the basic premise of increasing performance for a while yet, although it's possible there may be a little bump while more engineering thought goes into the process. IMHO they need to pause and rethink a little, and instead of racing ahead, blindly throwing more transistors into the mix, use those transistors for better rewards.

  • Heater. Posts: 21,230
    Cluso99,

    I do agree. The road block we have in physical progress here will have to be tackled by a rethink of computer design. Hardware and software.

    There is an awful lot of cruft in computer design we don't need. And moving whatever processing closer to the memory sounds like a good idea.

    I'm not so confident about the idea of "compiler has worked out which sections of code can be done in parallel, and fired additional cores to perform these thing in parallel when possible and practical."

    People have been working on such "smart compiler" ideas for decades. The failed Intel Itanium depended on it. So far there is no sign of progress.

    Meanwhile, this might be the first time ever that we see performance decrease, as a result of the slowdowns introduced by the Spectre/Meltdown mitigations.






