Crazy idea...
MarkS
Posts: 342
I was thinking about how current processors are limited to 64 bits. This affects everything from the amount of memory that can be accessed to the size of computations that can be performed. Naturally, it can be expected that we'll see 128-bit processors within the next decade, and possibly even 256-bit processors.
However, I was thinking about how numbers are stored and operated on in memory. I came to the sudden realization that if you extract, say, any single byte out of a 32-bit integer, modify it and re-insert it, you change the value of the entire integer. Of course this is not exactly profound, and anyone with the slightest grasp of computer programming and hardware will know it, but it got me thinking about what would happen if memory were broken up into "chunks", with each chunk operated on by a single 32-bit processor. Imagine five 32-bit processors. Processor 1 is a host processor that relays information to and from four 32-bit slave processors. The four slave processors operate on 32-bit chunks of memory, spaced 128 bits apart in the memory map, so processor 2 operates on memory locations 0 - 31, 128 - 159 and so on, while processor 3 operates on locations 32 - 63, 160 - 191 and so on, through logic that remaps the memory addresses requested by the processors.
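To make the remapping concrete, here's a rough sketch in C of the address logic I have in mind (all the names are made up, and I'm using byte addresses with 4-byte chunks rather than the bit numbering above):

#include <stdint.h>

#define NUM_SLAVES   4
#define CHUNK_BYTES  4   /* one 32-bit chunk per slave */

/* Decide which slave owns a given global byte address, and where
   that byte lands in the slave's private address space. Memory is
   striped across the slaves in 4-byte chunks, 16 bytes per stripe. */
static void remap(uint32_t global_addr, uint32_t *slave, uint32_t *local_addr)
{
    uint32_t stripe = global_addr / (NUM_SLAVES * CHUNK_BYTES);
    *slave      = (global_addr / CHUNK_BYTES) % NUM_SLAVES;
    *local_addr = stripe * CHUNK_BYTES + (global_addr % CHUNK_BYTES);
}

So byte 0 goes to slave 0, byte 4 to slave 1, and byte 16 wraps back around to slave 0 at its local address 4, which is exactly the striping described above.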
The ultimate effect of this would be a computer capable of doing immense calculations by breaking them up into small 32-bit chunks. Essentially, you'd have a 128-bit computer consisting of only five 32-bit processors. This should work, since any number stored in memory can be thought of as a contiguous block of bits, bytes, words, longs, etc. Calculations are only limited by the width of the processor's ALU, which is usually the same as its address or data bus. Whether one processor performs a calculation on a single 32-bit value, or that value is broken up among four 8-bit processors, the end result will be the same so long as the processors put the processed values back into the correct memory locations.
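For instance, a 128-bit addition really does decompose into four 32-bit additions; the only thing that has to pass between chunks is a 1-bit carry. A sketch in C (the struct layout is just one illustrative choice, not how the hardware would have to do it):

#include <stdint.h>

/* A 128-bit value as four 32-bit limbs, least significant first --
   the "contiguous block of words" idea from above. */
typedef struct { uint32_t limb[4]; } u128;

/* Each 32-bit add is what one slave's ALU would do on its own chunk;
   the carry out of each chunk feeds the next one up. */
static u128 add128(u128 a, u128 b)
{
    u128 r;
    uint32_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t sum = (uint64_t)a.limb[i] + b.limb[i] + carry;
        r.limb[i] = (uint32_t)sum;
        carry = (uint32_t)(sum >> 32);
    }
    return r;
}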
I can imagine the actual physical implementation of this would not be quite as simple as the explanation, but would it work? If it would, we could possibly have a computer that could do massive calculations that would seem beyond today's technology, with today's technology. This idea would be scalable to infinity (actually to physical space and power limits) and memory limitations would all but disappear.
Comments
(Not the same problem with ADD, SUB and similar, though)
For the rest, feel free to Google for 'Bit-slice processor'
On the scale of massive parallelism, NASA and Thinking Machines Corp. each made computers consisting of tens of thousands of bit-serial processing elements. In NASA's machine, the processing elements were connected in a 2D array; in Thinking Machines' Connection Machine, the elements were connected in a 2D array on each chip, and the chips were connected by a 12-dimensional hypercube network. Both of these use a single instruction stream, which reduces the amount of hardware needed for each processing element and simplifies communication, but limits the kinds of parallelism that can be exploited (pretty much just data parallelism).
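As a toy model of that single-instruction-stream idea, in C (the element count and names are invented for illustration): every processing element executes the same broadcast instruction, each on its own local data.

#include <stdint.h>

#define NUM_PE 8   /* stand-in for the tens of thousands of elements */

/* One broadcast instruction -- "add 1 to your local word" -- executed
   by every processing element on its own memory: pure data parallelism
   under a single instruction stream. */
static void simd_step(uint32_t local_mem[NUM_PE])
{
    for (int pe = 0; pe < NUM_PE; pe++)
        local_mem[pe] += 1;   /* same op, different data per element */
}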
I once made a simple SIMD computer out of PLDs. With an FPGA, you could probably make a pretty interesting one.
-phar