Hardware always has to work
potatohead
https://media.ccc.de/v/32c3-7171-when_hardware_must_just_work#video&t=1129
This is a great CCC talk on Intel x86 CPU development issues. For me, watching this put some of what we are attempting on the P2 into some new perspective. (Chip is a frickin' genius, and we all know that, but still.)
Comments
Of course those chips probably worked to spec. Only problem was the specs were wrong.
Itanium did work, but it required a "god mode" capable compiler to make effective use of the chips. SGI got really good at this when optimizing the daylights out of MIPS to get performance in the '90s. A ~400 MHz R10K / R12K chip with a nice, big cache would match and sometimes outperform x86 at more than 2x the clock. It seems all that intense work making large NUMA systems perform well can favor devices like Itanium, which SGI moved to when MIPS stopped improving.
Sort of like that i860. If you've got the right compiler, good things can happen. Mere mortals see a lot less joy.
But yes! Intel has taken a few hits and has come out of them fine. They are an aggressive organization filled with really smart people, and it's all funded well enough to allow for contingencies. The ISA lock-in helps too.
When the 286 was new, my colleague discovered a great bug in it. If you multiplied by a negative immediate value, and you were in protected mode I believe, you got a totally wrong result.
On contacting Intel we eventually got, under NDA, a fat document describing all the bugs in the 286. Our multiply problem was in there. There were dozens more, mostly to do with the protected-mode features.
That was OK though. All the world was still using MS-DOS which did not use protected mode.
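For anyone wondering what "multiply by a negative immediate" looks like in practice, here is a minimal C sketch. The assumption (mine, not from the original post) is that a 16-bit compiler would emit the three-operand IMUL-with-immediate form introduced in that era, which is the instruction shape the bug description points at:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Multiply by a negative immediate constant. A 16-bit x86 compiler
           would typically emit this as IMUL r16, r/m16, imm16 -- the form
           described above. Hypothetical reconstruction, not the actual
           failing code. */
        int16_t a = 1000;
        int16_t r = (int16_t)(a * -3);   /* expected result: -3000 */

        printf("%d\n", r);
        return 0;
    }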
Wasn't there a problem with an Intel co-processor (or was it the math co-processor in the 486?) that gave back wrong results in specific cases?
https://en.wikipedia.org/wiki/Pentium_FDIV_bug
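For the curious, a rough C sketch of the classic check people used to spot a flawed Pentium. The 4195835 / 3145727 pair is the widely cited trigger case; the residue threshold here is just an illustration, not a definitive detector:

    #include <stdio.h>

    int main(void)
    {
        /* Widely cited trigger case for the Pentium FDIV bug:
           4195835 / 3145727. On a correct FPU the residue below is
           essentially zero; on a flawed Pentium it came out around 256. */
        double x = 4195835.0;
        double y = 3145727.0;
        double residue = x - (x / y) * y;

        if (residue > 1.0 || residue < -1.0)
            printf("FDIV bug suspected (residue = %f)\n", residue);
        else
            printf("division looks correct (residue = %f)\n", residue);
        return 0;
    }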
http://arstechnica.com/gadgets/2016/01/intel-skylake-bug-causes-pcs-to-freeze-during-complex-workloads/
Their argument is that chips are very complex and almost impossible to verify. For example, simulating the whole chip runs at only about one instruction per second.
Nature tells us that the way around this is to make bazillions of approximately the same thing. The ones that work, work; the ones that don't die off.
We will soon have 10 billion people on this planet. Many of them will not be stupid. Open-source the whole specification and let them at it. Soon we will have a bunch of implementations, some of which work and some of which have bugs. They can be used to check each other at full speed.
How cool would that be?
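A toy sketch of that cross-checking idea in C. Here impl_a and impl_b are hypothetical stand-ins for two independent implementations built from the same open specification, and the loop simply compares them on random inputs:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins for two independent implementations of the
       same operation (in the scenario above: two chips or simulators built
       from the same open specification). */
    static uint32_t impl_a(uint32_t x, uint32_t y) { return x * y; }
    static uint32_t impl_b(uint32_t x, uint32_t y) { return x * y; }

    int main(void)
    {
        srand(1);   /* fixed seed so runs are repeatable */
        for (long i = 0; i < 1000000; i++) {
            uint32_t x = (uint32_t)rand();
            uint32_t y = (uint32_t)rand();
            if (impl_a(x, y) != impl_b(x, y)) {
                printf("mismatch on %u * %u\n", x, y);
                return 1;
            }
        }
        printf("implementations agree on all sampled inputs\n");
        return 0;
    }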