
probabilistic computing -- Just had to post this for some interesting discussion

potatohead Posts: 10,261
edited 2009-02-08 20:16 in General Discussion
I've often referred to the x86 series of chips as hot-running, buggy lotto machines. The engineering path taken on that series has basically been to pile kludge on kludge, iterating until the clock speed hits its limits: wash, rinse, repeat!

These days, the upper limits on clock speed have been reached. Well, for now. I'm sure those barriers will fall. And it's not just Intel either; others do this too. They're just a great example! Don't want to slam anybody.

They moved from fetching and executing one instruction at a time, to several in flight at once, to multiple cores. Along the way, an increasingly complex system of caching, pre-fetching, pre-executing, and other tricks has evolved, like a giant sieve: pour as much as you can into the top, ramp the clock, cool it off, and apply pressure to squirt instructions out the other end!

Check this out!

http://www.chron.com/disp/story.mpl/metropolitan/6252697.html

The core idea is to trade accuracy for speed! It's the polar opposite of the Propeller, and I thought it would be interesting to hear others' takes on this idea! One analogy from news would be to "embrace the bias!" Just go whole hog and provide highly entertaining news so that the maximum number of people end up informed; by contrast, eliminating the bias so that objective, pure news reaches people is supposed to accomplish the same thing. With CPUs, then, this means embracing and managing the bugs, to a similar effect.

(this isn't a political thread, just an analogy please!!)
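To make the tradeoff concrete, here's a toy model in C (my own sketch, nothing from the article): each result bit of an 8-bit add gets flipped with probability p_err, a stand-in for the errors you'd accept when dropping supply voltage to save power. All names and numbers here are illustrative only.

/* Toy model of probabilistic arithmetic.
 * Each bit of an 8-bit add flips with probability p_err, standing in
 * for running the adder at a lower, error-prone supply voltage. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TRIALS 100000

/* Add two bytes exactly, then flip each result bit with probability p_err. */
static unsigned char noisy_add(unsigned char a, unsigned char b, double p_err)
{
    unsigned char sum = (unsigned char)(a + b);
    for (int bit = 0; bit < 8; bit++)
        if ((double)rand() / RAND_MAX < p_err)
            sum ^= (unsigned char)(1u << bit);   /* inject a bit error */
    return sum;
}

int main(void)
{
    srand((unsigned)time(NULL));
    /* Sweep the error rate: higher p_err ~ lower voltage ~ less power. */
    for (int step = 0; step <= 5; step++) {
        double p = step * 0.01;
        double total = 0.0;
        for (int i = 0; i < TRIALS; i++) {
            unsigned char a = (unsigned char)(rand() & 0xFF);
            unsigned char b = (unsigned char)(rand() & 0xFF);
            int diff = (int)noisy_add(a, b, p) - (int)((unsigned char)(a + b));
            total += diff < 0 ? -diff : diff;
        }
        printf("p_err=%.2f  mean |error| = %.2f\n", p, total / TRIALS);
    }
    return 0;
}

Run it and you see the trade laid bare: every notch of p_err you accept (power saved) costs you some average error in the result.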

I am particularly interested in your thoughts about this -vs- the deterministic and simple behavior of the Propeller. The two extremes are interesting to me, just for their value in advancing computing. On one hand, keeping silicon simple means power savings and speed, where the cost is having to spend more grey matter time factoring problems to fit. I was going to say, "on the other hand" and follow that with some comments about buggy silicon, when it hit me!

These are two approaches to the same thing! If grey matter time is spent learning to compute in this probability-based environment, the yields are essentially the same; namely, more getting done on less overall power. And that's always a key interest of mine, since I consider mainstream CPUs to be overkill for a lot of stuff, with power being the primary reason why.
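And here's the "grey matter" half of the bargain, sketched as code: run the noisy computation several times and majority-vote each result bit. This assumes the noisy_add() from the sketch above; the repeat count is a knob you tune against your error budget.

/* Recover accuracy from noisy hardware in software: repeat the noisy
 * add several times and take a per-bit majority vote of the results. */
static unsigned char voted_add(unsigned char a, unsigned char b,
                               double p_err, int runs)
{
    int ones[8] = {0};
    for (int r = 0; r < runs; r++) {
        unsigned char s = noisy_add(a, b, p_err);
        for (int bit = 0; bit < 8; bit++)
            if (s & (1u << bit))
                ones[bit]++;
    }
    unsigned char result = 0;
    for (int bit = 0; bit < 8; bit++)
        if (2 * ones[bit] > runs)            /* majority of runs saw a 1 */
            result |= (unsigned char)(1u << bit);
    return result;
}

At p_err = 0.05, a single noisy_add() gets at least one bit wrong about a third of the time, while voted_add(a, b, 0.05, 9) is almost always exact. Each vote is cheap on the low-power hardware, so you can still come out ahead overall.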

Thoughts, discussion, flames, what?

(It's a Sunday, why not engage in a bit of tech chatter?)

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Propeller Wiki: Share the coolness!
Chat in real time with other Propellerheads on IRC #propeller @ freenode.net
Safety Tip: Life is as good as YOU think it is!