Having multiple processors has one great advantage over interrupts: ease of development.
On a single-CPU-plus-interrupts system, whenever I want to add a new hardware device driver to my project, I have to integrate its interrupt handling with whatever I've got running already. I have to figure out how to hook into some new priority level or chain onto a level that is already in use. I have to figure out what priority every new part should run at and whether it will disturb something else. If I already have something like a video driver that needs deterministic timing (highest priority, maybe) and then I need another device with stringent deterministic requirements, I'm out of luck. It just can't be done. Not to mention having to figure out how to set up interrupt controller hardware with masks, modes, etc.
Compare with the Prop, where I can just grab a nice object from the Obex, or create a new one, throw it at a COG, and I'm done!
Of course, that is (partly) why real-time operating systems were developed. But that is another boatload of complexity to be learned - and there have been many of them over the years.
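For a sense of what "throw it at a COG" looks like in practice, here is a minimal Spin sketch - FullDuplexSerial is a stock Obex object, while the pin numbers and the blinker are placeholder choices, not anything from this thread:

CON
  _clkmode = xtal1 + pll16x          ' 80 MHz from a 5 MHz crystal
  _xinfreq = 5_000_000

OBJ
  serial : "FullDuplexSerial"        ' Obex driver: claims its own cog in start()

VAR
  long stack[16]

PUB main
  serial.start(31, 30, 0, 115_200)   ' rx pin, tx pin, mode, baud -> one cog
  cognew(blink(16), @stack)          ' my own little "driver" -> another cog
  repeat                             ' this cog stays free for application code

PRI blink(pin)
  dira[pin]~~                        ' make the pin an output
  repeat
    !outa[pin]                       ' toggle it
    waitcnt(clkfreq / 2 + cnt)       ' every half second

No vectors, no priorities, no masks: each driver simply costs one cog.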
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
For me, the past is not over yet.
I beg to differ about the reason why interrupts were invented.
They existed on the first 8008 and 6800 microprocessors (circa 1972 and 1974 respectively). There was no such thing (on micros) as multi-tasking, or even an operating system. The interrupts were used to interface to peripheral chips. Remember, back then there were no internal peripherals. The external peripherals were UARTs, which predated the micros (from about 1972), and parallel peripheral chips from both Motorola and Intel (e.g. the Motorola PIA). The interrupts were used to signal that a chip required service. Zilog later (with the Z80) used vectors for interrupts.
Later, boards full of chips were made to interface to the micros to do video displays using composite video (monochrome, initially 32x16 and 40x16 characters).
The Prop doesn't need interrupts; adding them would take away its inherent design objective of not requiring them, thanks to its 8 independent cogs. If you want interrupts, use another chip, with the complexity that brings. The Prop is a simple chip designed for hobbyists that just happens to also have great professional uses (although vastly under-recognised!).
Now try doing VGA on other single chips without dedicated VGA hardware on-chip.
Cluso99>> I beg to differ about the reason why interrupts were invented.
Ok
The other reason interrupts "are used" is very rarely mentioned here ...
It's been a while since I posted on this forum, and this thread really tickles me.
Comparing the Prop to other microcontrollers: the Prop's small memory size.
When it comes to coding, 32K is adequate for a lot of embedded applications; however, when you have an application that requires embedded bitmaps, wave files, etc., then 32K is an issue.
Now some of you will respond and say to use an object that allows interfacing to an SD card, etc., but this presents issues with additional parts, not to mention that there then needs to be a method to load the data onto the card.
As for interrupts, well, if the Prop had hardware support for serial interfaces, SPI, I2C, etc., then say no more.
Cheers.
Further reading:
en.wikipedia.org/wiki/Interrupt
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Adequatio Rei Et Intellectus
Praxis said...
... if the prop had hardware support for serial interfaces, SPI, I2C etc then say no more.
The Propeller does have hardware support for these interfaces, and a lot more. The hardware is a set of configurable peripherals that can be microcoded to emulate a whole host of interfaces and other functions. You see, when I replace "cogs" with "configurable peripherals" and "programmed" with "microcoded", "hardware support" doesn't sound so far-fetched, does it?
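A sketch of that "microcoding" idea (the pin and phase times are arbitrary): load the loop below into a cog and, to the rest of the chip, that cog simply is a PWM peripheral.

VAR
  long stack[16]

PUB start
  cognew(pwm(16, 8_000, 72_000), @stack)  ' "configure" it: 10% duty at 1 kHz (80 MHz clock)

PRI pwm(pin, hi, lo) | t
  dira[pin]~~
  t := cnt
  repeat                     ' runs forever, untouched by the other cogs
    outa[pin]~~
    waitcnt(t += hi)         ' high phase, in clock ticks
    outa[pin]~
    waitcnt(t += lo)         ' low phase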
And that's the sticking point in most comparisons. Determinism gets missed in that code-as-hardware discussion. It's a very significant differentiator for the Propeller. Once an interface is up and running on a COG, it IS hardware to the other ones.
Edit: In fact, I think discussing the differences would be very enlightening!
Anyone game?
How is either adding hardware, or custom hardware that resides on chip, different from a COG running dedicated code for that task?
Okay, but I'm not sure I can contribute anything more "enlightening" than this: From a behavioral (i.e. black-box or I/O) point of view, they're identical. From a programming standpoint, considering assigned hub memory as the equivalent of peripheral communication registers, they're identical. Heck, the "peripherals" even support DMA. What else matters? To ascribe significance to other structural departures one from the other would be to create a "distinction without a difference".
Um, how's that? Now it's time for me to hit the sack. I'll see where this "stick poked in the wasp's nest" went come morning.
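Phil's register analogy fits in a few lines of Spin. In this sketch (the names and the stand-in "operation" are invented for illustration), two hub longs play the part of a command register and a result register:

VAR
  long cmd, result      ' hub longs standing in for peripheral registers
  long stack[16]

PUB start
  cognew(device, @stack)

PUB request(x)
  cmd := x              ' write the "command register"
  repeat while cmd      ' wait for the "peripheral" to signal completion
  return result         ' read the "result register"

PRI device
  repeat
    repeat until cmd    ' idle until a command arrives
    result := cmd << 1  ' stand-in for the real operation
    cmd~                ' clear the command: done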
I'm sure interrupts have been around since long before the 8080, etc. Why do so many have a problem with not having them?
To me, a background task and an interrupt handler on one CPU is logically equivalent to the task and the handler running on two processors (COGs) with shared RAM. Physically things are different: a lot more silicon, more performance for the background thread, potentially deterministic timing, potentially lower latency for the interrupt handler. All sounds good.
That is up to the point where you run out of COGs and start to think, "Damn, if I could only hook this new code into an interrupt handling chain and share a COG between tasks." But guess what: by that stage you are probably pushing a little microcontroller too far anyway and heading for problems.
To me, having a bunch of processors that I can program to behave like peripheral devices is logically equivalent to having one processor surrounded by hardware peripheral devices. Again, physically things are different: in the latter case, for single-chip solutions, I have to be careful to select the chip with the devices I want, and changes in hardware requirements will be harder as development goes on. All sounds good for the former.
That is up until the point where I find the programmable software solution in a COG is just not fast enough, e.g. for USB...
Conclusion: old-style interrupts plus hardware devices are logically equivalent to a pile of programmable COGs with shared RAM. Practically, there may be limitations in the speed of COGs or the number of possible devices (COGs) which may make one long for the traditional approach. But that is a limitation of the available technology and implementation, rather than of the concept.
In the early 70s Marconi Radar Ltd built its own minicomputer, the Locus 16, for use in radar systems. It was almost a micro, as it used bit-slice chips. Anyway, it had interrupts and peripherals, but it also had LOCKS (which did not turn up on micros till the 8086, I think). Why? Because it was intended to be used as a multi-processor system: a general-purpose CPU card, a display processor, shared RAM, etc., albeit in a rack full of cards rather than on a single chip. Looking at the top-level architecture of a Locus 16 system you would see something rather like a Prop. This was not uncommon in those times.
See here for a nice Locus 16 system diagram: www.radartutorial.eu/19.kartei/karte112.en.html
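The Prop carries that same primitive forward: eight hardware locks arbitrate shared hub data between COGs. A minimal sketch, with a hypothetical shared structure:

VAR
  long shared[4]        ' data touched by more than one cog
  byte lockid

PUB init
  lockid := locknew     ' claim one of the eight hardware locks (-1 if none free)

PUB update(a, b)
  repeat while lockset(lockid)  ' spin until we own the lock
  shared[0] := a                ' these two writes are now atomic
  shared[1] := b                ' with respect to other lock users
  lockclr(lockid)               ' release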
This software-driven approach is an ever-growing trend as silicon gets smaller and cheaper. See the latest graphics cards, or the XMOS chip. The Propeller just happens to be a small example of this.
Why do my posts always get so looooong... ?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
For me, the past is not over yet.
When I worked for English-Electric-Leo-Marconi Computers in the 1960s, we manufactured a mainframe computer with an optional seven levels of interrupt.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
In the '70s I worked on an ICL (formerly Singer, and before that Friden) minicomputer. It survived into the '90s. It had no interrupts as such: I/O (terminal, disk, and tape) was just reads and writes that waited for a response. It had hardware multi-tasking (like 20 cogs with their own cog memory) and hub memory. There were software locks in the operating system. In many respects it reminds me of the Prop.
@Phil: I agree, the Prop has multipurpose cogs to implement peripheral hardware.
Interrupts just complicate the software in my opinion.
I think part of the reason the Prop shines is that you don't have to spend half the time getting your header files straight and in the proper sequence, followed by getting the right include paths set and other annoying details.
The sad thing is that for low-to-mid complexity projects a PIC24 can do a lot of what the Prop can. MIPS is not a fair comparison, in a way: when you can offload CRC, USB, I2C, and other peripherals, that is really the same as having another cog, it seems to me. Plus the ability to program all these details in a high-level language is an advantage.
One thing that is clear to me, though, is that most of the projects I have done vastly underutilize the Prop's capabilities.
@Leon: Most coding was done in assembler - 16 x 60-bit RISC instructions, dual instruction, all decimal (including memory), no registers (except 3 index registers). E.g., MULT of 10 decimal digits by 10 decimal digits gives a 20-digit decimal result. COBOL was possible but rarely used.
It was SNOBOL, the string-processing language, not COBOL. I was told that the SNOBOL pre-processor was needed because of the complexity of the 60-bit instructions, to make things easier for the programmer.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Maybe someone should write an "interrupt" object to put in the Obex to satisfy newbies. It might not be appreciated, as it is "not as expected", but it would at least show how to do it. We were all newbies once who didn't exactly get it.
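Such an object would only take a few lines. A sketch of the idea (the pin and the counter "handler" are placeholders): one cog blocks on a pin and plays the part of the ISR.

VAR
  long fired            ' bumped once per "interrupt"
  long stack[16]

PUB start(pin)
  cognew(watcher(pin), @stack)

PUB count
  return fired

PRI watcher(pin)
  repeat
    waitpeq(|< pin, |< pin, 0)  ' sleep at reduced power until the pin goes high
    fired++                     ' the "interrupt service routine"
    waitpne(|< pin, |< pin, 0)  ' re-arm: wait for the pin to drop again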
@Phil, I understand what you mean by DMA in the Prop sense, but DMA has generally been a hardware assistant. High-performance peripherals are just not possible without hardware DMA. A more fitting term would be soft-DMA, since it is a software thing on the Prop.
@Cluso99: I wrote software for Singer Link Flight Simulation in the '80s - many fond memories. In school in the '70s I wrote programs on punch cards; the advance to paper tape was very welcome :) The early "personal computers" - C=, etc., with cassette tape and floppy drives - were just wonderful.
@Leon: I think we are referring to different machines. The 60-bit instruction was 10 x 6-bit ASCII characters, and most of us engineers and programmers could write object code from our heads, as the instruction set was so regular.
@Jazzed: A friend used to repair the Qantas Link simulators in the late '70s. I hardly used punched cards or paper tape - went straight to (hard) disk in 1974.
Anyway, back to the interrupt topic (and DMA). Until you have experienced the Prop, you do not realise how simple it is without interrupts.
See my thread "What are your cogs doing now?". There doesn't seem to be anyone requiring interrupts there yet.
@Phil, that's where I was going. I really can't see any significant difference.
@soshimo, isn't that just latched instead of deterministic? In that model, the single CPU has a speed that exceeds what's necessary to address the events in a timely manner. It's gonna spend some time idle. It has to; otherwise things break down. Time assurances come in the form of minimum latency. That's true of the Propeller too, of course.
However, on a non-multiprocessor system it's not really possible to write a series of instructions (in general) and count on their execution time being constant. Something's gonna happen. It's going to happen because the interrupts are there, and there is essentially one CPU. It's also highly likely to have caching and such, further impacting the time to execute.
A kernel is needed to manage everything. It is necessary because the system isn't deterministic. Where there is trouble, you add clock speed and power, or some dedicated bit of silicon (and power comes with that too), and move on.
Multi-core systems help with this, but still, it's essentially one CPU, and those cores conflict over memory/bus access. The dynamics are the same.
To me, this all boils down to that kernel being there, and systems-level programming often being necessary to write it, manage somebody else's kernel, or work within its limits, if that kind of programming is to be avoided. One CPU, balancing many tasks, is why we have operating systems, kernels, and such. Have to.
On the Propeller, that kernel isn't necessary for a very large number of tasks. Scale and complexity can require one, and if that's the case, it can be written, and that's that. Factoring that extreme out, however, leaves us with a machine that simply does not have ONE CPU. It is, in fact, a multi-processor, and there is no bus/memory contention! Code complexity will absolutely go down. To a Propeller user, those kernel chores are overhead, where attention is required to manage the solution itself rather than address the problem at hand.
I think the significance of this goes underrated a very high percentage of the time in these discussions.
The trade-off (and there always is one) is often having to factor your compute problem differently. IMHO, it is this re-factoring that is foreign to people, and thus the source of contention, not core functionality differences. There is a case that this is a wash, given the task of also having to select and learn about add-on silicon solutions. That and cost are kind of expensive compared to the measure of grey-matter time one would normally spend on a Propeller to accomplish the same thing.
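To put the determinism point above in code: waitcnt realigns a cog to an absolute clock value on every pass, so the output edges land exactly on schedule regardless of loop overhead - a guarantee an interrupt-laden single CPU can't make. A minimal sketch (pin and period are arbitrary):

VAR
  long stack[16]

PUB start
  cognew(squarewave(16, 40_000), @stack)  ' 1 kHz square wave at 80 MHz

PRI squarewave(pin, period) | t
  dira[pin]~~
  t := cnt
  repeat
    waitcnt(t += period)   ' edges land on exact cnt values, jitter-free
    !outa[pin]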
I know it's not a uController, but the new Intel Core i7 processor still cannot hold a candle to the Prop I in the respect that it is still basically one CPU whose four cores appear to software as eight, where the Prop is ACTUALLY 8 cores bundled by the HUB.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Quicker answers in the #propeller chat channel on freenode.net. Don't know squat about IRC? Download Pidgin! So easy a caveman could do it...
http://folding.stanford.edu/ - Donating some CPU/GPU downtime just might lead to a cure for cancer! My team stats.
Phil Pilgrim (PhiPi) said...
Okay, but I'm not sure I can contribute anything more "enlightening" than this: From a behavioral (i.e. black-box or I/O) point of view, they're identical. From a programming standpoint, considering assigned hub memory as the equivalent of peripheral communication registers, they're identical. Heck, the "peripherals" even support DMA. What else matters? To ascribe significance to other structural departures one from the other would be to create a "distinction without a difference".
Um, how's that? Now it's time for me to hit the sack. I'll see where this "stick poked in the wasp's nest" went come morning.
-Phil
From a hardware standpoint, peripherals can often be built with static logic and can therefore operate at low power, independent of the CPU clock. That is combined with wake-up-on-change logic that is more or less linked with the interrupt system. It might be a single pin or keypad, or a static counter or an SPI input, but it can operate at essentially zero power. The Prop can throttle back to ~20 kHz, and quickly come up to ~12 MHz and then to full crystal operation on the PLL, but there is latency. I guess what I am trying to say is that from a programming standpoint the hub memory and microcode may be equivalent, but other system considerations will still make it worthwhile to consider a dedicated peripheral.
nutson said...
I fear we will see more Prop-vs-xxx comparisons appearing in this forum this year. The vast number of transistors that can be put on a silicon chip is driving many chipmakers toward multi-core designs. Yesterday Creative unveiled its Zii chip, with 2 ARM processors, 24 stream processors, and a lot of peripherals; look here: http://creativezii.com/2009/01/creative-zii-platform-unveiled/#more-30 . This 24-core chip, clearly aimed at the multimedia OEM market, seems to be a SIMD design that can process 3 instruction streams with 8 data streams each, delivering 10 GFLOPS total, suggesting a 400 MHz core speed. Impressive is the library of multimedia processing functions; virtually every thinkable picture, sound, and video processing function is available. Once Nvidia and ATI enter this market we can expect some fireworks.
For the DIY, hobby, and enthusiast market all these numbers do not mean much; here the ease of programming, debugging, and building systems with one's choice of peripherals interfaced is more important, and Parallax has done a very good job with the Propeller. I hope the Prop II will expand on that. Software-controlled image processing??
Nutson
There, your post is in the right thread. Now delete your two other posts by clicking the red X in the upper right hand corner of the bottom post, then the red X in the top post, but you must do this before anyone else replies to the (no subject) thread or you won't be able to delete the top post.
You're right: microprogrammed hardware is typically somewhat slower and/or less energy-efficient than hardware that is hard-wired. There's always a tradeoff between speed/power consumption and flexibility. But cogs can also idle at low power while waiting for a change. Consider the WAITPEQ instruction: while it's waiting for the right input at full clock speed, the cog is consuming about a tenth of its normal power. Throttled back to 20 kHz, that can be as low as 3 uA, albeit with higher "resume" latency, as you point out.
I'm not sure what you mean by "static logic" in this context. To me "static" implies that the clock can be turned off, with the hardware state maintained until clocking is resumed. Many micros allow independent clock control of individual peripherals to save power. But for the peripherals to operate after wakeup, there still has to be a clock. Peripherals operating completely independently of the CPU clock would have to use "asynchronous logic". I'm not aware of any micros (yet) that have taken this step. Such logic is much more difficult to design, although the savings in power consumption can be substantial. But perhaps I misunderstood your comment.
I see where you're coming from, nonetheless, since a lot of what you do is battery-operated and can sit for months on end, waking up periodically to collect data unattended. For such apps, the Prop may be power overkill, both in compute and current-consumptive terms, with the BS2pe and TI's MSP430 still reigning supreme.
I never said the 8088 was the first interrupt-driven chip - I submitted it as an example of a well-used and well-known chip that used interrupts. And what someone else pointed out is correct: interrupts are designed specifically to interface the CPU with hardware. They eliminate the need for the CPU to waste duty cycles polling. Publish/subscribe mechanisms have been around for a very long time and are still in wide use today, from hardware to software solutions. To discount them is to say that whole sectors of computing have it all wrong, just because an 8-core chip can emulate hardware at the cost of over 10% of its resources per peripheral. You are still limited to 8 peripherals (unless I am sorely misunderstanding the cog concept of one task per cog), whereas in an interrupt-driven system there is essentially no limit to the number of peripheral devices you can talk to.
@potatohead - you are correct: without hardware determinacy you must rely on software synchronization, which effectively emulates the hardware determinacy. This requires a kernel and support software. In the case of the 8088 and the XT architecture this was partially in BIOS (the bootstrap) and the rest was loaded off disk (floppy and later, if you were rich, hard drive). Without the bootstrap it was basically a really expensive 4.77 MHz oscillator. I can also see your point on bus contention (although the limited amount of RAM really makes that a moot point, since there isn't much TO contend for).
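One footnote on the eight-peripheral limit: a cog isn't strictly bound to a single task. Slow soft-peripherals can share a cog round-robin, much as one ISR can service several sources. A sketch (the pins and the 10 ms tick are arbitrary placeholders):

CON
  BTN_A = 0
  BTN_B = 1
  LED_A = 16
  LED_B = 17

VAR
  long stack[16]

PUB start
  cognew(multiplex, @stack)

PRI multiplex
  dira[LED_A]~~
  dira[LED_B]~~
  repeat                          ' two "peripherals" served by one cog
    outa[LED_A] := ina[BTN_A]     ' soft-peripheral 1: LED follows button
    outa[LED_B] := ina[BTN_B]     ' soft-peripheral 2: ditto, another pair
    waitcnt(clkfreq / 100 + cnt)  ' ~10 ms service tick for both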
Are you saying that the Prop's total chip current consumption at 20 kHz is 30 uA while one cog is active, and 3 uA during WAITCNT?
I can't find this info anywhere on the datasheet.
Funny thing is that the datasheet always mentions current consumption per cog but never mentions current consumed by the hub.
This is interesting because I am also currently working on another battery powered project.
The wake-up latency is not a problem for me for communications, because the message sender can always send the wake-up signal a few times until the Prop has woken up.
@soshimo: IMHO, contention is a relative thing, in that scale might move the problem into play or out of play, depending on both size and overall complexity. Prop II will scale, highlighting this point much more strongly than we see currently, as size will then play a greater role as a differentiator.
I think referring to a Propeller as "8-core" isn't all that valid. It's really a true, deterministic multi-processor. This is quite different from a CPU with 8 cores.
Why do Spin loops consume more current than assembly loops? Is it due to hub accesses using more current?
William, you're back! I thought you just threw the ball into the scrum and left. I was wondering the same thing about the difference between asm and Spin current consumptions.
Without knowing definitively, I would be reasonably certain that Spin's access to the hub requires the hub logic and hub memory access, and so uses additional power.
To me it would make more sense if it were the other way around. The hub is running all the time, regardless of what the cogs are doing, so that should be a constant current draw. When a cog accesses the hub, it may need to wait its turn, so the current draw should go down during the ensuing idle intervals, resulting in a net current decrease compared to cog-only execution. It makes me wonder if the two lines in the graph got switched.
-Phil
Addendum: I just tested my theory regarding switched graph lines and disproved it. At 80MHz, a Spin cog consumes about 3mA more than an Asm cog. 'Still can't understand why.
BTW, did you know that the Demo Board has a handy jumper that you can remove for doing current measurements? It's labeled IVDD.