I've always used something with plenty of performance, like an ARM or a 16-bit PIC, for anything complex. There are millions of reliable applications using interrupts. RTOSs don't impact system performance if the application is designed properly, and are even used on safety-critical systems.
Except for the fact that they don't work. An interrupt steals time from whatever was running before the interrupt occurred.
Whaaaaaaaaaaaat!
What do you mean they "don't work"? If that were the case your PC couldn't work.
And the second part of the quote is just as bad, "steals time".
I have designed many embedded systems that have several IRQ sources, i.e. multiple timers, UARTs, SPI, etc., all running at the same time, and it works because it was designed to work that way.
IRQs, be afraid, be very afraid.....................
I have no problem with any of that. Spec'ing out the hardware with at least twice the performance and memory that you expect to need has been normal in the industries I have worked in. Interrupts and RTOSes do indeed work in millions of applications, safety-critical or not.
Still, you must admit that in the limit, when pushed to the extreme, the single-CPU-and-interrupts model will fail. The only way out is to throw more silicon at the problem. Normally this is by way of implementing peripherals in hardware: UARTs, USB, Ethernet, etc. Then you get into DMA and so on. That takes the pressure off the CPU and its interrupts.
But what if your events are not coming from standard sources? What if you have your own weird bit of the real world to interface to that does not conform to any standard peripheral hardware? Then you have to throw more silicon at the job by way of an FPGA or ASIC or whatever.
OR you can throw more silicon at the problem in the form of more cores and a bit of software. That is the world of the Prop, XMOS and other vendors nowadays.
If everything were bliss in the CPU + interrupt world there would be no reason for the existence of XMOS and other multi-core devices popping up.
As has already been pointed out, most of the time the advantage of the Prop having multiple cores is wasted on using them to simulate hardware peripherals.
But what if your events are not coming from standard sources? What if you have your own weird bit of the real world to interface to that does not conform to any standard peripheral hardware?
That quote makes no sense to me at all, perhaps you can give an actual example.
IRQs, be afraid, be very very afraid.....................
The whole idea of interrupts is to give the effect of having multiple processors available to rapidly respond to external events as and when they arise. There is some performance gain here in that one does not have to continuously poll, in software, any I/O ports to check for events. There is a reduction in latency in that one can respond to an interrupt very quickly rather than whenever the next polling time is. All of this is good stuff and of course does work in millions of applications, as Leon points out.
However, pushed to the limit the illusion breaks down. If event A and event B happen at the same time and they both need the next few clocks of the CPU to handle them, then the system fails. There just is not time for both. No matter how fast your CPU and interrupt hardware, there will always be such a limit somewhere.
It can also break down with only a single interrupt, for example:
I have a nice six-line assembler loop performing Direct Digital Synthesis (DDS) of a nice sine wave on a little MCU (an AVR, say). Now I want to be able to change the generated frequency by pressing buttons or by command from a serial connection. Oops, can't do it. Any interrupt from buttons or serial will glitch my sine wave output.
As for my PC: I rest my case. It can't handle anything much by way of real-time external events at all.
And the second part of the quote is just as bad, "steals time".
OK. I'll say it again. An interrupt steals time from whatever was running before the interrupt occurred.
If you cannot accept that fundamental fact then I can only assume you have not understood what interrupts are and how they work. The processing power required to handle an interrupt does not come out of thin air. If you only have a single core it has to take time away from whatever task that core was working on. The simple DDS example above shows that.
I have designed many embedded systems that have several IRQ sources, i.e. multiple timers, UARTs, SPI, etc., all running at the same time, and it works because it was designed to work that way.
Yep, me too, as have thousands of others. Luckily our processors and peripheral hardware were/are fast enough to make it all work.
I've also bumped into cases where it cannot be made to work without more processing power. So, either add dedicated peripheral hardware to handle it. Or, hey, why not just use another CPU?
I've implemented a DDS on an AVR that didn't glitch when the frequency was changed with buttons. I used timer interrupts to output the values to the DAC.
Not knowing the frequency range, of course: use a device with multi-channel PWM and load the values using IRQs, or again use IRQs to load the values into a DAC, and still have other IRQs or the main loop to read buttons etc.
Have done something like that with a Freescale 9S12E128 in the past for audio apps.
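To make the interrupt-driven approach Leon and batang describe concrete, here is a minimal C sketch of a timer-driven DDS. It is an illustration only: setup_sample_timer(), write_dac() and button_pressed() are hypothetical helpers standing in for whatever a real part provides, and the sine table contents are omitted.

#include <stdint.h>

/* Hypothetical helpers: stand-ins for whatever the real chip provides */
void    setup_sample_timer(void);   /* start a fixed-rate timer interrupt  */
void    write_dac(uint8_t value);   /* write one sample to the DAC or PWM  */
int     button_pressed(void);       /* poll the frequency-up button        */

#define FREQ_STEP  0x00010000u      /* arbitrary tuning-word increment     */

static const uint8_t sine_table[256] = { 128 /* ...255 more samples... */ };

static volatile uint32_t phase;        /* phase accumulator                */
static volatile uint32_t tuning_word;  /* added every tick, sets frequency */

/* Called by the timer at the sample rate: one add, one lookup, one write. */
void sample_isr(void)
{
    phase += tuning_word;
    write_dac(sine_table[phase >> 24]);   /* top 8 bits index the table    */
}

int main(void)
{
    tuning_word = FREQ_STEP;
    setup_sample_timer();
    for (;;) {
        if (button_pressed())             /* main loop is free for the UI  */
            tuning_word += FREQ_STEP;
    }
}

The trade-off is that the maximum sample rate is now set by the per-sample interrupt overhead, which is the lower maximum frequency heater mentions in the next post.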
Good solutions. Proves my point though. Interrupts do not work:) You had to find some other hardware to do the job. Besides, it must have had a lower maximum output frequency than the simple ASM loop I described.
As has already been pointed out, most of the time the advantage of the Prop having multiple cores is wasted on using them to simulate hardware peripherals.
That may seem to be the case. And I will admit that using a whole 20 MHz 32-bit CPU to implement a UART does seem to be a bit wasteful.
BUT...
What if I'm developing a flexible product or I'm just prototyping many different things? Perhaps sometimes I just need a single UART. OK, I could use any old MCU and get that; why waste a 32-bit processor on a UART, right? But hey, next day I find I want three or four UARTs. Damn, now I have to rebuild the thing with a different MCU that has that, if I can find one. Or maybe I want to use USB instead of a UART. Damn, now I have to rebuild the thing again with another MCU that has USB.
You see where I'm going with this. I can do all kinds of different things, with the same stock of chips, if I have the flexibility to reprogram my hardware. Having multiple cores and implementing peripheral blocks in software gives us that flexibility. We don't have to keep finding new MCUs with just the right combination of stuff for the next job.
That is the advantage of the Prop, and the XMOS devices, and others coming along. There is no waste.
That quote makes no sense to me at all, perhaps you can give an actual example.
Yep. What if I have a serial protocol input to my device that does not conform to any of the predominant standards? Then I can't use any of the nice peripheral blocks on a typical MCU. Historically I would have had to build my own hardware to accept that protocol and deliver bytes to my processor. Perhaps I would use an FPGA or ASIC, or just build it out of discrete logic. Nowadays, if the data rate is not too extreme, I might do it by selecting a multi-core MCU and writing a little code for it.
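As one sketch of what "writing a little code for it" could look like on a spare core, here is a bit-banged receiver for a made-up asynchronous format. read_pin(), now() and wait_until() are hypothetical board-support helpers, and the frame shape (a falling start edge followed by eight LSB-first bits) is just an example, not any real protocol.

#include <stdint.h>

/* Hypothetical helpers for the core this runs on */
int      read_pin(void);            /* sample the input line               */
uint32_t now(void);                 /* free-running tick counter           */
void     wait_until(uint32_t t);    /* spin until the counter reaches t    */

#define BIT_TICKS  1000u            /* assumed bit period in ticks         */

/* Runs flat out on its own core: no interrupts, no effect on other cores. */
uint8_t receive_byte(void)
{
    uint8_t byte = 0;

    while (read_pin() != 0)                   /* line idles high: wait for  */
        ;                                     /* the falling start edge     */

    uint32_t t = now() + BIT_TICKS + BIT_TICKS / 2;   /* centre of bit 0    */

    for (int i = 0; i < 8; i++) {
        wait_until(t);
        byte |= (uint8_t)((read_pin() & 1) << i);     /* LSB first          */
        t += BIT_TICKS;
    }
    return byte;          /* hand it to the main program via shared memory */
}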
I am confused when you say interrupts do not work in the scenarios that Leon and I posted. Why?
And what other hardware are you referring to that we had to add?
FYI the 9S12 I used has the PWM and DAC internal to the device: add a crystal and use it.
You do not describe your DDS hardware implementation, just a vague reference to six lines of assembly code.
What if I have a serial protocol input to my device that does not conform to any of the predominant standards? Then I can't use any of the nice peripheral blocks on a typical MCU. Historically I would have had to build my own hardware to accept that protocol and deliver bytes to my processor. Perhaps I would use an FPGA or ASIC, or just build it out of discrete logic. Nowadays, if the data rate is not too extreme, I might do it by selecting a multi-core MCU and writing a little code for it.
I am confused when you say interrupts do not work in the scenarios that Leon and I posted. Why?
And what other hardware are you referring to that we had to add?
I would refer you to this DDS implementation on an AVR: http://www.myplace.nu/avr/minidds/index.htm
It has a DDS loop like this:
; main loop
;
; r28,r29,r30 is the phase accumulator
; r24,r25,r26 is the adder value determining frequency
;
; add value to accumulator
; load byte from current table in ROM
; output byte to port
; repeat
;
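; (lpm fetches the byte at program address Z = r31:r30 into r0, so r31 holds
;  the high byte of a 256-byte-aligned sine table and r30 indexes into it)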
LOOP1:
add r28,r24 ; 1
adc r29,r25 ; 1
adc r30,r26 ; 1
lpm ; 3
out PORTB,r0 ; 1
rjmp LOOP1 ; 2 => 9 cycles
As you can see it runs flat out; any interrupt occurring in order to change the adder value will glitch the output.
Anyway, the whole point of my DDS example is to show how interrupts "steal time" from the CPU, and how they can upset time-critical things going on elsewhere in the code. This AVR DDS is some of the simplest code I know of to demonstrate the failure of the interrupt idea at the extreme end of the scale.
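To put rough numbers on that glitch: the sketch below assumes a 16 MHz clock and about 30 cycles for a minimal interrupt (entry, a tiny body and reti); neither figure comes from the original post.

#include <stdio.h>

int main(void)
{
    const double f_cpu       = 16e6;  /* assumed AVR clock, Hz                    */
    const double loop_cycles = 9.0;   /* from the listing above                   */
    const double isr_cycles  = 30.0;  /* assumed: vector + prologue + work + reti */

    double normal   = loop_cycles / f_cpu;                 /* one sample period    */
    double glitched = (loop_cycles + isr_cycles) / f_cpu;  /* period hit by an IRQ */

    printf("normal sample period : %4.0f ns\n", normal * 1e9);        /* roughly 560 ns  */
    printf("period with one IRQ  : %4.0f ns (%.1f times longer)\n",
           glitched * 1e9, glitched / normal);                        /* roughly 2400 ns */
    return 0;
}

Whether one stretched sample matters depends on the application, but the time has to come from somewhere.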
You can bit bang on other processors you know
Yes, of course you can. And if your data rate is slow enough you can probably do it under interrupt control, bit by bit.
Speed things up and it fails. Or throw in some other time critical task for that CPU and it fails.
Oops we need some hardware for it...or why not just use another core and some code?
I noticed that you did not answer my first question i.e. "I am confused when you say interrupts do not work in the scenarios that Leon and I posted"
And what timer was added? The chip I used has an abundance of timers internally.
And the examples posted by Leon and myself show how to do it glitch-free by using IRQs, so therefore nothing is stolen.
I think that you are missing the point, i.e. a well-thought-out design will work in any given scenario, multi-core or not.
IRQs can be your friend too:)
_Sine_IRQ
movb value, dac
rti
OK. When I say "Interrupts do not work" I guess either I'm stating my case too forcefully and/or you guys are taking me too literally.
Obviously interrupts do work, they have been in use for decades for millions of applications. No issue there. Yes of course interrupts can be used in the scenarios we discussed, up to some level of performance. And yes a well thought out design, that is not finding itself pushed over the limits of what the CPU, peripheral hardware and interrupt mechanism can handle, will work.
BUT...
The fundamental fact is that interrupts divert the CPU from whatever it was doing; they cause the CPU to spend some time handling whatever it was that caused the interrupt, then they allow the CPU to return and continue doing whatever it was doing. Interrupts "steal time" from somewhere.
Let's try and think of a concrete(ish) scenario:
1) You have a processor delivering one billion operations per second. 1 ns per operation.
2) You have a couple of interrupts from a couple of external events.
3) To make it easy there is going to be only one of each interrupt per second.
4) Let's say handling those interrupts consumes a mere 10% of the available processor power, 50 million operations each.
Looks like we are in business. The processor can easily provide 50 million ops per second for each interrupt handler. The background process will be 10% slower, because the interrupt handlers will "steal" that time, but we probably don't mind.
OK. Now let's add serious real-time requirements.
1) You can expect both interrupts to happen simultaneously sometimes.
2) Each one of those interrupts MUST be completely handled within 50 million nanoseconds.
Ooops, bang, we can't do it. We have far more than enough processing power to handle the average load but we cannot meet that peak demand: inside that 50-million-nanosecond window the CPU can only execute 50 million operations, while the two handlers together need 100 million. This is why I say "interrupts don't work". Interrupts cannot always put the processing power where you need it, when you need it, even if you have heaps more processing power than you actually need.
As you see with this example we have the processing power but we can't do the job.
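A quick check of that worst case, using only the numbers given above:

#include <stdio.h>

int main(void)
{
    const long long ops_per_ns  = 1;            /* 10^9 operations per second   */
    const long long ops_per_isr = 50000000LL;   /* work required per interrupt  */
    const long long deadline_ns = 50000000LL;   /* each must finish within this */

    long long available = deadline_ns * ops_per_ns;  /* ops the core can do in the window */
    long long needed    = 2 * ops_per_isr;           /* both events arrive together       */

    printf("ops available in the window: %lld\n", available);   /*  50,000,000 */
    printf("ops needed by both handlers: %lld\n", needed);      /* 100,000,000 */
    printf("one core meets the deadline: %s\n",
           needed <= available ? "yes" : "no");                  /* no          */
    return 0;
}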
What can we do:
1) Get a processor that is twice as fast. That may not be possible given the technology available at the time.
2) Build some hardware to deal with one or both of those interrupts. That is hard and expensive.
3) Throw another core at it.
Option 3) is where XMOS and others are heading with multi-core MCU's. I believe the Prop II will also be a contender in that space.
The way most programmers handle the latency issue is to do most of the "interrupt" processing in the background, and not when servicing the interrupt. In your example, the latency requirement can be met if half of the 50 million cycles can be executed outside of the ISR. Normally an ISR will just buffer data or set a flag, and the rest of the processing happens after exiting from the ISR. Of course, there will always be cases where all of the operations need to be performed in the ISR, and parallel processing makes sense in that case.
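A minimal sketch of that buffer-and-flag pattern; read_device() and process() are hypothetical placeholders, and on a real part the shared indices would also need the target's atomic-access rules respected.

#include <stdint.h>

uint8_t read_device(void);          /* hypothetical: grab the pending data  */
void    process(uint8_t sample);    /* hypothetical: the expensive work     */

#define BUF_SIZE 64u                /* power of two so wrap-around is cheap */

static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head;       /* written only by the ISR              */
static volatile uint8_t tail;       /* written only by the main loop        */

/* The ISR does the bare minimum: capture the data and get out. */
void device_isr(void)
{
    buf[head % BUF_SIZE] = read_device();
    head++;
}

int main(void)
{
    for (;;) {
        while (tail != head) {                   /* anything queued?        */
            uint8_t sample = buf[tail % BUF_SIZE];
            tail++;
            process(sample);                     /* the bulk of the work    */
        }
        /* ...background tasks... */
    }
}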
If I was given the choice between a Prop that runs at 80 MHz with 8 processors or one that runs at 640 MHz with one processor and interrupts I would go for the interrupts every time.
How many MIPS would an $8 ARM chip do? If it's around 160 MIPS it would be comparable to a Prop. How much internal RAM does an ARM chip have?
IRQs are never your friends. They were put into single-core CPUs to maintain semi-parallel functions, but for that YOU pay with broken timing in the MAIN program.
The next problem you need to consider is writing correct code for them that gives control back to the MAIN program. There are more problems still if you use nested IRQs, where one IRQ will suspend an IRQ of lower priority. And as if that were not enough, you always need to think about the time an IRQ handler needs to complete: if it needs more time than the gap before the NEXT IRQ occurs, it will never give the MAIN program control.
So, as I said, IRQs can't be your friend. And that is only some of the problems you can have with interrupts!
100 MIPS for a Cortex-M3 LPC17xx, but you won't actually get 160 MIPS of real processing power out of a Propeller. Don't forget all the hardware peripherals you can get: two UARTs, 12-bit ADC, USB OTG, CAN, Ethernet, PWM, 10-bit DAC, timers, I2C, I2S, and lots of I/O. Up to 512k flash and 64k SRAM. They cost about $6 in quantity, but the much cheaper Cortex-M0 has many of those features.
An NXP LPCXpresso board with an LPC1768 costs under $30 from Digi-Key. Development tools are free.
The way most programmers handle the latency issue is to do most of the "interrupt" processing in the background, and not when servicing the interrupt.
Yep.
In your example, the latency requirement can be met if half of the 50 million cycles can be executed outside of the ISR
Err... yep. Except if you do that it's not my example any more; it is a less stringent problem.
The latency is only part of the issue here, getting the entire job done is the point.
Of course, there will always be cases where all of the operations need to be performed in the ISR, and parallel processing makes sense in that case.
Exactly my point.
If I was given the choice between a Prop that runs at 80 MHz with 8 processors or one that runs at 640 MHz with one processor and interrupts I would go for the interrupts every time.
Why is that?
When you write your interrupt handlers you then have to do something to hook them up to the appropriate interrupt signals.
What if starting those interrupt handlers as threads on different cores was no harder to do than what you have to do for interrupts? I.e. there isn't any complicated extra code to write, just a few lines that are a bit different.
Presumably in that scenario one would not care if the handling was done by interrupt or by another core. Logically it comes out the same. There are languages and systems where this is entirely possible. See "The chip that shall remain nameless".
Except, having the work done on a different core is guaranteed not to mess up the timing of what you already have. In that way it might be easier to reason about and manage the entire system.
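In code, the comparison being made is roughly the following; attach_interrupt() and start_on_core() are hypothetical stand-ins, not any particular toolchain's API.

typedef void (*handler_t)(void);

/* Hypothetical stand-ins, not a real API */
void attach_interrupt(int source, handler_t h);
void start_on_core(handler_t h);

void handle_event(void);            /* the handler body is the same either way */

void setup_with_interrupts(void)
{
    /* handler runs by borrowing cycles from whatever the single core was doing */
    attach_interrupt(3, handle_event);
}

void setup_with_a_spare_core(void)
{
    /* handler runs flat out on its own core; main's timing is untouched */
    start_on_core(handle_event);
}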
How on earth did we get into all this? I just wanted to poll for language usage:)
Sapieha, do you know the meaning of "tongue in cheek"?
For a simple case of receiving a byte from an internal UART and putting that byte into a buffer an IRQ is ideal; with the Prop you have to use a whole core (every time I see the term Cog I have a dystopian flash of something steampunk) to do the same thing.
I had one application some years ago (looking back) where using a Prop would have been ideal: the cores would not have been used simulating peripherals but each would have run a part of the program. For that I used an HC11 with lots of interrupts:)
I've got identical answers for both the Propeller and the SX: Assembly. I can't even conceive of running BASIC on the SX. What would possibly be the point?
As for the Prop vs the ARM Cortex M3, I agree with Leon that the M3 is made to run C efficiently, and with the Code Red IDE, it's very convenient and easy. But if a particular algorithm will parallelize well and fit within the memory limitations of a cog, the Propeller will out-perform the Cortex M3 handily.
The Prop also gets my vote for being easier to prototype with. It takes just a few minutes to get a bare Propeller chip wired up and running on a blank proto board. The Cortex M3 takes a lot more fussing and time before you can program and run it. Furthermore, if standalone use with a keyboard and monitor is part of your plan, there is absolutely no comparison.
Personally, I'm glad we live in a world that has both chips.
If I was given the choice between a Prop that runs at 80 MHz with 8 processors or one that runs at 640 MHz with one processor and interrupts I would go for the interrupts every time.
A single processor at 8 times the speed of the Prop would allow me to use the entire compute power of the chip with a simple program. If I want to use the entire compute power of the Prop with 8 processors I would need to break up my simple program into a more complex parallel-processing program. If I need to perform real-time I/O I could write a fairly simple ISR to perform that task. I admit that there is a learning curve to writing ISRs, but it's not any more complicated than making parallel processors work with each other.
Most Prop programs run in a single cog with the additional cogs providing peripheral drivers such as serial, I2C and SPI support. The cog that is running the main program is only operating at 20 MIPS, so most programs can only run at 20 MIPS, and that's if they're written in PASM. A program written in Spin runs at 500 KIPS or less.
So the disappointing fact is that the Prop, with its 8 parallel processors and the potential of delivering 160 MIPS, really only yields 500 KIPS or less, plus up to 7 programmable peripheral devices. The Prop does have nice hardware features with its counters and video driving capability. Unfortunately, the 32K of RAM isn't enough to provide high resolution graphics.
I'm looking forward to the Prop 2, which should solve some of the problems with execution speed and graphics resolution. Spin performance will be much better on the Prop 2, but I think that C will be the real winner on this processor. Unfortunately, it will still have the same limited cog memory and problems associated with trying to get full performance out of parallel processors.
That's why I prefer a single high-speed processor with interrupts.
A single processor at 8 times the speed of the Prop would allow me to use the entire compute power of the chip with a simple program. If I want to use the entire compute power of the Prop with 8 processors I would need to break up my simple program into a more complex parallel-processing program.
It's not really related to the language choice (the original topic of this thread) but I agree that if you have an essentially single-threaded application and just want straight-line speed, you probably shouldn't choose a Propeller. There are lots of other choices available here - the ARM range alone offers many alternatives (depending on how fat your wallet is!).
On the other hand, if you have a multi-threaded application and want true determinism (in up to 7 threads) plus as many other non-deterministic threads as you might need at the time - all without resorting to interrupts (interrupts! gosh - how quaint! **) - then a Propeller is the natural choice. It is also extremely affordable.
In between these two there is (as usual) a large grey area where the choice is not so clear-cut, full of other good offerings such as the Chip That Dare Not Speak Its Name, and also the NXP offerings currently being discussed in another thread (here).
The reality is that all of these are niche chips that address different (but sometimes overlapping) niche markets.
Ross.
** I hope you all realize I'm joking here - but in fact Parallax might think about playing up the "interrupts! gosh - how quaint!" line a bit more. Love 'em or hate 'em, there is no doubt that interrupts are mostly used as a mechanism for allowing non-concurrent processors to simulate what the Propeller does naturally.
Some of this is still language related and so somewhat on topic so here goes:
A single processor at 8 times the speed of the Prop would allow me to use the entire compute power of the chip with a simple program. If I want to use the entire compute power
Yep, true.
I admit that there is a learning curve to writing ISRs, but it's not any more complicated than making parallel processors work with each other.
On a Prop, starting up a new core is as easy as COGNEW(.....). Getting interrupts working, getting the priorities right, etc. is always harder and often full of surprises.
A program written in Spin runs at 500 KIPS or less.
I just wrote my FFT in Spin and then PASM. It's a sizeable piece of code and so makes a good comparison between the two. The Spin version is 77 times slower. That's only the equivalent of about 260 KIPS (20 MIPS / 77 is roughly 260 KIPS).
One starts to see why people look at 100 MIPS ARM processors for a couple of dollars and wonder why anyone bothers with the Prop.
The fact that it is interpreted kills speed; the fact that all code and data go in and out of slow HUB kills speed.
Makes me think it's a shame that COGs are not able to execute Spin byte code directly from COG space, perhaps using data in COG space. How many different byte codes are there? Maybe that's even doable on a Prop II?
That's why I prefer a single high-speed processor with interrupts.
Horses for courses. Select what works best/cheapest/easiest for the job at hand.
The multi-core MCUs from Parallax and others are here to stay. As RossH says, "all of these [MCUs] are niche chips", and the multi-cores have their niche too. Which, currently, is not running large wodges of code at speed.
I want more voters.
Besides, someone is asking what languages we use on the Prop, so here it is so far.