"indented languages can be unpredictable." sort of makes sense.
A language that relies solely on white-space context is, to my mind, less than ideal.
A general-purpose language should be able to $INCLUDE lines from another file and just work, and conditional defines should also just work.
- but I will accept that Spin is not a general-purpose language.
Uhm, seems to me that a number of y'all on the anti-whitespace side of the argument are also frequent C programmers. Do any of you still write your own make files? What was the difference between 5 spaces and a tab? See what happens with make when you use the wrong whitespace chars. Yet another whitespace-driven language...
These days all my programming seems to be done in Perl or Spin, and I have forgotten more about C than I still remember. I'm also one of those who hates the block structuring by whitespace that Spin and Python use. A couple of interesting things to consider about make: it was written as a quick hack, and although the original author intended to "fix" it once the project it was developed to assist with was done, he never did, and has apologized to the world for neglecting to do so!! If you use the world's greatest text editor to write your C code and Makefiles you can simply ":set list" and the differences between spaces and tabs will be clear!! (That should add 8-9 pages to an argument about which editor is in fact the best!! :cool:)
Yes, we C programmers use make files. And yes, the use of tab indentation in make files is just plain stupid. That is not C's fault. However, as there is only ever one level of indentation in make files, it is not so annoying.
One annoying thing with whitespace indentation is that tools like diff don't work well.
For example, if I change the indentation in a file from 4 spaces to 2, diff will show that almost every line has changed compared to the old version.
I can tell diff to ignore whitespace changes, but then it is going to think that the odd line or two where the indentation has changed, moving the code into or out of a block, is not a change.
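To make that concrete, here is a tiny, contrived Python demo (using `split()` as a stand-in for the whitespace-ignoring compare that `diff -w` does): a one-level dedent changes what the code does, yet the whitespace-blind view sees no difference.

```python
# A one-level dedent moves print() out of the loop, changing behaviour,
# but a whitespace-ignoring comparison treats the two versions as equal.
old = """\
total = 0
for x in (1, 2, 3):
    total += x
    print(total)
"""
new = """\
total = 0
for x in (1, 2, 3):
    total += x
print(total)
"""

exec(old)   # prints 1, 3, 6 -- print is inside the loop
exec(new)   # prints 6       -- print has moved out of the loop

# Compare roughly the way `diff -w` would: ignore whitespace on every line.
same_ignoring_ws = [a.split() for a in old.splitlines()] == \
                   [b.split() for b in new.splitlines()]
print(same_ignoring_ws)   # True: the whitespace-blind view thinks nothing changed
```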
That's not a problem with whitespace-delimited languages; it's a problem with diff. It just needs to be smarter and recognize which spaces are significant and which ones aren't.
The decision to not have interrupts was not made trivially. It was well thought out and deliberate. You're not going to get them. It's simply not part of the design philosophy. There's nothing evil about them. They're just not needed in a multiprocessing environment and they do mess up determinism unless you carefully turn them off some of the time. They're really a bear to handle when you've got a pipelined processor design since you either have to put them off while you empty the pipeline or you have to abort the pipeline in mid-process and undo some of the operations already performed to get things into a stable state to switch to another instruction stream with a saved state.
I know the whole interrupts/no-interrupts thing is a religious one and that we're going to have to disagree about it. While I wish the Propeller had interrupts (of course they would have to be maskable), the fact is that it doesn't, and neither will the Prop 2, and since Chip doesn't like them there is little chance that any Parallax-designed microcontroller will have interrupts. Although I don't understand the objections to including them in some form, as there would be no requirement that they be used if you don't want to. I managed to write a lot of assembly code for the 8085, 6809/6309 and 68HC11 without using interrupts!!
I also think that the "determinism" argument is a red herring for a number of reasons. The first is that in sections of code that need to be executed without interrupts, they would simply be masked or disabled. The second is that interrupts are most useful when writing event-driven code: when a given handler needs to run, it needs to run NOW! We also like to pretend that the Propeller is purely deterministic, in that we know how many cycles a given section of code will take to run. Sounds good until hub access is needed. At that point reads and writes to hub RAM can take an indeterminate number of cycles.
While I don't think the Propeller is perfect, it is far and away the best microcontroller I've worked with!
Sounds good until hub access is needed. At that point reads and writes to hub RAM can take an indeterminate number of cycles.
Not so. If the hub accesses occur at the same points, in a loop with a fixed period -- as they often do in tight, deterministic code -- the hub requests will hop on the carousel at the same point every time. No indeterminism there! But if your program is written from the get-go with non-deterministic conditional branches and the like, hub indeterminism is hardly an additional concern.
The only time I've ever had to deal with anything approaching indeterminism in the Propeller, was when using waitvids whose timing was controlled by a PLL that was not tied to the system clock. But you plan around stuff like that.
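For anyone who hasn't internalized the carousel, here is a toy Python model of the idea (the 16-clock hub rotation is the usual P1 figure; everything else here is deliberately simplified and not real PASM timing): with a fixed gap between hub accesses the stall locks to a constant value after the first access, while a varying gap makes the stall wander.

```python
HUB_PERIOD = 16  # clocks between one cog's hub windows (toy model)

def hub_waits(gaps, start_phase=3):
    """gaps: clocks of non-hub work between successive hub accesses.
    Returns the stall (in clocks) seen at each hub access."""
    waits, t = [], start_phase
    for gap in gaps:
        stall = (-t) % HUB_PERIOD   # clocks until this cog's next window
        waits.append(stall)
        t += stall + gap            # access granted, then run the next chunk
    return waits

print(hub_waits([50] * 6))          # fixed gap -> [13, 14, 14, 14, 14, 14]: constant after the first sync
print(hub_waits([40, 52, 44] * 2))  # varying gap (branchy code) -> the stall changes from pass to pass
```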
Some of you seem to believe that since I support NO INTERRUPTS I don't know how to use them. I have used them longer than most of you have been alive. I have done soft UARTs and modem code and the AT command set in an MC68705U3S as far back as 1985, all in 3.7KB of EPROM. In about 1981 (with, but prior to the actual release of, the MC68705P3S - only 1.7KB EPROM) I did similar with a USART (special gate-level protocol) running at 56Kbps (19us bit timing) that required bit syncing, using a 4us instruction set. I have a lot of other examples. I used multiprocessor techniques in the 80's by combining 1x 68705P3S, 1x 68705U3S and 1x 68701. The P3S was just a glorified clock (baud generator) for the other 2 chips, and it was cheaper and easier than a proper baud generator chip.
I have written a minicomputer emulator on an Intel 486 micro, in assembler, using mainly 486 instructions to get performance, and it was commercially validated.
So, I have mastered interrupts. But just because I have does not mean that there are not easier ways. In fact there are... use the Prop chip.
On the Propeller, one would just have that event handler waiting. It can be masked and/or have modes depending on what it needs to do. In any case, it's just there, waiting to run based either on a communication from some other process running on one of the multiple CPUs, or on some I/O state, etc.
Doing this can be a matter of frustration for sure! There were a lot of discussions early on about this, and one notable comment Chip made was related to timing. In a multi-processor environment, really putting all the cores to use comes down to timing and planning.
There is a time investment one way or the other, and there are really easy cases one way or the other too.
One Propeller easy case is video! It is really hard to beat how easy doing that is on the Prop. One can build up a video system, fire it off, and from then on consider it just like a piece of custom hardware, and it's robust. Doing that on a system with interrupts is a lot harder as some kernel code needs to be written to manage things. Arguably, the video code on a Prop is a kernel, but it's simple, and can be dedicated to that task, and reused very easily.
One really easy case on an interrupt system might be capturing some input, or responding to something while processing something else. The easy case is just: service the interrupt and ensure its time consumption doesn't impact other code. Of course the harder case is doing that with multiple tasks, some consuming significant resources. On a Prop, doing multiple resource-intensive things is easy, whereas doing one simple thing can be seen as a waste of a core, or requires polling / a kernel to stack multiple simple things onto one core.
It's all just trade-offs really. The key to the Prop is to explore the non-interrupt scenarios and get good at them, then maximize the use of resource intensive tasks and re-use.
Not that interrupts are bad. It's just that they don't exist on a Prop. Not that not having interrupts is bad either. Other designs can be difficult to use without them.
The skill investment people made to master interrupts is not unlike the skill investment they would make to really take advantage of a concurrent multi-processor. Because the Prop features that concurrency in its round-robin shared memory model, optimal use of it really does involve making that same kind of investment, and when it is done, the programmer will get the same kinds of returns as they did going the interrupt route, with the benefits of the Propeller more fully realized.
It should be possible to build a bus arbiter that provides deterministic accesses as well as a first-come first-served approach. Each cog could set a bit to indicate whether it wants a dedicated hub slot or wants as-soon-as-possible access. For each memory cycle, the arbiter would give priority to the cog associated with that cycle if that cog requested dedicated access and it has an access request pending. Otherwise, it would give access to the next cog on the list. This would reduce hub access waits for cogs that don't need deterministic timing.
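Something like this, sketched in Python (the names and the round-robin detail for the as-soon-as-possible cogs are my own invention, just to pin the rule down; this is not how the real P1 hub works): each memory cycle goes to the slot's owner if that cog asked for a dedicated slot and has a request pending, otherwise to the next waiting ASAP cog.

```python
NUM_COGS = 8

def arbitrate(cycle, pending, dedicated, rr_next):
    """Pick which cog gets the hub on this memory cycle.

    cycle     -- current memory cycle number
    pending   -- set of cog ids with a hub request waiting
    dedicated -- set of cog ids that asked for their fixed slot
    rr_next   -- next cog id to consider for first-come-first-served
    Returns (granted_cog_or_None, new_rr_next).
    """
    slot_owner = cycle % NUM_COGS
    # Deterministic cogs always get their own slot if they have a request pending.
    if slot_owner in dedicated and slot_owner in pending:
        return slot_owner, rr_next
    # Otherwise hand the slot to the next waiting "ASAP" cog, round-robin.
    for i in range(NUM_COGS):
        cog = (rr_next + i) % NUM_COGS
        if cog in pending and cog not in dedicated:
            return cog, (cog + 1) % NUM_COGS
    return None, rr_next
```

Cogs that asked for a dedicated slot keep exactly the timing they have today; the slots they leave idle get recycled for everyone else.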
As far as interrupts versus multiple cores go -- it is easier to build an N-core chip that runs at a certain clock rate than to build a single-core chip that runs at N times that clock rate. However, in my view a single-core chip with interrupts is easier to program than a multi-core chip, and uses the chip more efficiently. How many Prop programs do you know that really do parallel execution? Most of them run a main program on one cog, and use the remaining cogs for peripherals.
So really the main feature of the Prop is to support soft peripherals with the potential of doing parallel processing. That's why I find the Propeller so interesting to work with. Sometimes "interesting" can be "challenging" and "frustrating".
How many Prop programs do you know that really do parallel execution?
Hi Dave
Most of the stuff that I am doing makes use of parallel processing - that's the Prop's USP and I want to leverage that capability. Interrupt-driven systems are a different paradigm. While not a programmer by trade, I have written lots of PIC interrupt-driven stuff. The annoying thing there was that the interrupts on each chip seem to be a bit different from the next one. Ahh, those thousand-page datasheets!
I am enjoying the way that the Prop can do most stuff on the silicon. There is no perfect one-size-fits-all PIC (or if there is, Leon will let us know) and the Prop seems more flexible than most PICs that I have played with (about 6 types out of hundreds, admittedly). The Prop seems to me to be a Leatherman among microcontrollers; it does lots of things pretty well and it's nice to own too. For all my grumbles, I think Parallax does a pretty good job... If only they would make their stuff easier to ship to the UK!
I suppose a related question is how many programs need parallel execution, and/or need peripherals?
In the video / audio domain, parallel execution works nicely and isn't too difficult to leverage, now that some groundwork has been done. Controlling lots of things fits in there too, as does the need for configuration flexibility. What doesn't fit so well is larger, "let's pretend it's just a multi-core CPU" type programs. Seems to me, those things aren't fully overlapping.
When we get into "let's pretend it's a multi-core CPU", lack of interrupts starts to get a little painful, and I would argue as it should because that's pushing an edge case, where it's possible to do, but not necessarily optimal. In that scenario, BTW, can't the kernel running larger programs provide interrupt capability? Seems to me, that's doable at the overall cost of speed, which again is already a consideration on the edge case anyway.
As for "easier", yeah and I think I would argue a whole bunch of that is familiarity built up over time. The way the Propeller does things doesn't have that same familiarity over time, and a quick look at what we generally do now with ease, compared to what was done before with ease shows a very significant jump! Some more time passes, and that same kind of jump will happen again, and have the newer design in play too.
On the matter of speed, right now the P1 is right at a speed that makes a lot of things possible, and giving up some of it can be painful. P2 has many optimizations that should relieve some of this, though there will always be those edge cases, but they will be different, more demanding ones than we see now. Depending on where that all falls, we could find much of this discussion moot with the right kernel running stuff, libraries and such all playing along in a more roomy, resource rich design than we have today too.
I think the best path is to keep exploring optimal ways to do stuff both in parallel and without interrupts. I suspect the number of hard cases will continue to drop as optimal paths are found, just as it did with other designs; every design has its trade-offs, and its edge cases where the argument for an alternative design ideal competes with those edge-case needs.
Let's say you want to do something like JPEG encoding or decoding. I've done some limited testing on this, and it is painfully slow on a single cog on the Prop 1. A single cog runs at 20 MIPS. If I could use all 8 cogs I should be able to get 160 MIPS. However, it's not trivial to split the processing up among 8 cogs. I wouldn't have to worry about splitting up the processing on a single 160-MIPS processor.
The other thing I realized recently is that multi-cores waste processing when they are waiting on a single resource. Let's say I have two cogs that are accessing an SD card. I need to use a lock so they don't step on each other. If both cogs are trying to use the SD card at the same time one of them will sit in a polling loop while the other one has the lock. The polling loop is wasting cycles. In a single processor scenario, the two cogs would be implemented as two threads. When one thread is accessing the SD card the other thread is sleeping and not wasting cycles.
As I said, I do enjoy the challenge of programming the Prop, but it can be frustrating at times. The nice thing about the Prop is that it is very versatile, and you don't have to bother with interrupts.
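A desktop-Python analogy of the polling-versus-sleeping point above (not Prop code; the lockset comparison in the comments is only an analogy): the cog-style writer spins on the lock and burns cycles, while the thread-style writer sleeps inside acquire() until the resource is free.

```python
import threading, time

sd_lock = threading.Lock()

def sd_write():
    time.sleep(0.1)                 # pretend to write a block to the SD card

def cog_style_writer():
    # Prop-style: spin until the lock is free (burns cycles while waiting).
    while not sd_lock.acquire(blocking=False):
        pass                        # busy-wait, roughly like re-trying lockset
    try:
        sd_write()
    finally:
        sd_lock.release()

def thread_style_writer():
    # Single-CPU threaded style: sleep inside acquire() until it's our turn.
    with sd_lock:
        sd_write()

workers = [threading.Thread(target=cog_style_writer),
           threading.Thread(target=thread_style_writer)]
for w in workers: w.start()
for w in workers: w.join()
```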
Yep! That's the "let's wish it were one fast CPU" case. Agreed, though some planning may well relieve the need for locks, depending on what the tasks really are.
On a higher level note, I've been watching CAD software authors struggle with similar dynamics. A solid model, for example, makes use of a geometry kernel, and generally speaking a model history. These things are not easy to do in parallel.
The ideal case there is peak single-thread CPU compute, which means a single or dual core servicing the OS and other stuff, running with interrupts as needed. The faster the clock, the faster the RAM access, the better, period. We never got those 6 GHz CPU designs, instead topping out somewhere under 4 GHz, with multi-core now the norm.
What has begun to happen is some slow understanding of how to do things more efficiently, breaking tasks into chunks that can leverage multi-processing much better: new software features for building models in ways that allow multiple sub-bodies to be computed at the same time, options to model sans history for variational solutions instead of linear parametric ones, multi-user concurrent part design instead of just concurrent assembly model design, etc...
It's taken considerable time for these things to appear! And that all points to the fast, interrupt capable CPU being easier. Had we somehow not gotten there, branching out into multi-processing much sooner, or perhaps at smaller scales, I wonder how differently things would be done today... I guess that was kind of the point I was making above, just expressing it another way.
The developers grok it much better now. It's another 3-5 years before the larger scale user community really begins to adopt the stuff, with a 10 year total timeline to really get everyone running more optimal use cases. By then, maybe we will crank single thread speed again, who knows?
On the overall scale of things, the progress this community has made isn't out of the realm of reasonable expectations.
The reality is that some tasks are better suited to multithreading than others, and bottlenecks are always an issue! Examples of problems almost custom-made for multiprocessors are just about anything involving matrices (including images in Photoshop). Another, in the realm of microcontrollers, could be reading a variety of sensors while displaying the output and logging at the same time. In the case of a single core, the tasks would have to be interleaved on that single core. With multiple processors, one core could monitor the sensors, write the information to shared memory (think hub RAM) and signal other cores with semaphores (I've been using the locks) when the data is ready to be read. One core would read the raw data and process it for display, possibly sending that data to a different core for the actual display. Another would read the raw data and store it to an SD card or other mass storage device.
One thing I learned in my years of working as an engineer is that very rarely does a "one size fits all" solution fit all problems.
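A rough thread-based sketch of the sensor/display/logging split described above (the names are hypothetical; on a Prop each function would be its own cog, the queues would be buffers in hub RAM, and the signalling would be done with locks or flags rather than queue objects):

```python
import threading, queue, random, time

display_q = queue.Queue()   # stands in for "data ready" signalling to the display cog
log_q = queue.Queue()       # and to the logging cog

def sensor_cog():
    for _ in range(5):
        raw = random.random()          # pretend ADC reading
        display_q.put(raw)             # signal the display "cog"
        log_q.put(raw)                 # signal the logger "cog"
        time.sleep(0.01)
    display_q.put(None); log_q.put(None)   # shutdown markers

def display_cog():
    while (raw := display_q.get()) is not None:
        print(f"display: {raw:.3f}")

def logger_cog():
    with open("sensor.log", "w") as f:
        while (raw := log_q.get()) is not None:
            f.write(f"{raw:.6f}\n")

cogs = [threading.Thread(target=f) for f in (sensor_cog, display_cog, logger_cog)]
for c in cogs: c.start()
for c in cogs: c.join()
```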
I would not say it is frustrating; limited, perhaps, as follows:
Our core business is OEM embedded devices and there have been many times when the Prop would have been the ideal solution.
However, each time we rule it out for one or more of the following reasons:
1. Cost per chip.
2. Limited memory.
3. Limited I/O.
4. Cost of external EEPROM.
5. Cost of Xtal.
6. Code security concerns.
Considering that Parallax Semi is (was?) being pitched towards companies like mine, I would think the above points have merit.
Now the mythical Prop 2 is supposed to address the points above and therefore make it a more attractive proposition for large-volume "commercial" use.
However, for a lot of common embedded apps this amounts to cracking a nut with a sledgehammer in terms of features, and we will still have these points:
1. Cost per chip.
2. Cost of another regulator.
3. Cost of external EEPROM.
4. Cost of Xtal.
5. Increased PCB real estate cost.
Personally I think a better move would have been to introduce a Prop variant in a 64-pin QFN (and DIP) that added a second I/O port (to max out the 64-pin count) with 64K RAM. We could have consumed about 10K per year of such a device.
In my opinion the Prop 2 is make or break for Parallax, assuming that it sees the light of day.
And because of having to read 5,000 pages of documentation for the various candidate chips, maximizing the cost of development.
The alternatives being what?
1) Try to find the answer buried as an OT reply to a forum post about a completely different subject?
2) Post a question to a forum and then wait for someone in a different time-zone to answer?
3) Read through hundreds of lines of example code trying to figure out how it works?
Sorry, but the 'lack of complete documentation = good' argument is totally flawed.
And because of having to read 5,000 pages of documentation for the various candidate chips, maximizing the cost of development.
One only has to skim the sections of the documentation for the peripherals one is going to use, perhaps 10 pages or so per peripheral for a particular device. Usually, one doesn't even have to do that, as manufacturers provide suitable on-line selection tools like this:
http://www.microchip.com/productselector/MCUProductSelector.html
Anyway, if it enables a $7 device to be replaced by one costing $1, with much lower power consumption and a smaller package, it's worth it. Hobbyists needn't bother about such matters, but they are important to professional designers.
Except for when they aren't. Lots of ways to skin that cat Leon, all hashed here many times before. Thanks again for making sure we hear the way you prefer. Nice reminder.
You could save yourself a lot of typing each year by writing a script for the keyboard F keys:
F1 a pic is always better suited than a prop
F2 interrupts are better
F3 a pic is cheaper than a prop
F5 always choose the cheapest option, which is the pic
F6 i love the pic over the prop
F7 xmos does way more than a prop
F8 parallax is no contest, just use xmos or pic
Then you can just scroll through the threads, and only hit one button for a reply!
You can't make generalisations like that. Sensible designers choose the optimum device for a particular application: for one application a Propeller might be the best choice, for another, an AVR, MSP430 or something else might make sense.
Who is arguing that point? Wouldn't it rather be an insult to other real engineers to suggest that they need someone to explain such obvious points to them? Or do you feel that you are the only one who knows this?