Well, it appears to be time slices of equal size in rotation. So with three tasks taking turns on one CPU, each task effectively runs three times slower. There are situations where that is adequate.
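For anyone who wants that idea in code, here is a minimal sketch of round-robin time slicing on one CPU, with made-up task functions purely for illustration: equal slices in strict rotation, so each task only gets a third of the time.

/* Minimal sketch of round-robin time slicing on one CPU: three tasks in
   strict rotation with equal slices, so each one advances at a third of the
   speed it would have alone.  The task functions are made up for illustration. */
#include <stdio.h>

static void task_a(void) { /* one unit of task A's work */ }
static void task_b(void) { /* one unit of task B's work */ }
static void task_c(void) { /* one unit of task C's work */ }

int main(void)
{
    void (*tasks[3])(void) = { task_a, task_b, task_c };

    for (int slice = 0; slice < 30; slice++) {
        int current = slice % 3;          /* equal slices, strict rotation */
        tasks[current]();
        printf("slice %2d -> task %c\n", slice, 'A' + current);
    }
    return 0;                             /* each task got only 10 of the 30 slices */
}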
With the Propeller, you don't have to do time slices, just start another Cog to handle another task. In some cases one task can be made faster by running slices of code on more than one Cog. In Forth on the Propeller, it is quite easy to grab a Cog and let it run until a task is completed and then release it for a completely different task. So you can have a lot more tasks running at the same time, and you can easily move into and out of a new agenda of tasks.
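For contrast with the time-sliced loop above, here is roughly what the cog-per-task idea looks like from Propeller C with PropGCC. The cogstart()/cogstop() calls and the stack size are written from memory, so treat the exact details as assumptions; the shape of the thing is the point.

/* Rough sketch of the cog-per-task idea in Propeller C (PropGCC).
   cogstart()/cogstop() are from <propeller.h> as I recall them; treat the
   exact signatures and the stack size as assumptions, not gospel. */
#include <propeller.h>

static unsigned int blink_stack[64];          /* private stack for the new cog (size is a guess) */

static void blink_task(void *pin_ptr)
{
    int pin = *(int *)pin_ptr;
    DIRA |= (1 << pin);                       /* drive the pin as an output */
    for (;;) {
        OUTA ^= (1 << pin);                   /* toggle the LED */
        waitcnt(CNT + CLKFREQ / 2);           /* half a second; no time slicing needed */
    }
}

int main(void)
{
    static int pin = 16;
    /* Grab a free cog for the task; it runs truly in parallel with main(). */
    int cog = cogstart(blink_task, &pin, blink_stack, sizeof(blink_stack));

    /* ... main() keeps doing its own work on its own cog ... */

    cogstop(cog);                             /* release the cog for a completely different task */
    return 0;
}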
Yes, it must be a copy of my picture! I've used a similar illustration before in explaining multi-TASKING. What comment would you like to leave? It's easy to get an account.
The Propeller architecture is a multi-PROCESSING architecture, not a multi-TASKING architecture (as of P1). It is also possible to build a multi-TASKING model on top of the Propeller's multi-PROCESSING architecture.
Perhaps you want to study up on multi-TASKING versus multi-PROCESSING before you leave your comment?
Parallel architecture. Multiprocessing and multitasking are the same thing - well, this is always how the word used to be used. A multiprocessing operating system was one with a pre-emptive scheduler, basically. A multi-processor machine is another matter.
Having grown up with true multiprocessing (multiple symmetrical CPUs) since 1980, I'll agree to disagree with you. Multi-PROCESSING involves more than one CPU (or core, in this modern world) executing independent tasks (execution units, threads, etc.), giving true simultaneous execution of typically independent programs. Multi-TASKING is the capability to have one or more CPUs switch context between execution units to give the appearance of simultaneous execution of programs.
Parallel architectures are different beasts with different goals in mind.
From the point of view of a program running under a preemptive multi-tasking operating system, or on a multi-processor system, or both, it is all logically the same.
As soon as you have an interrupt that takes control away from the program in order to run some other part of the program, it may as well be different parts running on different processors.
My contention is that, apart from performance, a multi-threaded program cannot tell if it is running on one CPU or many.
Unless of course you can provide some example code where that is not true :)
With the smiley I cannot tell if you are joking or celebrating a logical victory point.
Anyway, just in case, sure you can call a function to ask for CPU affinity. Of course the OS can return a success value. Like OK, done that. Even if there is actually only one CPU.
Ah, you say, what if I write my code in assembler or even binary? What if there is no OS?
Sure, OK, the hardware can lie to your program just as well.
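To make that concrete, here is a hedged, Linux-flavored sketch: the program can count CPUs and ask for affinity, but everything it learns comes back from the OS, which is free to say "OK, done that" regardless of what the hardware underneath really is.

/* Sketch (Linux/glibc assumed): a program can ask how many CPUs there are and
   request affinity to one of them, but it only ever learns what the OS reports. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* however many the OS claims */

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                             /* ask to run only on CPU 0 */

    if (sched_setaffinity(0, sizeof(set), &set) == 0)
        printf("OS says: OK, done that (it reports %ld CPU(s) online)\n", ncpus);
    else
        perror("sched_setaffinity");

    return 0;
}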
I actually haven't worried about multi-processing since my mainframe days when I was writing OS and system software. As a user, you are just one or more tasks competing in the pool for resources just like anyone else. If there's just one processor, fine, we share... if there's more than one processor, fine, we share. 99% of the time it just doesn't matter at the user level.
I honestly don't remember the details of how the OS I worked on handled a request for a non-existent CPU - failure seems unlikely, probably more of a graceful "sorry, you can't have it, but you still get to run". Task scheduling was my favorite part of the OS to work in. It did all kinds of cool stuff.
The Propeller continues to interest me because of the multiple COGs and how it takes me back to the old days when you were aware of the processors and what they were doing and could have control over them.
You are right: if you are lied to by the OS or the hardware, then there is no reality except for what you believe about what you are told.
Maybe the Propeller COGID instruction should just return the bottom 3 bits of the clock. (That would be a joke.)
I'm happy you are here too. And everyone else. And me. Good for us:)
Anyway, this multi-processor, multi-tasking stuff has been interesting to think about over the years. There have been many attempts at parallel processing. Think The Connection Machine, the Transputer, or the multi-processor Intel motherboards. They did not succeed. Generally it seemed that by the time you got the thing working, Moore's Law would have outpaced you and a single processor was faster and more cost effective.
Today, however, Moore's Law is not holding up so well. Processor speeds have not been doubling every two years like they did. So again we look to many-core processors and things like the Adapteva Epiphany chip.
Sperry had a very good multi-tasking (real-time capable) mainframe OS that ran on top of several generations of true tightly coupled symmetrical multi-processor hardware from the early '70s well into the '90s, when large-scale UNIX minicomputers took over the landscape and "micro" mainframes started to appear. This wasn't parallel processing; this was actual multiple CPUs running tasks through a multi-priority, pre-emptive scheduler.
I just Googled; it is still around under the name of OS2200 (OS1100 when I worked on it) and still appears to have most of the features I knew. Since it has been around since the early '70s, I'd call that a successful multi-processor architecture.
We had a better product but IBM had better marketing - we know who usually wins those battles.
http://njnnetwork.com/2009/07/1202-computer-error-almost-aborted-lunar-landing/
The code appears to have been rock-solid. But as usual, nobody understood that not all error codes were fatal. So they nearly panicked.
Moore's Law can't be a law if it can be broken. It might have applied in days when growth was more linear but now the growth curve is more exponential.
The complexity and cost of new processors is growing like crazy. Currently it costs a billion dollars to build a new fabrication facility, so I wouldn't be surprised if the next generation costs 2 or 5 billion.
Yeah, most people understand that Moore's Law is not an actual physical law. It's something of a misnomer. Perhaps it should be called "Moore's Observation" or "Moore's prediction" or "Moore's off the cuff remark at a cocktail party" :)
It does have something of a "law" about it though. It predicted something that turned out to hold true for decades. Laws don't have to hold and be true all the time. Not even physical laws. Think Newtonian mechanics vs relativity.
"It might have applied in days when growth was more linear but now the growth curve is more exponential."
Here I think you have it backwards. Moore's Law posits that "Over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years." That is an observation and prediction of exponential growth, which has been going on for decades.
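A quick back-of-the-envelope, just to show why "doubling every two years" is exponential rather than linear: after t years you have N0 * 2^(t/2). The starting count below is only an illustrative figure (roughly an early-1970s microprocessor).

/* Doubling every two years is exponential: after t years the transistor
   count is N0 * 2^(t/2).  Forty years of that is about a million-fold. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double n0 = 2300.0;                     /* illustrative start, ~an Intel 4004 (1971) */
    for (int years = 0; years <= 40; years += 10)
        printf("after %2d years: ~%.0f transistors\n",
               years, n0 * pow(2.0, years / 2.0));
    return 0;
}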
In the last couple of years there has been much discussion about the end of Moore's Law, the end of that exponential growth. A lot of that is about hitting the limits of process technology, but even if we can overcome that, the economics don't make sense.
Have a google for "the end of Moore's Law" and you will find guys from Intel, AMD, DARPA, Broadcom, all over saying the same thing.
Now generally, shrinking components results in increased speed. Have you noticed that PC speeds have not been doubling every two years like they used to?
Hence my statement about "Moore's Law taking a bashing recently" and how in order to get more processing done we will need to go parallel.
Moore is less, and less is Moore... or maybe it should be.
What I do see is that the amount of retail space for computer devices in Taiwan has gone way down. And in general, vacant shops and office buildings have gone up. The computer retail stores that do remain open have closed their upper floors and halved their inventory. Shelf space for computer books in retail book stores has also been cut in half.
It would be wonderful if Moore's Law had found the El Dorado of economic expansion. But regardless of what it states, the industry can't rest on increases in speed to drive economic prosperity. What we have may be more than adequate for most things for a long time to come.
It is like the days when Detroit would come out with a bigger and bigger automotive engine to drive sales. Less and less fuel efficiency, more and more excess capacity, and more pollution.
There comes a day of recognition when the customer (known these days as the consumer) considers the possibility of spending less foolishly. And their choices are the true driver of economic growth.
You can multitask a PIC 16c57 without any concern for Moore's Law. And the knowledge gained is likely to be of great value.
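In that spirit, here is a generic-C sketch of the usual way to multitask a tiny PIC: a super-loop calling cooperative state-machine tasks. The task bodies and the tick source are hypothetical placeholders for illustration, not real 16C57 code.

/* Generic-C sketch of cooperative multitasking by polled state machines --
   the style that suits a tiny PIC.  Task bodies and the tick source are
   hypothetical placeholders for illustration. */
#include <stdint.h>

static volatile uint16_t tick;        /* counted up by the loop below as a stand-in for a real time base */

static void heartbeat_task(void)      /* toggles an LED every 500 ticks */
{
    static uint16_t next = 500;
    if ((uint16_t)(tick - next) < 0x8000u) {   /* wrap-safe "tick has reached next" */
        /* toggle the LED pin here */
        next += 500;
    }
}

static void keypad_task(void)         /* polls a key and debounces it */
{
    static uint8_t stable_count;
    /* read the key, count consecutive identical samples, act once debounced */
    (void)stable_count;
}

int main(void)
{
    for (;;) {                        /* the whole "scheduler" is this loop */
        heartbeat_task();             /* each task runs briefly and returns */
        keypad_task();
        tick++;                       /* stand-in for a real timer tick */
    }
}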