I've been programming microcomputers and microcontrollers on and off over the years, since 1975. I got my own MC6800 system as soon as they became affordable, and had to hand-assemble my code as an assembler wasn't available.
I hereby propose a new "Propeller Law" - along the same lines as "Godwin's Law":
As a Propeller discussion grows longer, the probability of a comparison involving programming directly in machine code, or loading such programs from paper tape or punched cards, approaches 1.
Like Godwin's Law, whenever the "Propeller Law" is invoked, the thread must be terminated forthwith.
Are you really going to trust a piece of software that doesn't come from Parallax or a known vendor, for a production product?
Emphatically YES!
Let me tell you a story:
About 12 years ago I was hired by a company to help out on a certain embedded product that was being developed using Microsoft's WinCE. They had had such a hard time with it, and it still did not work well, so they switched to Wind River's VxWorks OS. Brilliant: everything came together very quickly and we had a product on the market.
All was well until more features were demanded of this product. Whilst the required support for these features was available in VxWorks, it was buggy, and again we just could not get things working. Wind River's support was unable to help.
Eventually we ditched the commercially supported VxWorks and adopted Linux. Brilliant again: everything came together quickly and we had a new, improved product on the market. That product is still doing well, having been through a few hardware upgrades, including a processor change.
Since then, all the embedded products I have worked on have been Linux based, a large number of them using "home brew" Linux systems as opposed to something ready-made from Red Hat or MontaVista, for example.
As you know, a Linux OS and all its supporting software comes from a diverse array of sources, from large organizations down to individual programmers. With careful selection and testing we have been able to rely on these components more confidently than on those from big-name vendors who supposedly guarantee and support their products.
From time to time we have had to find and fix problems in such "off the street" components, but that has been more successful than waiting on a vendor to do it.
Yeah? Well, I knew somebody who knew Turing, and Babbage is in my lineage on my father's side.
So? I once saw the ghostly visage of Muḥammad ibn Mūsā al-Khwārizmī (from whose name the word "algorithm" is derived) in a slice of burnt toast. (Unfortunately, Browser got hold of it before I could auction it off on eBay.)
About a year ago, I had the opportunity to experiment with LabView for a little bit. I do not claim any substantial experience with LabView, but that was a very neat setup and I was highly impressed with it. It would be nice if the Propeller could be programmed in a similar fashion. I know, it is just a dream.
As a hobbyist, I chose the Propeller because it was the most powerful chip I could find in a DIL package. I hope Parallax keeps the DIP option and that future chips will be more powerful, especially with regards to the amount of RAM available on-chip. Apart from that, the Propeller is perfect IMO.
Although I am not a hobbyist, I am right there with you when it comes to keeping the DIP chip. When I use a DIP, I normally install a socket for the DIP to reside in (one of the "ooopppsss" insurance policies). I think the DIP is essential for a novice or experimenter.
idbruce, exactly. I would have loved it at engineering school if we had to learn digital electronics on a Prop. Instead we used a 68000, which is also a cool chip. BTW, for educational purposes a debugger-ready IDE with C+ASM is very much needed IMO (we didn't have that though; we only had this crappy serial debugger that mostly didn't work, because the intern that coded it hated assembly).
I'm not a hardware guy, but I recall hearing that hybrid (FET + flash) chips are costlier, as they require a very specific fabrication process. Is this correct?
As a hobbyist, I chose the Propeller because it was the most powerful chip I could find in a DIL package. I hope Parallax keeps the DIP option and that future chips will be more powerful, especially with regards to the amount of RAM available on-chip. Apart from that, the Propeller is perfect IMO.
The new prop will not be in a DIP package, but don't let that scare you away. Someone will create modules so that it can be used by those not capable of dealing with surface-mount components. I went through this with the Atmel chips that are surface mount only. The little adapter boards for those came with the basic support devices mounted, so you only needed to supply power to get them working. Eventually I just learned the "way" of working with surface-mount chips. Making PCBs for them at home can be a bit tricky, but not impossible. Soldering the surface-mount stuff also is very doable - aside from the eye strain, it isn't all that difficult.
Thanks for the info. I have no experience with soldering surface-mounted components; even with 2.54 mm through-hole parts I'm very clumsy. I guess a module would be fine, though what I like with DIP is the ability to swap/replace components easily -- if you fry a module, it might cost a little more to replace.
The new prop will not be in a DIP package but don't let that scare you away. Someone will create modules so that it can be used by those not capable of dealing with surface mount components.
Chris
I can deal with surface-mount parts now, but I still love the ease of use of DIP parts.
Those modules are nice, but aesthetically I hate the sight of a small circuit
board grafted onto my project; it's just awful.
What I really don't care for is designing circuit boards... especially multi-layer boards,
as I just stink at it. :-( I suppose it's because it is more a mechanical thing, and my brain
seems to lack the neurons that handle mechanical things.
What I want is an entirely new way to design and construct electronic devices.
Something that eliminates the need to design complicated boards and simplifies
the layout of components. The rat's nest of copper connections is getting ridiculous
and just has to be tamed somehow.
ICs were a great leap forward and helped greatly by eliminating all those thousands
of individual parts...I can't imagine what it must have been like creating a computer
with 100,000 plus individual parts wired together...just OMG!
It seems to me the next step towards taming the rat's nest of connections is to move
to an optical data scheme. If ICs were all able to communicate optically using the
board they were placed on as a waveguide then layout would be very simple and copper
connections would be far fewer. Imagine a powerful uc that had only 2 pins for power.
When you needed to do something like add some I/O pins you could just stick another
very tiny IC anywhere on the board and it would be as though its pins were wired directly
to the uc except there would be no copper traces needed for connection. The same for
other items like say you needed an HDMI output for your uc...just stick an optically
enabled HDMI chip anywhere on the board and it gets data from and is controlled
by the uc optically using the board as the waveguide.
I see controllers getting so powerful in a few years that they will be equivalent to the most
powerful multi-core processors found in today's desktop computers. There is just no way to use
such powerful beasts in tiny devices like cell-phones unless you move to optical data handling;
those processors have a forest of pins beneath them! That complexity has to be reduced.
Farther down the road I think that electronic devices will be designed and built using
some kind of desktop nanotechnology enabled assemblers...no circuit boards, no parts,
no wires.. you just end up with a solid object that does whatever is required and is so
cheap it is disposable. All the complexity will be hidden within the single part that is the
device... much like how complexity is hidden within one of today's ICs.
When it comes from a vendor, it's often missing pieces needed to build it. This is done so that they have control over, and profit from, the executable code. When there is trouble, they own that problem, which may be good or bad, depending on how they handle that ownership. The user's options are limited. This is the black box.
Sometimes, vendors ship open code that people can build on their own. This is done so that they can control and profit from the development and use of said code. When there is trouble, both the vendor and the end user own that problem, with the license terms dictating who can do what and why. This is the grey box.
Some code is open and available to build and to use. This is done so that all users of the code have control over the building and executing of the code, and so that they all potentially profit from the development and use of the code. A vendor may offer support as a service, to profit from the ownership of problems associated with the code. Examples are Linux, BSD, and Source Forge in general. The user owns the problems, but can get help from other contributing users, experienced consultants, and vendors, depending on the code body under discussion. Often it's required that code derived from this code is also open code, and that's the "cost" of acquiring and using said code. This is the build your own box, or white box as all can be known and tested.
Whether or not one or more of those makes sense, has a whole lot to do with the business model in play, experience level of the product team, contracts, etc... A black box may only be tested. Knowing is largely a position of trust. The grey box can be known and tested within contract limits.
It's worth noting that when a vendor owns the problem, the developer or user typically CAN'T own the problem, which is the primary reason why open code has the appeal it does. It may not always be pretty to get yourself or a team bootstrapped onto open code, but once that is done, there is no doubt as to where the ownership of problems lies, and where the control is, and most importantly, where the trust is.
A whole lot comes down to competency and trust. If somebody has low competency, they are in a position of forced trust. If they have high competency, their need for trust varies considerably, and can be moved and changed based on need. This too is why open code has the appeal it does.
I think the Propeller is an ideal device for hobbyists and certain niche professional applications. I just don't think it compares favourably, as a general-purpose MCU for use in large-quantity production, with many of the other solutions that are available.
That's arguably very true, but the real question is - so what, what does that matter ?
Maybe Propeller becomes widely adopted by the commercial and industrial sectors, maybe it gets completely rejected, remains a niche product. Again, so what, as long as those who want to use the Propeller have access to it, as long as Parallax remains in business with a healthy looking future.
How does 'commercial success' or even 'commercial failure' affect individual choice; is it not just hand-wringing on behalf of others ? Does some food not being universally popular actually affect my choice to eat that food ?
I still think the issue of Parallax's ubiquitous commercial future with the Propeller is rather an irrelevant question. I think it actually boils down to two underlying and unsaid things driving the question -
1) What guarantee do I have that I can rely on Parallax? -- The fear that if Parallax is not a ubiquitous success it will fail and disappear.
2) I'd like to choose a Propeller but without being able to point to ubiquitous success others may consider me odd or of unsound mind -- That's basic insecurity and lack of faith in one's own judgement and justifications.
Commercial success is being taken as justification of use, so it becomes necessary to seek a positive answer that Parallax will have commercial success. It's asking the wrong question IMO.
The Parallax ecosystem is the metric that matters for anyone choosing the Prop. IMHO, if Prop II and Parallax "pro" sees any niche success, the Prop is likely secure for a long time, leaving the only other questions up to the user.
Edit: I think it's well worth noting that the Prop is a highly differentiated product, and Parallax operates as a business differently from XMOS or some larger vendor, like Microchip. "success" is simply whether or not the Propeller pays Parallax enough to continue doing business. It is not about overall share comparisons.
especially with regards to the amount of RAM available on-chip
Reading the Propeller Manual, I understand now that the register space IS the internal Cog memory, and furthermore the ISA has no room for a larger space, for example the MOV instruction is encoded as:
101000 001i 1111 ddddddddd sssssssss
Is Parallax going to change the ISA? Because that's the only way I can think of to make additional register space addressable.
EDIT: BTW, I have no idea what the transistor count of the Propeller is, but it must not be very large apart from RAM -- the architecture is beautifully simple.
EDIT2: nvm, found http://propeller.wikispaces.com/Propeller+II
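For reference, here is a minimal sketch in plain C (the field names are just my own labels) of how such an instruction word breaks into its fields; it shows why the 9-bit destination and source fields limit a cog to 512 directly addressable longs:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a 32-bit Propeller-style instruction word:
   6-bit instruction code, 4-bit ZCRI flags, 4-bit condition,
   9-bit destination, 9-bit source (as in the MOV encoding quoted above). */
int main(void)
{
    uint32_t ins  = 0xA0FC1E05;          /* example word: a MOV with an immediate source */
    uint32_t code = (ins >> 26) & 0x3F;  /* 6-bit instruction code                       */
    uint32_t zcri = (ins >> 22) & 0xF;   /* Z, C, R (write result), I (immediate)        */
    uint32_t cond = (ins >> 18) & 0xF;   /* condition field                              */
    uint32_t dest = (ins >> 9)  & 0x1FF; /* 9 bits -> 0..511                             */
    uint32_t src  =  ins        & 0x1FF; /* 9 bits -> 0..511                             */

    printf("code=%02X zcri=%X cond=%X dest=%u src=%u\n", code, zcri, cond, dest, src);
    return 0;
}
```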
1. What's the average microcontroller product lifetime in the commercial/industrial arena?
2. How long does it take to go from cutting-edge to mature to "venerable" to obsolete?
3. Can Parallax Semiconductor's chip development process keep up with this schedule?
By focusing on a hobbyist and education market, Parallax has tapped a loyal clientele for whom products mature to obsolescence very slowly. (Witness the long-term success of the BASIC Stamp line.) This is possible because Parallax is very good at continuing to add value to mature products via accessories and a growing array of educational materials. As a consequence, new core products can be introduced at a fairly leisurely pace. (The Prop I, for example, is now five years old.) Now Parallax is about to enter the fast lane. Does this necessarily entail that a Prop III be already in gestation with a Prop IV close behind in the visualization stage? In short, how much does this new reality shorten the required design cycle, and can Parallax Semiconductor keep up?
The addressing range could be increased by using bank switching, similar to the way the SX works. Bank switching is a pain to deal with, but it works. Extra cog memory would be nice. The Spin interpreter loses some speed efficiency by having to fit in 496 longs. I have tinkered with it, and have gotten about a 33% improvement in speed by using the memory from a second cog or by using an LMM interpreter to move some of the code to hub RAM. A cog with twice as much memory might be able to achieve a 2X speedup in Spin execution.
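To make the bank-switching idea concrete, here is a rough sketch in plain C (not actual hardware or PASM; the bank register and sizes are purely illustrative) of how a small bank register could stretch a 9-bit cog address:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical banked cog memory: 4 banks x 512 longs = 2048 longs total.
   The 9-bit address from an instruction is extended by a 2-bit bank register. */
#define BANK_COUNT 4
#define BANK_SIZE  512

static uint32_t cog_ram[BANK_COUNT * BANK_SIZE];
static uint32_t bank_reg;                  /* would be set by a "select bank" instruction */

static uint32_t read_long(uint32_t addr9)
{
    /* Effective address = bank * 512 + the 9-bit address encoded in the instruction. */
    return cog_ram[bank_reg * BANK_SIZE + (addr9 & 0x1FF)];
}

int main(void)
{
    cog_ram[3 * BANK_SIZE + 7] = 0xDEADBEEF;
    bank_reg = 3;                                     /* switch to bank 3                 */
    printf("%08X\n", (unsigned)read_long(7));         /* same 9-bit address, another bank */
    return 0;
}
```

The pain Dave Hein mentions is visible even here: every access has to agree on which bank is currently selected, which is exactly what makes banked code awkward to write and maintain.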
No, the ISA is not going to change. The cogs are limited to 512 addressable locations. There have been several very long threads discussing possible changes to the architecture, back before decisions were made. Rather than trying to recap that information yet another time, I suggest you find the threads and read through them. Bank switching was discussed and rejected.
Regarding transistor count, remember that there are 8 cogs, each identical. Each Prop II cog has its own multiport 2K byte RAM plus the shared 256K byte hub RAM. Each I/O pin has a quite complex controller associated with it.
One of my ideas, which I discussed with Chip in PMs, was to have a dual-space cog (COG-01 + COG-11) instead of COG + 256 RAM.
That configuration could run in 3 modes:
1. As one cog plus a 512-long user data buffer.
2. As 2 separate cogs that share the same time window to the hub (since it is possible to transfer 4 longs in one window), which causes no problems.
3. As one banked cog with double the code length.
But Chip rejected that. He said he has a much nicer solution (and he hates the banked memory model).
One problem with that solution: more silicon to implement the cogs (less room for hub RAM).
But I still think it is the most usable way to get more performance from the new Propeller
and to be able to run much more complex code on it.
That's arguably very true, but the real question is - so what, what does that matter ?
Maybe Propeller becomes widely adopted by the commercial and industrial sectors, maybe it gets completely rejected, remains a niche product. Again, so what, as long as those who want to use the Propeller have access to it, as long as Parallax remains in business with a healthy looking future.
I agree!
I don't get it... I like Parallax the way they are.
Not too big, not too small.
I don't see why the Prop needs to compete with other chips.
As long as Parallax is not going under, they are doing a good job.
The only issue is the SW side. I can wish all I want, but considering how many employees there are, it's not cost effective for them to pay a SW designer to make an OS X or Linux version.
The cogs are gonna stay 2K. What's gonna happen is those will be used to run code out of HUB ram LMM style. Three ways to use a COG.
1. Run PASM on it directly, providing some very fast functions. Video, sound, serial, math...
2. Run the SPIN interpreter on it, supporting large programs that can see the whole HUB memory. This will be the slower option, but easy!
3. Run a PASM kernel, or "supervisor" on a COG that runs PASM code LMM style. That cog will fetch instructions, or groups of instructions from the HUB and execute them locally. Some new "opcodes" will be recognized by the kernel to perform things like jumps and potentially math, etc... so that the COG essentially becomes a CPU, running code out of HUB memory. This will be very fast on Prop II, but not as fast as native PASM will be. Code running this way will look a lot like it does on other multi-core CPUs.
The larger HUB memory space means other languages, like C, will then be useful on the Prop II, using all that has been learned on Prop I.
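As a rough illustration of option 3 above, here is a toy LMM-style fetch/dispatch loop in C (only a conceptual sketch; the real thing is a tight PASM loop, and the pseudo-opcodes here are invented for the example):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy LMM-style kernel: the "cog" repeatedly fetches a word from "hub"
   memory and either executes it or treats it as a kernel pseudo-op.   */
enum { OP_NOP, OP_ADD, OP_JMP, OP_HALT };   /* invented pseudo-opcodes */

static uint32_t hub[] = {                   /* tiny program in hub RAM */
    OP_ADD, 5,                              /* acc += 5                */
    OP_ADD, 7,                              /* acc += 7                */
    OP_JMP, 6,                              /* jump to word 6 (HALT)   */
    OP_HALT
};

int main(void)
{
    uint32_t pc = 0, acc = 0;
    for (;;) {
        uint32_t op = hub[pc++];            /* fetch from hub, like RDLONG */
        switch (op) {
        case OP_ADD:  acc += hub[pc++];        break;
        case OP_JMP:  pc   = hub[pc];          break;
        case OP_HALT: printf("acc=%u\n", acc); return 0;
        default:                               break;
        }
    }
}
```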
Thanks for the quick response. I've only been involved with this forum very recently (like, today), so it's very nice to have such a good-mannered and helpful community. Kudos to Parallax for having brought together so many people!
BTW, the Propeller somehow reminds me of the Cell architecture, which also has 8 SPUs, each with dedicated SRAM and access to main memory.
Let me help you... It's all about profit margins. (These are not evil things, they make it possible to write paychecks and stay in business)
The Propeller is far less expensive than the BASIC STAMP, which has been Parallax's core product. The Propeller and its successors are on the horizon. It takes a lot more Propellers sold to equal the sale of a single STAMP. If all of us were willing to purchase 10 Propellers every time we needed to purchase a single unit, then there would be no problem. Truth is, Parallax simply needs to sell more. A commercial arm of the company will permit us to "have our cake, and eat it too."
In short, how much does this new reality shorten the required design cycle, and can Parallax Semiconductor keep up?
Interesting question. I make the observation that the TTL family was introduced by Texas Instruments in 1966. The humble Signetics NE555 timer is from 1971. Meanwhile, in the analog world, the Fairchild uA741 was introduced in 1968.
All of these devices are still available in similar specs. and packages and they are still in widespread use, albeit in a multitude of variations using different more modern technologies. Presumably whoever has been making such things has been happy with the income they generate and users have been happy to continue demanding them.
One conclusion from this observation might be that if your product is simple enough, cheap enough and has a unique utility, then its position in the market is assured no matter what new, improved, all-singing, all-dancing, faster technology comes along.
Perhaps, dare I say, it's a mistake for Parallax to want to play with the big boys, a mistake to try to push the Prop II into the "big CPU" world. Perhaps the Propeller, like the 555, is now as perfect as it will ever need to be. Don't forget there's always a sea of sharks swimming out there, always chasing the market for bigger, better, faster.
Dave Hein,
The addressing range could be increased by using bank switching,
Nooooo.
The COG addressing range could be increased by moving to 64 bit COGs and a simple extension of the src and dest fields of the instructions. Giving another 16 bits for addressing or a 16 million long COG space!
That's what I want to see in the Cog III.
I think Parallax should get away from the 8-bit byte model and go with a 6-bit byte. A word would be 3 bytes, or 18 bits, and a long would be 36 bits. This would allow packing 6 truncated ASCII characters per long, which is much more efficient than putting 4 ASCII characters in a 32-bit long. We don't really need lower case characters. Upper case should be sufficient. After all, the Spin language is case insensitive.
The PASM instruction would be 36 bits instead of 32 bits. This would allow adding 2 bits to the source and destination addresses so that a cog could address 2048 36-bit longs. An 18-bit word would be able to directly address the 256K of hub RAM, or 128K RAM and 128K ROM, or some other 256K combination of RAM and ROM.
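Playing along with the arithmetic for a moment (a throwaway sketch; the 6-bit character code here is made up): six 6-bit characters fit exactly in a 36-bit long, versus four 8-bit characters in a 32-bit long. In C, using a 64-bit integer to hold the 36-bit value:

```c
#include <stdint.h>
#include <stdio.h>

/* Map 'A'..'Z' onto a made-up 6-bit code (1..26), then pack six such
   characters into one 36-bit "long" held in the low bits of a uint64_t. */
static uint64_t pack6(const char *s)
{
    uint64_t packed = 0;
    for (int i = 0; i < 6; i++)
        packed = (packed << 6) | ((uint64_t)(s[i] - 'A' + 1) & 0x3F);
    return packed;                      /* uses 36 of the 64 bits */
}

int main(void)
{
    uint64_t w = pack6("PARALL");       /* "PARALLAX" would need a second long */
    printf("packed = 0x%09llX (36 bits used)\n", (unsigned long long)w);
    return 0;
}
```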
If Parallax is opening up the semiconductor firm, that will be great.
The educational stuff provided there is awesome and fantastic. The Propeller is so fun to use and program!
Unfortunately, BASIC Stamps and Propellers are hard-to-reach stuff in SE Asia. I hope many schools in my area enjoy the Parallax products.
Mike actually said something along the same lines.
Hand-assembling code is a very good way to learn the instruction set of a new device. You ought to try it some time.
(ba-da-boom!!)
@Ross: Perfect!
LOL, I am right there with you on modules grafted onto a project. It is kind of like gluing a square peg on top of a round hole.
Bruce
Regarding Parallax needing to sell more: I never got the vibe that they were short of sales when I visited.
Peter