Nice idea, but sadly I'm sure the answer is no. It's impossible. It makes no difference whether we are talking about porting applications from ARM or MIPS or any other architecture.
Let's think about an example: the STM32 is a very nice ARM System on a Chip (SoC). A nice fast (~100 MHz) ARM core, a ton of I/O pins, masses of hardware peripherals (SPI, I2C, UARTs, PWM, timers, etc.), and floating-point math instructions. The STM32 is very easy to use. For example, see these simple little boards: http://www.espruino.com/ http://micropython.org/
The STM32 is very cheap. A couple of dollars. Heck the dev board is only about 20 dollars.
Anyway what about moving code from that to a Propeller?
Well, if my app is written "down to the metal" it is full of hardware interfaces/drivers, and it probably uses interrupts. I cannot imagine any way that any tool is going to disentangle all of that and create anything that will run on a Propeller.
If my app uses a real-time operating system, or even just a hardware abstraction layer, we are in luck: then one only has to provide the same abstraction layer on the Propeller. Not easy, but perhaps doable. But which one? There are many, and I might be using my own. We are basically back to the same problem as above.
Finally if all that were possible I suspect performance on the Propeller would suck. Why am I using an STM32? Perhaps because it can churn through DSP work with its floating point and DSP support.
Yes, the speed limitations of Spin can be alleviated by using a good compiler: basically, don't use Spin, use C. But it's not going to catch up with that 2 dollar STM32.
I keep hearing that the Prop is slow compared to other microcontrollers, so in what applications does the Prop have an advantage?
Speed is not the only important thing in the world. If it were, you would be using an Intel i7 to flash a LED on your Arduino board!
Other important things to think about:
Ease of use, hardware and software.
Fast development time.
Flexibility: with no fixed hardware peripherals, a stock of Propellers can be used for many different tasks. One does not have to search for the exact PIC/Atmel/ARM that has the right combination of UART, SPI, I2C, etc.
Really low latency response to events on pins. Or, importantly, on many pins at the same time. A single core interrupt driven system cannot match that.
Confidence that the chip will be around for many years.
Let's face it, people love to be part of the herd. If they say they use an ARM they feel like they just about invented the whole thing themselves.
Parallax has traditionally been in the education business, so selling high-margin development boards and giving world-class support to education is their strength.
If you look at Spin from that perspective you can see it's a nice educational 'glue logic' language, but not a competitive general-purpose language that many professionals will want to invest in learning.
For there to be 'jobs' for the Propeller there need to be production-volume applications that use the part, and for that there needs to be a financial advantage to using it, because at the end of the day it is business calculus that selects silicon for volume customers.
Parallax might be quite happy to stay out of volume and keep to their educational advantages, but I can tell you why I selected the Propeller for my volume 'killer app' -- the Virtual Shield (apologies for the gratuitous self-promotion): http://forums.parallax.com/showthread.php/156000-Fastest-Possible-FIFO-Buffer?highlight=virtualshield
Very simply, for my application I needed a chip that could put any peripheral on any combination of pins, even for peripherals not invented yet (i.e. updatable firmware).
So I am using the Propeller as a soft-peripheral co-processor, and in this role, for applications that need this functionality, I think the Propeller and multi-core PASM are the world-class option for medium-volume applications - up to ~10K parts.
The alternative is an FPGA, with a correspondingly huge investment which, for me and I suspect many smaller shops, is out of reach.
If Parallax has big ambitions as a 'semi', what I would love to see would be for them to take a peripheral-less existing mainstream core like a PICmicro (OpenCores ae18?) and put it on-chip next to the Propeller core for peripheral handling, or alternatively license the Propeller as a soft-peripheral module to Microchip/Atmel.
A chip like that would give Parallax a shot at the big league - if you can't beat them, join them!
So I am using the Propeller as a soft-peripheral co-processor, and in this role, for applications that need this functionality, I think the Propeller and multi-core PASM are the world-class option for medium-volume applications - up to ~10K parts.
The alternative is an FPGA, with a correspondingly huge investment which, for me and I suspect many smaller shops, is out of reach.
Yes, well put.
That cheap and simple alternative to pulling out an FPGA is something many of us here have been proclaiming for many years.
The Propeller is not an ARM or Atmel or PIC. It is not in any way a general-purpose processor like those. I almost think it should not be compared to them in the normal MCU ways at all.
The Propeller is, however, a malleable chunk of logic that can be applied to many situations where one would like the programmability of an FPGA but does not need the speed, expense, massive complexity (in software and tools), and hardware complications of using an FPGA. And one can use the same familiar tools, C/C++, to get that logic done as one uses on the rest of the project.
In that space here is what the Propeller is up against:
Yesterday I found a video of the first commercial Propeller-based design I have ever seen outside of this forum. It's Connor Wulf showing off his Power Meter Project, where he is using a board originally designed to monitor hundreds of thermistors. http://www.youtube.com/watch?v=rdVj9eKRBrw&list=UUlUP3SbgT2LGKvHKXkZAOAA
In passing he mentions that he is considering redesigning that board using an XMOS multi-core chip because having to program in assembler to get anything realistic done sucks.
I worry about Parallax and its high-margin educational hardware and materials.
I am no educator, but given the task of introducing someone to programming and microcontrollers I would get them an Espruino board or STM32F4 Discovery board at 20 dollars a pop. By way of introducing programming to rank beginners, JavaScript or MicroPython make this dead easy. By way of introducing a bit of electronics and real-world interfacing, there are a ton of cheap peripherals that can be used, with ready-made code and nice simple documentation to drive them. All this can be done from a web IDE on whatever computer the kids have. For a bit more advanced work I'd move to C/C++.
I worry about Parallax's position when educators realize all this is possible very easily.
Is the Prop the first chip with integrated video?
No, many ARMs do have integrated video: both the M-series parts that are more targeted at embedded work and some A-series parts that typically run Linux. In both cases they also have built-in interfaces for external SDRAM or parallel RAM for video buffering.
I guess the comparison of the Prop to an FPGA is probably more accurate, though the tradeoff is a software solution versus a hardware design in an RTL language, which can be a bit daunting for someone learning electronics. And the software solution does restrict performance.
I always preferred letting the hardware do the hard work.
Where have you been? When ARM was created and ARM-based computers were launched, in the mid-to-late 1980s, they were the fastest PCs you could get. And 32-bit rather than 16.
Letting the hardware do the work is OK. But any amount of hardware logic gates has its equivalent in software. If you can run the software fast enough you don't need to build that special hardware. If you can use that functionality from your application, it matters not whether it is implemented in software or hardware.
Yeah, but the other side of that is that it probably takes at least 10 times more gates to implement a particular function in software. Of course, you can repurpose those gates for a different task, which you can't do with dedicated hardware. I wonder how much additional power a software peripheral requires compared with a hardware one? Everything is about low power and mobile these days. Is that a good match for soft peripherals?
Yeah but the other side of that is that it probably takes at least 10 times more gates to implement a particular function in software.
But it doesn't, really. It only takes a few thousand gates to build a general-purpose CPU core. One of the missteps with the P2 may have been making the cogs so powerful that they really were too expensive. But P1 cogs are very capable for their gate count, and it's a bigger problem that the only chip we have is on a 360 nm process, which is great for low-power operation but not so great for clock speed.
The Prop's flexibility has made it possible for the small company I work for to actually manufacture stuff even though we have none of the usual resources like, you know, engineers. We are selling little embedded boxes that are basically QuickStarts with shields and laser-printed enclosures for $REDACTED and getting away with it because other solutions cost, at best, the same and those solutions have operating systems and versioning and licensing issues which we can proudly claim to not have.
And it gives us freaking amazing flexibility. Want a box with 20 serial ports? One prop. Want 3 VGA outputs instead? One prop. Want a full user interface with video display, keyboard, serial, and ethernet? One prop. Want to gather data from 100 SPI peripherals, and go to low power mode when not active? One prop.
I can do all those applications without reading a single new data sheet or going to a catalog, because I already know the data sheet and where to order the part. That's the advantage of the Propeller. Other chips will always be better than it at something, but nothing will be better at everything it is good at.
Yeah but the other side of that is that it probably takes at least 10 times more gates to implement a particular function in software.
I don't buy it either. Why do you think so many early processors were microcoded? It was to save silicon real estate (at the expense of compute speed). PASM has a lot in common with microcoding, which is really nice for implementing soft peripherals.
Everything is about low power and mobile these days. Is that a good match for soft peripherals?
Almost certainly not.
...probably takes at least 10 times more gates to implement a particular function in software...
I wonder...
My first reaction was to think that you are seriously underestimating by a couple of orders of magnitude.
Starting with the simplest possible function, the humble NOT gate or inverter: in hardware that amounts to perhaps just a single transistor. Clearly, using a whole 32-bit CPU to perform that NOT function is using tens of thousands more transistors than required. If my program does nothing but perform the NOT function, that is massively wasteful of transistors and hence of energy.
OK, let's go to the other extreme. A really big program. A million lines of code say. What then?
Well, we could say that each of those million lines of code is itself a little function: it does something to something. It has input, processing, and output. Many lines of code will of course perform more than one little function, an expression with more than one term for example. But never mind, the argument is the same.
So now we are doing a million little functions using the same number of transistors as we used for that NOT gate, plus of course the memory space required to hold the program and data.
We are not doing those million little functions at the same time of course, as we could do in hardware, but we get them done fast enough to achieve what we want.
Presumably, as our overall function or task gets bigger, at some point we are using fewer transistors doing it in software than we would doing it in hardware.
How big does a piece of software have to be to justify itself in terms of saving transistors?
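For fun, here is a toy break-even model of that question. Every constant is made up purely for illustration: a small 32-bit core at ~50k transistors, ~6 transistors per SRAM bit, and ~500 transistors per dedicated-hardware "little function". The point is only the shape of the tradeoff: a fixed CPU cost amortized over many cheap-to-store instructions.

```python
# Toy break-even model: transistors for software vs. dedicated hardware.
# All the constants below are loose assumptions, made up for illustration.

CPU_CORE = 50_000          # fixed cost of a small 32-bit processor core
PER_INSTRUCTION = 6 * 32   # SRAM transistors to store one 32-bit instruction
PER_HW_FUNCTION = 500      # dedicated-logic cost of one small function

def software_cost(n_functions):
    # one instruction per little function, plus the fixed CPU
    return CPU_CORE + n_functions * PER_INSTRUCTION

def hardware_cost(n_functions):
    return n_functions * PER_HW_FUNCTION

# Find the break-even point where software stops being the expensive option:
n = 1
while software_cost(n) > hardware_cost(n):
    n += 1
print(n)  # 163 with these made-up numbers: a few hundred "functions"
```

So under these (entirely invented) numbers the software approach pays for its CPU after a couple of hundred little functions, which is tiny compared to any real program.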
I don't buy it either. Why do you think so many early processors were microcoded? It was to save silicon real estate (at the expense of compute speed). PASM has a lot in common with microcoding, which is really nice for implementing soft peripherals.
-Phil
Other factors - if you had a loadable control store then you could add/fix instructions after tapeout, or if a ROM/PLA array was used then you could easily spin a chip without a full-layer spin, logic synthesis back then was fairly primitive and this meant that you didn't need to synthesize the control logic. Maybe it was more approachable work than logic design too, but still it was a bit of a specialty. (And there were microcoded implementations that were faster than the hardcoded ones - but I'm thinking more of large PCB emulations of processors not small ICs.)
Let's say that Parallax had a business model more like ARM and sold IP cores rather than chips. (And had the vast ecosystem of customers producing variants.) Would everyone choose to instantiate 8 COGs, or would they substitute dedicated hardware for features like UARTs? How does the power consumption of a COG compare to a dedicated UART? Power consumption and cost are very important these days in the high-volume consumer space. What clock speed does a COG need to run a 1 MBit UART with 16x oversampling?
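On that last question, a back-of-envelope answer, assuming P1-style timing (4 clocks per instruction) and a hypothetical bare-minimum service loop of 3 instructions per sample; both figures are assumptions for illustration, not measurements:

```python
# What clock does a P1-style cog need to oversample a 1 Mbit/s UART at 16x?
# Assumption: most P1 instructions take 4 clocks, so MIPS = clock / 4.

def min_cog_clock_hz(bit_rate, oversample, instructions_per_sample,
                     clocks_per_instruction=4):
    """Minimum cog clock to take `oversample` samples per bit, spending
    `instructions_per_sample` instructions on each sample."""
    samples_per_second = bit_rate * oversample
    return samples_per_second * instructions_per_sample * clocks_per_instruction

# Even a bare-minimum loop needs far more than the P1's 80 MHz:
print(min_cog_clock_hz(1_000_000, 16, 3))   # 192000000, i.e. 192 MHz
# Dropping to 4x oversampling brings it within reach:
print(min_cog_clock_hz(1_000_000, 4, 3))    # 48000000, i.e. 48 MHz
```

Which suggests a stock P1 cog cannot do a 16x-oversampled 1 Mbit UART in a straightforward polling loop, though lighter sampling schemes could fit.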
I don't think that soft peripheral design is as trivial as people seem to imply. Maybe I'm misreading and it's the flexibility that's emphasized. Over the years people have found issues with some of the soft peripherals (e.g. excessive jitter), and I believe you can find Kye saying how difficult it was for him to develop his full-duplex serial object. This stuff is trivial in hardware and quite small - assuming you are DMAing to/from RAM with a simple ring-buffer scheme, it's the RAM that's going to dominate. A hardware Fm+ I2C slave that operates at 1 Mbit and meets all of the NXP timing parameters is pretty easy to develop from scratch - can the Prop do this? Nowadays our customers want HS I2C, e.g. 3.4 Mbit, and I don't know that the Prop can do that. I don't think it is much harder in hardware, but I haven't actually done it. Where I work, all of our high-volume chips have custom analog cores alongside the ARMs as well.
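On the HS I2C point, a rough instruction budget, assuming an 80 MHz cog at 4 clocks per instruction (20 MIPS); the numbers hint at why 3.4 Mbit/s looks out of reach in software while 1 Mbit/s Fm+ is at least conceivable:

```python
# Rough instruction budget per bus bit for a P1-style cog.
# Assumptions: 80 MHz cog clock, 4 clocks per PASM instruction.

def instructions_per_bit(bus_bit_rate, cog_clock_hz=80_000_000,
                         clocks_per_instruction=4):
    mips = cog_clock_hz / clocks_per_instruction
    return mips / bus_bit_rate

print(round(instructions_per_bit(3_400_000), 1))  # 5.9 instructions per bit (HS)
print(round(instructions_per_bit(1_000_000), 1))  # 20.0 per bit (Fm+)
```

About six instructions per bit leaves essentially no room to detect edges, shift data, handle ACKs, and meet setup/hold times, whereas twenty per bit is tight but plausible.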
This isn't immediately relevant, but some people want Parallax to have ARM like volume, so I'll mention it. Assume that you've caught someone's eye, and now the company is going to approve the part for use. The high-volume customers ask many difficult questions during chip selection that practically require you to have semiconductor physicists or similar on staff. As a really simple starter type question - I haven't seen Parallax discuss things like manufacturing test coverage metrics in the forums, but maybe Parallax Semi made these available to customers. Is this coverage known? If not how do they guarantee the defects-per-million? This isn't usually info that's published on websites. Could Parallax ramp their manufacturing to support high-volume? Do they have second sources for wafers and packaging? Hopefully all solvable.
Considering that four of them can be implemented by a single cog with just 512 program instructions, obviously not many.
You still didn't answer the question. How many gates is a COG including COG RAM? How many gates does it take to implement four UARTs in hardware? Which is larger and by how much? Also, I think some standard UART functions are difficult or impossible to implement in PASM, for example, hardware flow control.
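For what it's worth, here is a toy version of that comparison. Every constant is a loose assumption for illustration only (~6 transistors per SRAM bit, ~4 per logic gate, ~10k gates for a cog's datapath and control, ~1k gates per hardware UART); with these made-up numbers the cog RAM dominates, and the "roughly 10 times" guess doesn't look crazy:

```python
# Toy transistor-count comparison: one cog vs. four hardware UARTs.
# All constants are made-up assumptions for illustration.

TRANSISTORS_PER_SRAM_BIT = 6
TRANSISTORS_PER_GATE = 4

def cog_transistors(core_gates=10_000, ram_words=512, word_bits=32):
    ram = ram_words * word_bits * TRANSISTORS_PER_SRAM_BIT
    return core_gates * TRANSISTORS_PER_GATE + ram

def uart_transistors(gates_per_uart=1_000, count=4):
    return gates_per_uart * TRANSISTORS_PER_GATE * count

print(cog_transistors())   # 138304: the 512 x 32 cog RAM dominates
print(uart_transistors())  # 16000: four hard UARTs are much smaller
```

So roughly 8-9x with these invented figures, most of it memory rather than logic; the cog buys flexibility for that premium.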
Also, I think some standard UART functions are difficult or impossible to implement in PASM, for example, hardware flow control.
Hardware flow control should be trivial to implement in PASM. The PASM code just needs to check the state of a control line before it starts sending the next byte.
And check the input buffer fill level and assert the control line out as necessary.
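The logic described in those two replies can be sketched in a few lines; this is Python standing in for PASM, with the pin reads/writes replaced by plain functions purely to show the control flow:

```python
# Sketch of RTS/CTS flow control as a soft UART would do it.
# cts_is_clear and send_byte are stand-ins for real pin I/O.

def send_with_cts(data, cts_is_clear, send_byte):
    """Send each byte only when the CTS line reports clear-to-send."""
    for byte in data:
        while not cts_is_clear():   # busy-wait, as the PASM loop would
            pass
        send_byte(byte)

def rts_state(buffer_fill, buffer_size, high_water=0.75):
    """Receive side: deassert RTS (stop the sender) once the buffer
    is mostly full."""
    return buffer_fill < buffer_size * high_water

sent = []
send_with_cts(b"hi", cts_is_clear=lambda: True, send_byte=sent.append)
print(sent)  # [104, 105]
```

The real PASM version is just the same two checks folded into the byte-level transmit and receive loops.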
At the end of the day it does not matter how a hardware UART compares to a COG.
If we throw out the COG we have a less flexible device. We can no longer do UART today, SPI tomorrow, or whatever we need.
If we keep all the COGs and start adding SPI, UART, PWM, etc., we blow the transistor budget, at least in the P1. And we end up with an ARM-SoC-like device with a 3000-page data sheet.
Hardware flow control should be trivial to implement in PASM. The PASM code just needs to check the state of a control line before it starts sending the next byte.
Yeah, I realized that after I made my post. I think there were some things that were difficult in software though. I've been told that SPI slave is difficult to implement in software. Is that actually true?
And check the input buffer fill level and assert the control line out as necessary.
At the end of the day it does not matter how a hardware UART compares to a COG.
If we throw out the COG we have a less flexible device. We can no longer do UART today, SPI tomorrow, or whatever we need.
If we keep all the COGs and start adding SPI, UART, PWM, etc., we blow the transistor budget, at least in the P1. And we end up with an ARM-SoC-like device with a 3000-page data sheet.
I'm certainly not arguing against soft peripherals or that we should remove the COG. We wouldn't have a Propeller without the COG! I was just pointing out that there is a price to be paid for implementing peripherals in software. It takes many more gates and maybe more power. There is no substitute for the COG or something like it if you need to interface to non-standard hardware. That is a big strength of the Propeller.
Yeah, I realized that after I made my post. I think there were some things that were difficult in software though. I've been told that SPI slave is difficult to implement in software. Is that actually true?
I hear you when you describe the differences between the Prop and other MCUs. What would have happened if Parallax had developed a multi-core AVR or PIC? Is it possible that the adoption rate is small because a small company developed the Prop, compared to the competition?
Let me explain more clearly what I am thinking. In my experience, successful companies have more of the following characteristics:
- Strategic business model
- Ecosystem
--- Supporting environment
--- Defacto standard
--- Willing labor pool
--- Fundamental building blocks
- Constituencies
--- OEMs
--- Application developers
--- Consultants
--- End users
- Great marketing
- Profitable business model
- Solves a problem
Parallax clearly has some of the above, but, do you think the competition has more of the above?
Upthread I mentioned little boxes made by my company with VGA, ethernet, and two serial ports. Today we were scoping what might be a very large job opportunity, and we realized that in some situations what we really need is a wider serial data concentrator before we route to ethernet. I said hey, no problem -- for about a day's work I can design a new box with four serial ports instead of the VGA, and it won't even need a new enclosure. It won't need new external UART chips, because few CPUs come with four built-in UARTs, and it won't need complicated interrupt programming to make sure all those ports work right at full speed.
Unless you are manufacturing millions of units and competing with China, the extra cost of a Propeller over a more targeted chip with custom silicon to do a similar job is irrelevant, and the ability to do such special versions without special hardware can be a far more useful feature.
Parallax is not a semiconductor company? They are a "learning" company as far as I can see.
The Prop is very interesting, with many very dedicated people making amazing projects.
However, the Prop is many years old without a viable follow-on. It was designed around a language interpreter to run large programs, and nowadays commercial applications use big programs.
If the Prop was made by a semiconductor company then it would have flash, much more RAM, and a native assembly-level address space in the megabytes. It would have a follow-on product line also.
The fact they wasted so much time on the Prop II shows they are not in the semiconductor business. It is a hobby of a very bright employee who gets to try new things out. A very good gig for a learning company, and I think it is great that the Parallax owners encourage such adventures. But we the users should not expect unrealistic "jobs" from their efforts.
Parallax designed and supplies an 8-core 32-bit MCU. They have customers for said device. They are as much a semiconductor company as ARM, Atmel, Microchip, and even AMD. None of whom actually make semiconductors.
If the Prop was made by a semiconductor company then it would have flash, much more RAM, and a native assembly-level address space in the megabytes.
And quite likely, if the Prop were made by such a company, there would be no Prop. They would just be making another common-or-garden "me too" MCU like the one you describe. There is no point in Parallax making such a device; the market is totally flooded with them already and the vendors are in a race to the bottom on pricing.
It's a bit harsh to describe the Prop II as a waste of time. Let's see what we get first.
But yes, overall, given the volumes we have seen to date, expecting Propeller job postings is very optimistic.
I am curious what the potential is for a startup to park a few guys in a room and do nothing but learn the P1/P2 and offer hourly coding and consultation. There are many times I would have much preferred to call a number and give a credit card to solve a code question versus waiting on an answer from the forum. I would assume that there are also companies out there that would enjoy being able to send out some code and give some instructions to add/change/delete, etc. The other idea is to offer dedicated P1/P2 embedded services including PCB design and low-volume assembly. This would be the ultimate for fast-tracking an idea to product/prototype.
I don't think there is a big enough market to support such an endeavor.
I've done a few jobs with the Propeller. They were small production run jobs though. If I need a small number of devices to be made and the priority is to get it to market quickly then I'll choose the Prop unless there is something preventing me from using it.
Reasons that I think that it isn't used too often:
* It is relatively slow for computing tasks.
* It has no multiply instruction
* For a long time it didn't have a C compiler, and I'm not sure what the state of this is now. I tried the C compiler about a year ago and wasn't satisfied with it: the documentation was basically non-existent, and it was doing crazy things like optimizing away code that needed to run. Hopefully it is better now. A good C compiler would go a long way towards making it more commonly used.
* You can't protect your code unless you put epoxy or something over top of your circuit board.
* It's 'weird'. Having no interrupts and eight cores was a head-scratcher when I started using it.
* It has a low pin count.
Despite all of those issues, I absolutely love the propeller because:
* I don't have to dig through the data sheet to figure out how to configure a peripheral. I just write one of my own.
* I can have as many 'peripherals' as I want because it is so flexible.
* VGA. (OK, yea, it does have a couple of peripherals but they are really simple)
* NTSC.
* I don't have to spend hours figuring out just exactly which part variant to use.
* Any pin can do anything.
* Free development environment.
* Excellent documentation.
* The assembly language is *wonderful* to work in. It's super easy to understand compared to other assembly languages.
Comments
Nice idea, sadly I'm sure the answer is no. It's impossible. Makes no difference if we are talking about importing applications from ARM or MIPS or any other architecture.
Let's think about an example: The STM32 is a very nice ARM System On a Chip (SoC). Nice fast (~100MHz) ARM core, a ton of I/O pins, masses of hardware peripherals, SPI, I2C, UARTs, PWM, timers etc. Floating point math instructions. The STM32 is very easy to use. For example see these simple little boards:
http://www.espruino.com/
http://micropython.org/
The STM32 is very cheap. A couple of dollars. Heck the dev board is only about 20 dollars.
Anyway what about moving code from that to a Propeller?
Well if my app is written "down to the metal" it is full of hardware interfaces/drivers, it probably uses interrupts. I cannot imagine anyway that any tool is going to disentangle all of that and create anything that will run on a Propeller.
If my app uses a real-time operating system or even just a hardware abstraction layer we are in with more luck. Then one only has to provide the same abstraction layer on the Propeller. Not easy but perhaps doable. But which one? There are many. I might be using my own. We are basically back to the same problem as above.
Finally if all that were possible I suspect performance on the Propeller would suck. Why am I using an STM32? Perhaps because it can churn through DSP work with its floating point and DSP support.
Yes, the speed limitations of SPin can be alleviated by using a good compiler. Basically don't use Spin use C. But it's not going to catch up with that 2 dollar STM.
Speed is not the only important thing in the world. If it were you would be using a Intel i7 to flash a LED on your Arduino board!
Other important things to think about:
Ease of use, hardware and software.
Fast development time.
Flexibility, with no fixed hardware peripherals a stock of Propellers can be used for many different tasks. One does not have to search for that exact PIC/ATMEL/ARM that has the right combination of UART, SPI, I2C etc etc.
Really low latency response to events on pins. Or, importantly, on many pins at the same time. A single core interrupt driven system cannot match that.
Confidence that the chip will be around for many years.
I could go on.
Parallax has traditionally been in the education business and so selling high margin development boards and giving world class support to education is their strength.
If you look at SPIN from that perspective you can see it's a nice educational 'glue logic' language but not a competitive general purpose language that many professionals will want to invest in learning
For there to exist 'jobs' for propeller there needs to exist production volume applications that use the part and for this there needs to be a financial advantage to use the part because at the end of the day it is a business calculus that selects silicon for volume customers.
Parallax might be quite happy to stay out of volume and keep with their educational advantages but I can tell you why I selected the propeller for my 'volume killer! app' -- the Virtual Shield (apologies for gratuitous self promotion )
http://forums.parallax.com/showthread.php/156000-Fastest-Possible-FIFO-Buffer?highlight=virtualshield
Very simply, for my application, I needed a chip that could put any peripheral on any combination of pins even for peripherals not invented yet ( ie updatable firmware )
So I am using the propeller as a soft-peripheral co-processor and in this function for applications that need this functionality I think the propeller and multi-core PASM is the world class option for medium volume applications - up to ~10K parts.
The alternative is a FPGA with a correspondingly huge investment which for me and I suspect many smaller shops is out of reach.
If parallax have big ambitions as a 'semi' - what I would love to see from parallax would be for them to take a peripheral-less existing mainstream core like a picmicro (opencore ae18?) and put it onchip next to the propeller core for peripheral handling or alternatively license the propeller as a soft-peripheral-module to microchip/Atmel
A chip like that would give parallax a shot at the big-league - if you can't beat them, join them!
That cheap and simple alternative to pulling out an FPGA is something many of us here have been proclaiming for many years.
The Propeller is not an ARM or ATMEL or PIC. It is not in anyway a general purpose processor like that. I almost think they should not be compared at all in the normal MCU comparison ways.
The Propeller is however a malleable chunk of logic that can be applied to many situations where one would like the programmability of an FPGA but does not need the speed, expense, massive complexity (in software and tools) and hardware complications of using an FPGA. Where one can use the same familiar tools to get that logic done, C/C++, as one uses on the rest of the project.
In that space here is what the Propeller is up against:
Yesterday I found a video of the first commercial Propeller using design I have ever seen outside of this forum. It's Connor Wulf showing off his Power Meter Project where he is using a board originally designed to monitor hundreds of thermistors. http://www.youtube.com/watch?v=rdVj9eKRBrw&list=UUlUP3SbgT2LGKvHKXkZAOAA
In passing he mentions that he is considering redesigning that board using an XMOS multi-core chip because having to program in assembler to get anything realistic done sucks.
I worry about Parallax and it's high margin educational hardware and materials.
I am no educator but given the task of introducing someone to programming and micro controllers I would get them an Espruino board or STM32F4 Discovery board at 20 dollars a pop. By way of introducing programming to rank beginners JavaScript or MicroPython make this dead easy. By way of introducing a bit of electronics and real-world interfacing there are a ton of cheap peripherals that can be used with ready made code and nice simple documentation to drive them. All this can be done from a WEB IDE on whatever computer the kids have. For a bit more advanced work I'd move to C/C++.
I worry about Parallax's position when educators realize all this is possible very easily.
That's like comparing a Moped to a Super car.
Is the Prop the first chip with integrated video?
I guess the comparison of the Prop to FPGA is probably more accurate, though the tradeoff is a software solution vs a hardware design solution in an RTL language which can be a bit daunting for someone learning electronics. But the software solution does restrict performance.
I always preferred letting the hardware do the hard work.
Where have you been? When the ARM was created and ARM based computers were launched,about 1983, they were the fastest PCs you could get. And 32 bit rather than 16.
Letting the hardware do the work is OK. But any amount of hardware logic gates has it's equivalent in software. If you can run the software fast enough you don't need to build that special hardware. If you can use that functionality from you application it matters not if it is implemented in software or hardware.
But it doesn't, really. It only takes a few thousand gates to build a general purpose CPU core. One of the missteps with the P2 may have been making the cogs so powerful that they really were too expensive. But P1 cogs are very capable for their gate count, and it's a bigger problem that the only chip we have is on a 360 nm process, which is great for low-power operation but not so great for clock speed.
The Prop's flexibility has made it possible for the small company I work for to actually manufacture stuff even though we have none of the usual resources like, you know, engineers. We are selling little embedded boxes that are basically QuickStarts with shields and laser-printed enclosures for $REDACTED and getting away with it because other solutions cost, at best, the same and those solutions have operating systems and versioning and licensing issues which we can proudly claim to not have.
And it gives us freaking amazing flexibility. Want a box with 20 serial ports? One prop. Want 3 VGA outputs instead? One prop. Want a full user interface with video display, keyboard, serial, and ethernet? One prop. Want to gather data from 100 SPI peripherals, and go to low power mode when not active? One prop.
I can do all those applications without reading a single data sheet or going to a catalog because I know the data sheet and where to order it. That's the advantage of the Propeller. Other chips will always be better than it at something, but nothing will be better than it is at everything it is good at.
-Phil
My first reaction was to think that you are seriously underestimating by a couple of orders of magnitude.
Starting with the simplest possible function, the humble NOT gate or inverter. In hardware that amounts to perhaps just a single transistor. Clearly using a whole 32-bit CPU to perform that NOT function is using tens of thousands more transistors than required. If my program does nothing but perform the NOT function, that is massively wasteful of transistors and hence energy consumption.
OK, let's go to the other extreme. A really big program. A million lines of code say. What then?
Well, we could say that each of those million lines of code is itself a little function: it does something to something. It has input, processing and output. Many lines of code of course will perform more than one little function, an expression with more than one term for example. But never mind, the argument is the same.
So now we are doing a million little functions using the same number of transistors as we used for that NOT gate, plus of course the memory space required to hold the program and data.
We are not doing those million little functions at the same time of course, as we could do in hardware, but we get them done fast enough to achieve what we want.
Presumably as our overall function or task gets bigger, at some point we are using fewer transistors doing it in software than we would doing it in hardware.
How big does a piece of software have to be to justify itself in terms of saving transistors?
Other factors - if you had a loadable control store then you could add/fix instructions after tapeout, or if a ROM/PLA array was used then you could easily spin a chip without a full-layer spin, logic synthesis back then was fairly primitive and this meant that you didn't need to synthesize the control logic. Maybe it was more approachable work than logic design too, but still it was a bit of a specialty. (And there were microcoded implementations that were faster than the hardcoded ones - but I'm thinking more of large PCB emulations of processors not small ICs.)
Let's say that Parallax had a business model more like ARM and sold IP cores rather than chips. (And had the vast ecosystem of customers producing variants.) Would everyone choose to instantiate 8 COGs, or would they substitute dedicated hardware for features like UARTs? How does the power consumption of a COG compare to a dedicated UART? Power consumption and cost are very important these days in the high-volume consumer space. What clock speed does a COG need to run a 1 MBit UART with 16x oversampling?
I don't think that soft peripheral design is as trivial as people seem to imply. Maybe I'm misreading and it's the flexibility that's emphasized. Over the years people have found issues with some of the soft peripherals (e.g. excessive jitter), and I believe you can find Kye saying how difficult it was for him to develop his full duplex serial object. This stuff is trivial in hardware and quite small - assuming you are DMAing to/from RAM with a simple ring buffer scheme, it's the RAM that's going to dominate. A hardware Fm+ I2C slave that operates at 1 Mbit and meets all of the NXP timing parameters is pretty easy to develop from scratch - can the Prop do this? Nowadays our customers want HS I2C (3.4 Mbit), and I don't know that the Prop can do that. I don't think it's much harder in hardware, but I haven't actually done it. Where I work, all of our high-volume chips have custom analog cores alongside the ARMs as well.
This isn't immediately relevant, but some people want Parallax to have ARM like volume, so I'll mention it. Assume that you've caught someone's eye, and now the company is going to approve the part for use. The high-volume customers ask many difficult questions during chip selection that practically require you to have semiconductor physicists or similar on staff. As a really simple starter type question - I haven't seen Parallax discuss things like manufacturing test coverage metrics in the forums, but maybe Parallax Semi made these available to customers. Is this coverage known? If not how do they guarantee the defects-per-million? This isn't usually info that's published on websites. Could Parallax ramp their manufacturing to support high-volume? Do they have second sources for wafers and packaging? Hopefully all solvable.
Considering that four of them can be implemented by a single cog with just 512 program instructions, obviously not many.
At the end of the day it does not matter how a hardware UART compares to a COG.
If we throw out the COG we have a less flexible device. We can no longer do a UART today and SPI tomorrow, or whatever we need.
If we keep all the COGs and start adding SPI, UART, PWM etc. we blow the transistor budget, at least in the P1. And we end up with an ARM-SoC-like device with a 3000-page data sheet.
20 clock cycles per bit is roughly the fastest an SPI slave can go on a P1, if we are waiting for both edges of the SPI clock.
If we KNOW the SPI frequency, we can cheat and wait for only one edge, getting us down to 14 clock cycles per bit.
Neither of the above has time to look for a de-asserted CS.
80 MHz / 20 = 4 Mbps max SPI slave
80 MHz / 14 = 5.7 Mbps max SPI slave
We can get to 5 Mbps / 7.1 Mbps by running the P1 at 100 MHz.
Let me explain more clearly what I am thinking. In my experience, successful companies have more of the following characteristics:
- Strategic business model
- Ecosystem
--- Supporting environment
--- Defacto standard
--- Willing labor pool
--- Fundamental building blocks
- Constituencies
--- OEMs
--- Application developers
--- Consultants
--- End users
- Great marketing
- Profitable business model
- Solves a problem
Parallax clearly has some of the above, but, do you think the competition has more of the above?
Unless you are manufacturing millions of units and competing with China, the extra cost of a Propeller over a more targeted chip with custom silicon for a similar job is irrelevant, and the ability to do such special versions without special hardware can be a far more useful feature.
Parallax is not a semiconductor company? They are a "learning" company as far as I can see.
The Prop is very interesting, with many very dedicated people making amazing projects.
However the Prop is many years old without a viable follow-on. It was designed around a language interpreter to fit large programs, and nowadays commercial applications use big programs.
If the Prop was made by a semiconductor company then it would have flash, much more RAM, and a native assembly-level address space in the megabytes. It would have a follow-on product line too.
The fact they spent so much time on the Prop II shows they are not in the semiconductor business. It is a hobby of a very bright employee who gets to try new things out. A very good gig for a learning company, and I think it is great that the Parallax owners encourage such adventures. But we the users should not expect unrealistic "jobs" from their efforts.
cheers,
richard
Parallax designed and supplies an 8-core 32-bit MCU. They have customers for said device. They are as much a semiconductor company as ARM, ATMEL, MicroChip, and even AMD, none of whom actually make semiconductors. And quite likely if the Prop had been made by such a company there would be no Prop. They would just be making another common or garden "me too" MCU like the one you describe. There is no point in Parallax making such a device; the market is totally flooded with them already and the vendors are in a race to the bottom on pricing.
It's a bit harsh to describe the Prop II as a waste of time. Let's see what we get first.
But yes, overall, given the volumes we have seen to date, expecting Propeller job postings is very optimistic.
I don't think there is a big enough market to support such an endeavor.
Reasons that I think that it isn't used too often:
* It is relatively slow for computing tasks.
* It has no multiply instruction.
* It didn't have a C compiler. I'm not sure what the state of this is now. I tried the C compiler about a year ago and wasn't satisfied with it. The documentation was basically non-existent and it was doing crazy things like optimizing away code that needed to run. Hopefully it is better now. A good C compiler would go a long way towards making it be used more commonly.
* You can't protect your code unless you put epoxy or something over top of your circuit board.
* It's 'weird'. Having no interrupts and eight cores was a head-scratcher when I started using it.
* It has a low pin count.
Despite all of those issues, I absolutely love the propeller because:
* I don't have to dig through the data sheet to figure out how to configure a peripheral. I just write one of my own.
* I can have as many 'peripherals' as I want because it is so flexible.
* VGA. (OK, yea, it does have a couple of peripherals but they are really simple)
* NTSC.
* I don't have to spend hours figuring out just exactly which part variant to use.
* Any pin can do anything.
* Free development environment.
* Excellent documentation.
* The assembly language is *wonderful* to work in. It's super easy to understand compared to other assembly languages.