It loads about 35x-70x faster than Quartus' built-in programmer (3 seconds to load straight into the FPGA, 6 seconds to load into flash for cold booting). Our -A7 board will support all 16 cogs and 512KB hub RAM. The DE2-115 will fit ~12 cogs and 256KB hub RAM. All this memory and I/O bandwidth, plus hub exec, is going to be really fun to work with.
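A rough cross-check of that speedup claim, as a throwaway C sketch (the 3 s and 6 s figures are from the post above; the implied Quartus times are merely derived from the quoted 35x-70x range, not measured):

#include <stdio.h>

/* If a direct FPGA load takes ~3 s and that is 35x-70x faster than the
 * Quartus programmer, the implied Quartus load time is 105..210 s. */
int main(void) {
    const double fpga_load_s  = 3.0;   /* straight into the FPGA */
    const double flash_load_s = 6.0;   /* into flash, for cold booting */
    printf("Implied Quartus load time: %.0f..%.0f s\n",
           fpga_load_s * 35.0, fpga_load_s * 70.0);
    printf("Flash load for cold boot: %.0f s\n", flash_load_s);
    return 0;
}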
Thanks for your ongoing efforts Chip. It will indeed be really fun to work with.
Any idea what kind of boot time we'll be looking at from the flash?
communication with the most rabid of all Prop fans on the planet.
That made me smile!
I just noticed that "The New 16-cog" thread is over a year old. The FPGA image that was supposed to "take several weeks" will now have taken a year and two months (hopefully). And I said before that I was not waiting for the FPGA image. The long silence made me think not of inability or unwillingness, but of some secret shuttle run, and that the P2 was almost imminent (a cautious silence before the announcement). Now that just looks like the silly wishes of a rabid P2 fan :-) But at the time I thought that this was something within the capabilities of both Chip and the design house (2 months of Verilog + 10 months of FPGA conversion).
I also wrongly thought that this new P2 was supposed to be more similar to the P1+ than to the "Hot" P2, but it looks quite complex. I wonder whether it will actually be possible to make a 180 nm IC from a 16-cog design that needs a 28x28 mm Cyclone V made at 28 nm (or 12 cogs that just fit inside a 23x23 mm Cyclone IV made at 60 nm).
Dark silicon? I don't know if this is the key to meeting the power constraints. Otherwise, it could also mean an IC that will never see the light of day, waiting for one FPGA image after another till the end of days. I feel there are more people willing to spend $500 for a Cyclone V-A7 than people willing to spend $500 for a shuttle-run IC. Suppose the total amount is almost the same (75 boards x $500 vs. 75 packaged dies x $500) and we were given the option to choose: there is more fun playing with the FPGA than with the real IC beast. And I think that maybe the hobbyists that are still here do not mind a perpetual P2 development. Maybe this is the path for a great "education" project for the next 8 years: a soft-core "P" CPU.
The dark silicon material is very relevant. It does favor 16 simpler COGS, and the smart pins.
Power was a problem with the shuttle. An attempt to merge custom blocks with synthesis left shorts in the chip.
The outside power analysis done for the hot chip showed a clear need to reduce the number of gates flipping per cycle.
Some other early analysis was done to vet this design and understand the power constraints better.
Now the outside firm and Chip are putting it together, and it will be one unified design, meaning whole-chip checks can be done this time.
Deffo some process learning going on.
Some of the delay on this one is due to the change in plan to do Parallax FPGA boards, loader and release of P1 as open design.
Lots of good ideas in the hot chip. Some carry over to this one, while others are new or changed to reflect power considerations, and the HUB comms and power requirements there.
In the hot chip, the large and very active busses were a problem. This one does things differently to mitigate that.
Recall the thousand-post-long thread where hundreds of ideas were thrashed out with Chip, which led to a runaway in the complexity of the design as feature was piled upon feature, and which was finally canned because of thermal issues? That debacle.
That was no debacle. That is "what it takes". Proceeding BEFORE that discussion, and arriving at those decisions after an insufficient product was released, THAT would be a debacle.
Unfortunately, most companies and individuals get it backwards as you did. And, for example, find thermal issues in a product already in the hands of angry customers.
Which is cheaper, correcting an issue in the design phase, or correcting an issue in the field? Whatever the cost of the discussion period, in years and dollars, it is a fraction of the cost of producing and shipping a rejected product.
The data says the so called "debacle" likely saved Parallax a fortune.
A debacle is a debacle no matter how it is arrived at. In this case arriving at an unworkable design after months and years of forum member input. This is the first time in history such a thing has been attempted.
Now, to the credit of Chip and Ken and Parallax, that was the first experiment with "Design a CPU/MCU with community input all along the way" that I have ever heard of. Do you know of any others?
In a traditional company, thousands of man-hours and millions of dollars can be soaked up in projects that ultimately fail. The difference is that mostly people outside the company never get to hear about that. It's bad press. It's kept quiet. I have been involved in many such projects over the years. Debacles all of them.
And then, what about products that do actually make it out the door but fail spectacularly anyway? Like the Intel 432, the Intel i860, the Intel Itanium. Debacles all of them.
I do agree however, finding problems earlier rather than later is a good idea if you can do it.
Any idea what kind of boot time we'll be looking at from the flash?
With the two-stage boot, this should be variable, and limited by the SPI clock speed.
e.g. a 1 MB/s link, which is only an 8 MHz clock, can stream 512 KB in half a second.
The absolute shortest possible time-to-alive is likely dominated by SW delays/pauses, but if the loader chip controls RST from the data (i.e. avoids the OS pipeline delays that can occur now between handshake lines and data), then it should be possible to tune that boot-poll delay from the rather slow value it has on P1 to something sub-ms on a P2.
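To make the arithmetic above concrete, here is a small C sketch (the 8 MHz figure is from the example above; the higher clock rates are my own hypothetical values, just to show the scaling):

#include <stdio.h>

/* One bit per SPI clock, so bytes/second = clock/8. A 512 KB hub image
 * at 8 MHz (1 MB/s) streams in about half a second, as stated above. */
int main(void) {
    const double image_bytes = 512.0 * 1024.0;
    const double spi_clk_hz[] = { 8e6, 20e6, 50e6 };

    for (size_t i = 0; i < sizeof(spi_clk_hz) / sizeof(spi_clk_hz[0]); i++) {
        double bytes_per_s = spi_clk_hz[i] / 8.0;
        printf("%2.0f MHz SPI -> %.2f s to stream 512 KB\n",
               spi_clk_hz[i] / 1e6, image_bytes / bytes_per_s);
    }
    return 0;
}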
It also drives DACs at those rates and performs DDS/Goertzel operations. It uses a 256x32 look-up RAM for outputting pixel-type and DDS/Goertzel data.
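For anyone who hasn't met the Goertzel algorithm, here's a plain-software sketch of what it computes: the magnitude of a single frequency bin, cheap enough to update once per sample. This is only the textbook recurrence, for orientation; the P2's hardware block (LUT-driven, coupled to the pins and DACs) will certainly differ in detail:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Textbook single-bin Goertzel: a two-term IIR run over a sample block,
 * from which the magnitude of one chosen frequency falls out at the end. */
static double goertzel_mag(const double *x, int n,
                           double bin_hz, double sample_hz) {
    double coeff = 2.0 * cos(2.0 * M_PI * bin_hz / sample_hz);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        double s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2);
}

int main(void) {
    enum { N = 256 };                /* block size, arbitrary here */
    double x[N];
    for (int i = 0; i < N; i++)      /* synthesize a 1 kHz tone at 8 kHz */
        x[i] = sin(2.0 * M_PI * 1000.0 * i / 8000.0);
    printf("1 kHz bin: %.1f\n", goertzel_mag(x, N, 1000.0, 8000.0));
    printf("2 kHz bin: %.1f\n", goertzel_mag(x, N, 2000.0, 8000.0));
    return 0;
}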
Having Goertzel still in the mix is really very exciting. That's going to make all sorts of interesting real-world interactions possible. That part alone will be a lot of fun.
Is the goertzel block attached to the cog, or the "smart pins" ?
Has to be the Cog/DMA engine given SmartPins is last on the to-do list.
EDIT: In fact, at the moment, it must be just the four nearby fast pins and parallel data to the digital output ports. The question then becomes: what happens to concurrent OUTA/B Cog accesses on the same pins?
Referring to the post where you named the time spent on P2 development a "debacle". Unless somebody else used that name, yours was the first I saw.
A debacle is a debacle no matter how it is arrived at. In this case arriving at an unworkable design after months and years of forum member input. This is the first time in history such a thing has been attempted.
Determining that a design is unworkable BEFORE it goes into production is NEVER a debacle. Determining a design is unworkable after it has shipped as a product is always a debacle. Do not confuse these. Your examples demonstrate this point.
Designing the P2 is something that has never been done, so saying it should take X amount of time because so-and-so thinks it should is nonsense. This is the research part of R&D: figuring out what the design needs to be, and how long it takes to design it. Notice that the time for the design is in the past tense. We can only estimate based on how long this took LAST time, and since this is new, there is no last time. When Chip says "these are the features we finally want, and this is how long they took when we demo'ed them in FPGA", THEN we can expect a schedule that says it took X days last time, so it might take X days this time. But even then we have to allow additional time for the lessons learned the first time around.
In a traditional company, thousands of man-hours and millions of dollars can be soaked up in projects that ultimately fail. The difference is that mostly people outside the company never get to hear about that. It's bad press. It's kept quiet. I have been involved in many such projects over the years. Debacles all of them.
The quality literature is filled with examples on how and why projects fail, and how to avoid those errors. Saying we should ship because X time went by is part of the recipe for failure. Waiting until we know exactly what we want because we checked, and we know exactly how long it will take because we did it once before, is part of the recipe for success.
Keeping failure quiet allows management to be too proud to learn.
Even the most basic of PMs on a project like this, and almost anything smaller, would be having at least a weekly meeting tracking what is done, what isn't, milestones hit or roadblocks encountered, etc., etc.
Any of the knowledgeable engineers at Parallax involved in the P2 should be able to take some meeting minutes and draft up a simple post which would answer 80-90% of the big questions people have.
The last thing that Parallax needs at this point is for Chip to read any of the P2 suggestions on the forum. We saw what a debacle that was the last time he did that. IMO Parallax needs to figure out exactly what they want in the P2, and then they need to crank out the Verilog for it.
What I called a debacle wasn't the HOT P2 design. I am referring to the design-by-forum approach that was followed for several months. I'm sure you would agree that the best approach to any project is to set the goals and do the brain-storming at the beginning of a project. Brain-storming and changing design goals toward the end of the project just seems backwards to me.
Doug, I was the first one to use the term debacle in this thread.
What I called a debacle wasn't the HOT P2 design. I am referring to the design-by-forum approach that was followed for several months. I'm sure you would agree that the best approach to any project is to set the goals and do the brain-storming at the beginning of a project. Brain-storming and changing design goals toward the end of the project just seems backwards to me.
Well, as it turned out, it wasn't near the end of the project. The project still doesn't sound like it is anywhere near the end.
I'm not sure why you're arguing about the definition of debacle.
Dictionary defines it as "a sudden and ignominious failure; a fiasco".
The shuttle run seemed to be that in spades. The fact that Parallax ultimately shelved that design should ipso facto be the proof.
I'm not following the rest of your comment about shipping before ready/validated, etc. It seems off in the weeds a bit, as I don't recall anyone suggesting Parallax ship something broken. What I remember is some people asking for a simple relook at the P1 design, taking into account Ken's 5 Customer Wishes, and working on a more incremental P1.5 design instead of something almost all new. I believe the thinking was that approach would be vastly simpler, faster to market, and cheaper.
In retrospect, that might have been the better approach. However, the discussion seemed to be that the P16 was also supposed to be quite a bit simpler, with an FPGA image available in weeks, and things like HubExec not being seen as overly difficult.
I just hope whatever new features are being added which may have caused delay are actually seen as value add by the market.
Doug, I was the first one to use the term debacle in this thread.... Brain-storming and changing design goals toward the end of the project just seems backwards to me.
Apologies, I don't follow all of the thread very closely; but credit where it's due.
Saying a project should ship by X date based on nothing more than time spent is beyond foolish. If the design needs work, then NOW is always the time to address it. Fixing the design BEFORE the product ships is always better than trying to change it after the product ships. If the design takes longer, maybe it's a new design and requires additional effort. And as always, it takes as long as it takes. You can't expect nine women to make a baby in one month.
Parallax is doing exactly as they should, which is to truly understand the design and features before going into production. Whether the approach of allowing intimate input from customers is a good idea is the only question. (It seems like a good idea based on the data, but in practice it has additional considerations.) Since they have chosen this method, they need to follow it through, however long it takes. If anyone wants something sooner, we can go make our own with our own money.
... ignominious failure; a fiasco... The shuttle run seemed to be that in spades. The fact that Parallax ultimately shelved that design should ipso facto be the proof.
This is not a failure, this is development. The lesson learned should be: find the requirements, and stick to those. In any case, it takes as long as it takes.
If they created a part that no-one would use, that would be the disaster. Thoroughly investigating and refining the feature set is not a disaster.
It would be nice if they could make a great part for no time and no cost and no effort, but generally this type of expectation is the root cause of failure.
Saying a project should ship by X date based on nothing more than time spent is beyond foolish. If the design needs work, then NOW is always the time to address it. Fixing the design BEFORE the product ships is always better than trying to change it after the product ships. If the design takes longer, maybe it's a new design and requires additional effort. And as always, it takes as long as it takes. You can't expect nine women to make a baby in one month.
There are three basic modes:
1. Incremental. This is basically a revision, refresh, etc... some new engineering gets done, a lot gets refined, and most quantities are known. This product is built to well-honed and clear requirements, each of those linked clearly to market segments, revenue forecasts, etc...
2. Derivative. A product similar to an existing one, but not a revision. Less is known, more new engineering happens. This also is built to requirements, but there is more risk. Market segmentation may have happened, or may be incomplete. Revenue forecasts are in place.
3. New. The biggest difference between this and the other product modes is the market may not yet know it wants the product, but metrics, needs analysis, and other factors tell the product creator the need and value will be obvious when seen. Requirements here vary some, there is a lot of risk, and revenue forecasts are murky at best.
The P2 effort expectation is derivative, but the reality of it is new. That's the source of a lot of dialog here for sure. Given the dominant mode of this design is new, not derivative, the usual dates, metrics, etc... aren't going to apply and requirements may shift too, depending.
In my current role, I've got a few products in the queue. Some are incremental, they are boring, they have dates, times, spends, revenue. Misses on these can be directly linked to opportunity costs. The derivative ones are very similar, and that's due to the particular niche. We see some variance on these, but not much.
The derivatives and incremental products have dates and costs associated with not hitting those dates.
By far, the most expensive are the new products and processes. These have timelines measured in years. The biggest hurdle is qualifying materials, means, methods, technology. During this development, there are a number of predicted value points, and decision trees that will refine requirements as we find what is possible and practical. Early on, we really don't know! So we work on stuff, until we do know.
Finally, the term investment is used on all these modes. Really, for most derivatives and incremental products, it's a spend. Cost of doing business, keep the product relevant, etc... Risk is moderate, so are the returns. The big factor is actually opportunity costs!
Real investments that carry some risk and that present some significant rewards are always associated with new products. Like any investment, you need to read your prospectus.
More seriously, this kind of investment is difficult to fund through banks and VC / other kinds of money. Risks are too high, requirements unclear, returns not all that well quantified.
Now, it is possible to do a ton of work up front and define a new product such that it can get funded, etc... Sometimes doing that makes the best sense, but it comes with obligations, and its own costs and risks. Incremental funding, such as the kind Parallax is doing, carries far less risk, but it does add time. As capitalization happens, funds become available to advance the new product.
It's important to differentiate these from a lot of what I will call, "new product porn" kinds of write ups where they show a fast cycle, awesome engineering, killer expectations met, and so forth. Just know the vast majority of really new, not just derivative, doesn't work like that.
Another conversation might be whether or not P2 should have been more derivative. And this is up to Parallax, who need to make a value judgement. They will have funds available; do they put them on a modest-risk, modest-return path, or do they do what they did with P1 and try to take a bigger leap?
The latter path was chosen. So let's just deal with that, support 'em, and hope this gets done and it meets the moving expectations in play at any given time.
Parallax operates in a small niche in a small market. There is not much competition in their niche, so they can afford to take 8 years to develop a chip. However, Arduino and Raspberry Pi and Beagle Boards and other solutions must be having some impact on their sales. In a normal competitive situation time-to-market is imperative.
Most competitive companies brain-storm and develop a plan with a set of goals and milestones at the beginning of a project. As the project proceeds there are usually some adjustments. New features can be added, but it is disastrous to the project to add major new features and redesign the product in the later phases of the project.
On one of the new product paths, we are shooting for a process that doesn't yet exist. We think it's possible. We are probably going to spend on the order of 500 to 700K to find out. And it's gonna take a couple years too.
We may spend 300K of that only to find out it's not possible, or there will be some compromises. Depending on how that goes, we may simply walk from that money and carry on in some other direction.
And those numbers are cheap. Small company kinds of numbers. Bigger entities make much bigger new R&D spends and they make a lot of them and they know a whole bunch of those will go nowhere.
Now, given those dynamics, is it smart to take funding, or build as you can? There is no right answer here, just risks and outcomes and value judgements.
In a normal competitive situation time-to-market is imperative.
Frankly, this is nearly always true.
It can be satisfied in a number of ways too. Parallax originally had the goal of making P2, then doing what they normally do with Propellers. As the process advanced some, they needed to add "how do we do this, given who we are and what we have?" to the list, and that's fine. Then as it advanced some more, and some new ideas were tried, FPGA efforts became interesting.
And so that is coming to market now. Take the P1 code, sell boards, get some interest in following Parallax down this path, maybe contributing. In terms of this project, getting people on FPGA does a whole lot. We test, document, etc... and that's good, and it needs to happen eventually anyway.
In a real sense, Ken and friends likely saw that a small (relative to the current project timeline) delay to get the FPGA platform done actually has some positive impact on time to market, in that some work is going to get done in parallel, not serially.
Parallax operates in a small niche in a small market. .... However, Arduino and Raspberry Pi and Beagle Boards ... must be having some impact on their sales.
How big an impact? Very small? Parallax sells microcontrollers, not general-purpose Linux-running CPUs, no matter how much we argue and speculate. Arduino? Not multicore; a different niche, a different tool for a different function. Would Parallax like to sell Props to all the Arduino folks? Sure, I guess, but it won't happen until the Arduino folk can understand, use, and have a need for a multicore microcontroller. It ain't gonna happen soon. See the thread about the "simplest computer" linking to the hackaday "arduino killer" article, where the guy wants to show Arduino folks that an LED can be blinked with a 555 cheaper and easier than with an Arduino. Folks have got to get a little further down the road before they see those milestones.
Arduino and Raspberry Pi and Beagle Boards and other solutions must be having some impact on their sales. In a normal competitive situation time-to-market is imperative.
Is this a zero sum game? Is it so that every Arduino sold is a lost sale for Parallax? Is it so that a Raspi buyer now no longer needs a Propeller?
Or perhaps the huge growth in the Maker movement, of tinkering with robots, copters, IoT gadgets, and a general awareness of the possibilities of electronics and programming, spurred on by the Arduino and the Raspi, has actually grown the market enormously. Perhaps there are more people out there now who would even consider buying a Propeller I or II board than ever before.
I and others have suggested, ever since the Raspi arrived, that Parallax should make a Pi "plate": a Raspi-mountable Propeller board, to help people out when they realize their Raspi is not so cool for real-time, real-world interfacing.
Surely the Parallax educational side would also benefit from leveraging the educational work of the Pi foundation, from making the Prop easy to use with the Raspi.
It's clearly not a zero sum game. If Parallax lost a sale for every Arduino sold they would probably have negative sales. I'm sure Parallax has lost some Stamp and Prop sales to Arduinos and Pi's, but this has probably been offset by sales of quadcopters and other things. And maybe Parallax's niche has grown to compensate for sales lost to Arduinos and Pi's.
I'm not sure if Pi plates would be a good thing for Parallax. Given their limited resources it would probably suck away more time, money and people than it's worth. The P2 and maybe even a BS3 are probably the way to go. I wonder about the FPGA board that they have been developing. This seems to be a bigger effort than they anticipated, and they may not see much payback from it. Then again, there may be a big market for FPGA development boards, and it might attract a whole new crowd of customers to Parallax.
Parallax operates in a small niche in a small market. .... However, Arduino and Raspberry Pi and Beagle Boards ... must be having some impact on their sales.
It is a dynamic market, and the impact varies.
The RaspPi has a very large separation from anything Parallax does, and sales of those can only help Parallax.
Hence the effort to have Prop tools run on a RaspPi.
The Pi2 extends that more - it can now run quite serious tools (see the Lazarus thread & screen shots).
I wonder about the FPGA board that they have been developing. This seems to be a bigger effort than they anticipated, and they may not see much payback from it. Then again, there may be a big market for FPGA development boards, and it might attract a whole new crowd of customers to Parallax.
I doubt the FPGA board was ever there as a revenue stream - the purpose of that is to prove P2 designs before they run silicon, as well as seed future sales.
Imagine the cost of a P2 re-spin?
The FPGA board has the largest FPGA that Altera's WebPack supports, and it can swallow the complete P2 Verilog portion.
Of course, that leaves the custom-logic to prove too.
The Arduino market is essentially the same one as the BS2 market (beginners, educators, artists, kids): an entry point into the world of electronics and embedded computing. Remember, it was created by Banzi as a lower-cost BS2 replacement in Europe, and it took off. But since its introduction the Arduino ecosystem has grown and changed. There's the ARM-based Teensy; Energia, which supports the TI boards (MSP430, ARM and DSC); Pinguino, which supports 8-bit Microchip parts and the PIC32; then there's Mbed and all the various ARM boards it supports.
In short the Prop is now just one option of many for the Arduino user who needs more compute resources for their apps.
I doubt the FPGA board was ever there as a revenue stream - the purpose of that is to prove P2 designs before they run silicon, as well as seed future sales.
For sure but Dave might be right as well. While revenue is not the initial primary objective it does look like Parallax are making sure it's a future option without having to do yet another design.
EDIT: Here's an early document, dated 7 Mar 2012: http://forums.parallax.com/attachment.php?attachmentid=90354&d=1331095780
Any idea what kind of boot time we'll be looking at from the flash?
One to two seconds.
Unfortunately, most companies and individuals get it backwards as you did. And, for example, find thermal issues in a product already in the hands of angry customers.
What's with the "as you did." thing?
Angular, an open source Javascript framework, has their weekly meeting notes up here: https://docs.google.com/document/d/150lerb1LmNLuau_a_EznPV1I1UHMTbEl61t4hZ7ZpS0/edit It's interesting to note that the team is Google employees, and they're working on 2.0 in relative isolation, but they still publish the raw notes.
You mean like this?
"Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning."
Memo to Staff:
Due to mid-year budget forecasts, we are having a 15% cut across the board. Please adjust the above numbers accordingly.
(yeah, I've been with corporate America way too long!!)