Prop-2 Release Date and Price - Page 2 — Parallax Forums

Prop-2 Release Date and Price


Comments

  • Heater.Heater. Posts: 21,230
    edited 2015-02-21 11:14
    jmg,
    Notice the list is companies more interested in stockholder positions, than in actually building something.
    I'm not sure what you are getting at with that statement. For sure they are corporations with a duty to their shareholders to try to make money. Perhaps they have greedy bosses who want to make money for themselves. That's the way of the world today. No surprise there.

    However, achieving that money-grabbing aim also means being lean, mean and efficient, and investing in the things that help you stay that way. Google and others have invested in the likes of the clang/LLVM C++ compiler to that end. They have contributed to Linux and a lot of open source projects. These exemplars of capitalism have learned to cooperate with each other on such projects to their mutual benefit, with no NDAs, cross-licensing or other friction. They have learned that this is good for them.

    So what about this RISC V processor idea?

    Well, designing a CPU instruction set architecture, and even getting a chip that implements it designed, is not harder than a huge C++ compiler project or a Linux kernel. We do not actually need an Intel or an ARM to do that any more, in the same way they do not need MS to make an operating system.

    And it turns out that neither Intel nor ARM are very good at designing instruction set architectures. Anyone starting from scratch, knowing what we know today, would not make anything so hideously complex and expensive to implement.

    So if Google and Amazon and co. decide they want something different from what Intel is providing, which I believe they do, I believe it is quite likely they will, together, do it. And RISC-V may well be the vehicle they need to do that, as Linux is on the operating system side of things. Something that does not belong to any of them but to which they will all gladly contribute in their own self-interest.

    If this were to happen I would not worry about the following generations. In the same way we do not worry about the future of Linux. A world full of university post grads will be working on it and chip fabs will be happy to make a billion copies of whatever you want. Even Intel.
  • potatoheadpotatohead Posts: 10,261
    edited 2015-02-21 12:02
    So my question is why is Parallax still chasing the Propeller dream?

    Because they can, and they want to, and there are enough of us who share it to make it all a perfectly reasonable thing.

    And this kind of thing is precisely why I like small and private companies. No market expectation BS, right along with a lot of other :) I could put here. Those things have their place, but they are by no means the only way, or THE way, things can and should be done.

    As for "too much", we've had that discussion over and over and over.

    Things are worth what people will pay for them. And with a private corporation, one need only enough people seeing good value for the dollar to make the business viable, and able to capitalize for the future.

    Right now, in the US, small, largely private manufacturers number some 200K enterprises sprinkled all over the place. These people are doing what they love to do, funding lives and families, and really aren't driven by the big business BS we so often hear about. They don't get daily write-ups in the news, they don't make shiny press releases or do all manner of very visible things, and so we don't always see the implications; our expectations are largely set by a remarkably small number of people with a remarkably loud voice.

    The little company I've recently taken a job with is very similar to Parallax. We aren't driven by owning the market. We are driven by wanting to get some stuff done we want to get done, the way we want to do it, and with people we want to do it with. And it's fantastic. There is the daily business grind, and some issues to deal with, but that funds the fun! And the fun is nailing something we know people will want to pay for.

    Parallax knows people will want to pay for a P2.

    Most of us see there will be enough people to make it all make sense too.

    Rather than continue to read Fast Company, INC, FORBES, ESD, etc... it might just pay off to have a chat with some smaller scale enterprises doing great things. I've made a career out of this and I can tell you there is very little that beats doing what you love and loving what you do every day.

    The "competence" thing doesn't sit well with me. If you go and read back through the archives, you will find the point isn't to take over the market. The point is to build another thing people really value and can make good use of with few hassles, dependencies, etc...

    Here's what will happen:

    This last design iteration is a good one. It's going to make sense, and it's going to perform, and it's going to do things in notable ways just like P1 did.

    There will be a lot of naysayers comparing it to all manner of things designed with entirely different goals in mind. And there will be a lot of us asking, "WTF??" and that will be very highly entertaining just like it was for P1. I have my popcorn ready.

    The FPGA work will get done, Parallax has boards ready, and we will have a jam session here soon. Chip is going to take us through how the thing works just like he did with P1. And we think we will have it, and we won't, but we will get started, and great things will happen.

    Lots of people will learn stuff, projects will get made, so will products. Parallax can take that body of information and experience, package it up, sell it, and make a lot of money on the experience, community, technical solution, consulting, etc... Others in the community will do the same thing, and that eco-system is either big enough to make sense, or it's not.

    For P1, it's big enough to make sense. Personally, I think this last design iteration will work out about the same, but it has code protect, and that's going to open the door for some specialized uses that really weren't appropriate for P1.

    Products compete in a lot of ways. Lowest cost isn't always one of them.

    Either you see value in this or you don't. Plenty of people do. Enough people do.

    That is all.
  • potatoheadpotatohead Posts: 10,261
    edited 2015-02-21 12:07
    Re: future generations

    Yeah, I'm with Heater. If the ISA proves viable and gets actualized in silicon, the investment in design, engineering and planning either pays off or it does not. If it does, and an ecosystem develops sufficient to get hardware and systems out there, others will jump on and build, just like we see happening with other open efforts.

    It's disruptive at the core. And on that basis RISC-V is a good bet, given their ideas actually play out. We've seen open disrupt quite a few things where it makes sense to take the cost out of commonalities and focus on value added by people and so forth.

    And I think we are at a good time right now. It took us a while to try out various instructions, learn to make software, etc... Now that we've got a very serious amount of data, optimizing it all makes great sense.
  • evanhevanh Posts: 15,964
    edited 2015-02-21 13:38
    Brucee/Rod1963,

    Stop trying to compare with the mass market! The fact that there is only one model of Prop1 should be a big clue.

    I'm certainly not here looking for an ARM competitor. I came because it wasn't ordinary, it has a unique multiprocessor architecture that no other processor is following. I came because there is no DRM, no NDA, no binary blobs. I like to hit the metal, obviously - knowing/learning how it really works.

    Parallax have a niche and are caring for it. They are not part of the mass market (micro-controllers) rat race nor have they ever been in the past.

    The Prop2 isn't going to be canned. It'll be exciting, no, it's already exciting. Just give it some time to be finished.

    brucee wrote: »
    I might as well throw some gasoline on the fire here. My perspective is the broad general purpose micro market. When the P1 was introduced, I thought it a novel design, but for acceptance into the general market PASM and SPIN were non-starters. In the general market C has been the standard embedded language for many years before the P1. The general market has coalesced around ARM as it is much easier for teams to move to a new project and new vendor if the tools and environment they are using are familiar to them. ....

    Bruce, you are saying, contrary to later assertion, Parallax should be an ARM shop.

    When developing a product, it shouldn't matter to the end user what languages/components get used. Saying that C is the only option just reeks of snobbery. Businessman-type snobbery - the kind that only buys an IBM-branded PC, a clone being beneath him. Macs were off-limits for the same reason. Take the easy choice, avoid any chance of being judged before even getting started. Fear of failing without having tried. ... some descriptive names ...

    That's my take on the necessity of C and ARM.
  • mindrobotsmindrobots Posts: 6,506
    edited 2015-02-21 13:59
    jmg wrote: »
    Notice the list is companies more interested in stockholder positions, than in actually building something.

    The list contains companies that are building massive data centers and have much to gain from any decreases in power consumption and thermal burden.

    The company I work for is small in comparison to some of those mentioned and we have over 60,000 deployed servers to support our internal needs. We are building new massive data centers that are going to feature power friendly and thermally efficient commodity servers by the tens of thousands. Most old proprietary network functions are being virtualized on commodity hardware with tremendous cost savings in actual hardware and software as well as power and cooling.

    I don't think it is a case of playing to the stockholders; it will be a matter of survival for mega-large enterprises with server counts running into the high tens and hundreds of thousands. For a large enterprise, you no longer deploy multiple racks of servers at a time, you deploy shipping containers filled with racks of commodity servers and gigantic amounts of storage. Roll them into a data center, plug in the power, cooling and 40+Gb/s of network bandwidth, and throw the switch.
  • Beau SchwabeBeau Schwabe Posts: 6,566
    edited 2015-02-21 14:30
    Hello all, I feel there are a few things I need to say because I have not heard them brought up. Perhaps because very few people realize how small the Parallax Semiconductor team actually is. Even smaller since I was laid off several months ago.

    That said, there were essentially two of us, myself and Chip. I bring this up because nobody here thinks in man-hours. When I worked for National Semiconductor we had a layout team of at least 15 to 20 people. Considering the man-hours required, Parallax was actually ahead of schedule. On a regular basis at Parallax I would pull 12-14 hour days doing layout; that's about 3,380 man-hours per year .... assuming Chip pulled his own weight, between the both of us that's 6,760 man-hours per year from Parallax. Compare a larger team of 15 or 20, which could pull 50,700 to 67,600 man-hours per year ..... on average about 8.75 times the man-hours for a given project. .... So with a larger team it could have taken Parallax only 10 months to get to where they are right now.
  • bruceebrucee Posts: 239
    edited 2015-02-21 14:44
    evanh wrote:
    Bruce, you are saying, contrary to later assertion, Parallax should be an ARM shop.

    Actually no, I am not suggesting Parallax do a custom chip with an ARM. There are so many ARMs out there now, they should choose one that meets their needs and build products around it much like they did with the PIC for the BASICstamp.
    So my question is why is Parallax still chasing the Propeller dream? --

    Because they can, and they want to, and there are enough of us who share it to make it all a perfectly reasonable thing.

    Yes they can, until the money runs out. A very successful hobbyist product these days can sell a million or two units over a period of a couple of years. Will a $10 P2 sell that many? I doubt it, but they can prove me wrong; all they have to do is get it out the door. The problem I see is that a 180 nm chip trying to compete in a 60-90 nm world is a big handicap to overcome.
    heater wrote:
    Interestingly the RISC V project exactly wants to do that. They want to be free of Intel and ARM. They are backed by Google, Amazon, FaceBook, MS, the Indian Government. Many big players want that freedom. Chip was ahead of the pack with that dream at least.

    Last I checked, Google, Amazon, Facebook and MS had a few more resources to get the project done :) P.S. Doing your own CPU design is the dream of every hardware designer; luckily I got to do one in the 1980s when it made some sense. Nowadays I don't think that is true any more. When introduced these days, any SOC has a wealth of development tools including compilers, debuggers, and trace/breakpoint via JTAG/SWD. The P1 still comes up short on that scale.
    heater wrote:
    But then, I find it amazing that the Arduino has taken off so well when it uses that huge and complex, decidedly beginner unfriendly language C++.
    Much of Arduino's success was that they hid the unfriendliness of the C++ from most of their users.
    heater wrote:
    I cannot see how P1 performance is inferior to Arduino.
    As soon as your program exceeds 512 instructions and takes the 8x LMM hit, or runs with the 30x hit of the interpreter, its performance drops off a cliff. I thought we all learned our lesson about segmented memory from the x86 architecture.

    The P2 might have been an interesting product 10 years ago, but they are just too far behind the rest of the industry. I'll measure success by looking at Parallax going from back cover ads for P1 down to quarter page ads for distance sensors.
  • jmgjmg Posts: 15,173
    edited 2015-02-21 15:10
    mindrobots wrote: »
    The list contains companies that are building massive data centers and have much to gain from any decreases in power consumption and thermal burden..

    Of course, but that power profile is driven far more by memory and process advances than by any tweak to a CPU's opcodes.
    Those selling a CPU will try to spin it differently, and the oldest trick in the book there is to compare what they hope to release on some new process with what has been in use for some time.
  • Keith YoungKeith Young Posts: 569
    edited 2015-02-21 15:39
    I'm learning a lot reading the back and forth in this thread. Please keep it constructive and keep it going!

    brucee, I've seen your criticisms but I don't think I saw a direct set of suggestions backed with explanations. Can you elaborate on what you suggest and why? Whether anyone listens or cares doesn't matter to me, I'm just curious. I think you mentioned they should look at R&D in WiFi and Tablets/Phones. Can you elaborate?

    Thanks
  • bruceebrucee Posts: 239
    edited 2015-02-21 16:09
    Can you elaborate on what you suggest and why?

    Well if I had that killer app, I might be building it. Now that I am recently retired, once I get over the travel bug I'll start playing with what is out there.

    As I look around, I have seen lots of extremely powerful tools, things I could have hardly dreamed of when the BASICstamp came out.

    Now everyone (especially under the age of 30) has an extremely powerful computer in their pocket (a phone), with a screen and network interfaces that rival anything available 10 years ago. I don't think you can do program development on the phone, but if I need a screen and a control for some widget the phone is a great interface for that (look at the fitBit app for example).

    You can now buy WiFi connections for less than $5, not certified, but what hobbyist cares.

    If I want some home brew computer like thing, a RasPi costs less than $25, though for most things I think of the phone is a better tool, real portable and very powerful.

    Chip vendors give away SOC boards at or sometimes below cost, so I can get powerful ARM boards for $10-$15. Link that to a WiFi and some target custom widget controlled from a phone and that is a custom IoT that makes sense to me.

    So someone will hopefully weave all these disparate parts together with some app building into an environment for simple control and communication.
  • Keith YoungKeith Young Posts: 569
    edited 2015-02-21 16:21
    WiFi and tablets/iPhones seem well within their capabilities. I agree, and I hope they address that soon. Are you suggesting they develop ARM boards? I have little to no experience in this, but it seems to me they'd have a lot of competition, and I don't really think it's in their business model or current market.
  • jmgjmg Posts: 15,173
    edited 2015-02-21 17:16
    brucee wrote: »
    If I want some home brew computer like thing, a RasPi costs less than $25, though for most things I think of the phone is a better tool, real portable and very powerful.

    A phone can be a convenient "take the system pulse" and setup tool, but it is pretty poor at:
    * Doing actual code development for ANY MCU
    * Mission-critical applications, or even any that have long up-times

    That leaves a shipload of real embedded tasks; yes, the phone can help at the fringes, but a mainstream solution it is not.
    Small tablets are slightly more useful, with USB connections and the ability to actually run real tools.

    See the thread on the HP Stream 7 - I've seen tools already make changes to be more Stream 7 tolerant.
    Would I use a Stream 7 all day? Heck no, but for field fixes and code updates it has appeal.
    Also useful in classrooms.
  • jmgjmg Posts: 15,173
    edited 2015-02-21 17:19
    WiFi and tablets/iPhones seems well within their capabilities. I agree, I hope they address that soon.
    ? Parallax already have WiFi, and there are threads on their IDEs running on tablets and on the RasPi, so I'd say they already addressed that some time ago...
  • potatoheadpotatohead Posts: 10,261
    edited 2015-02-21 17:26
    IMHO, phones are almost there, but not quite. Android itself is finally getting some things it has needed for a while, like low-latency UX capabilities. Others are multi-window support and some real ability to use the display for more than just phone-related tasks.

    My older DROID 4 had a keyboard, and I actually authored quite a few BASIC programs on it. Got some app just to give that all a go. USB device support is good already, with quite a few things supported, if a bit unexpected.

    Currently, I've a Note 4, and it's got an insane display. More pixels than an HDTV has, but it's nearly impossible to use it in a meaningful way. That device does have windowing capability, but... and there are a lot of buts, font sizes, the overall UX paradigm, etc... are far from optimal. That said, it's the first device that makes me want to actually try stuff on Android. Not that it's good, but that I think it could finally work.

    Give Android a couple more cycles and we might revisit this with better results.

    @brucee: Hey, every so often somebody steps in here with a load of comments similar to your own. It's all good, but don't expect them to be all that well received, until you've put Parallax into context.

    Many of us here know them very well, and you will find they are very open about how, what and why they do what they do. It's just not a scene you can map onto the more typical best practices you've likely grown familiar with during the course of your career. If you go digging through the forum archives, you can find a few of us who were once somewhere close to where you are right now. Give it some time and, most importantly, pick up a P1 and go. The device does a lot more than you would expect. And it also does some kinds of things so easily it's laughable compared to what one's expectations may be coming from other devices and environments.
  • User NameUser Name Posts: 1,451
    edited 2015-02-21 18:09
    brucee wrote: »
    As soon as your program exceeds 512 instructions, and takes the 8x hit or runs in a 30x hit of the interpreter, its performance drops off the cliff. I thought we all learned our lesson to avoid segmented memory of the x86 architecture.

    Clearly you persist in several misconceptions. The Propeller is a multicore microcontroller, not a single core microprocessor. Your segmented memory comments simply aren't applicable. Nor are your comments regarding the 512 instruction limit before things fall off a cliff. You may know the actual facts, but you choose to twist them, apparently to justify your invective and pessimism.

    I've never used LMM and I've never run out of memory on any project I've ever implemented on the Propeller. It's because I use the Propeller as a multicore embedded controller. (What a concept!) That some choose to use the Propeller in a more general fashion is a testament to how flexible it is. But to then grade the Propeller on such a basis (for which it was never intended) is either dishonest or ignorant. Which are you?
    brucee wrote: »
    ...I check it out every 6 months or so to see what is happening...

    No more of your negativism and FUD until August? This day is really looking up!
  • bruceebrucee Posts: 239
    edited 2015-02-21 18:56
    potatohead wrote:
    @brucee: Hey, every so often somebody steps in here with a load of comments similar to your own. It's all good, but don't expect them to be all that well received, until you've put Parallax into context.
    Yes, I understand that I am talking to the Parallax fan club here, but I do know the Parallax context. In the 1990s we were using BASICstamps for quite a few process control applications at a display startup I was working at. Yes, they were slow, but when you are turning on devices and controlling temperature they were perfectly adequate. BASIC was perfect for our non-programmer types, which included PhD physicists who cared more about the devices than the mechanics of controlling them.

    In the early 2000s we started using Silicon Labs 8051s as we found a BASIC interpreter and C compiler for them and that gave us some more breathing room on program space and variables. Around the time of the introduction of the P1, we had started using LPC2106 boards from Olimex, again programmable in C or BASIC and as nearly a single chip solution was more than adequate for our needs. Those Olimex boards were around $50, and running reasonable sized programs (~20K) at 60 MHz. I looked at the P1 and between the limited memory and PASM/SPIN it was not nearly as good a solution for our needs.

    Since then the tech world has moved on and I can pick up true single chip solutions equivalent to the LPC2106 for around $1. I did pick up a P1 dev board at Radio Shack at the fire sale price of $8 and tried out the C IDE. Actually a pretty nice start, but as I suspected the P1 is a bit of a slug as the Coremark tests I ran on it were about 5X slower than the $1 ARMs. Now I can get 200 MHz ARMs that run about 20x the P1.

    From what I've learned here today, I'd say the P2 is probably not going to be available until 2016 at the earliest. For what I need, I see using a PC to develop code, but then having it communicate via web/Bluetooth to my phone as the perfect solution. Unfortunately Apple has such a closed BT system that WiFi and the web are the best way to include it in the environment.
  • potatoheadpotatohead Posts: 10,261
    edited 2015-02-21 19:32
    Well then, cheers! Seems you have it all sorted.
  • Heater.Heater. Posts: 21,230
    edited 2015-02-21 19:35
    brucee,
    I thought we all learned our lesson to avoid segmented memory of the x86 architecture.
    I do hope so too.

    However I don't see the Propeller as having a segmented architecture. I see a machine with a 32 Kbyte linear address space accessed by 8 cores, each of which has 512 32-bit registers.

    Those cores don't execute code directly from RAM because then they would all be 8 times slower than they are. That would seriously impact the kind of "soft devices" you could create with a COG. And that would defeat the whole idea of the device.

    Now that architecture may not do what you want, fair enough.

    Spin, and even C using LMM, on the Prop may well be slow compared to raw C on a little ARM. But raw speed is not everything. Can those little ARMs directly control 32 PWM servos? If they can, can you program that easily?

    That servo example is an extreme demo. But it shows the kinds of application space where the Prop shines and many other solutions falter or become difficult. If you don't need to do such things then the Prop is not for you. Also fair enough.
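    For the curious, the 32-servo job can be sketched in plain C as the kind of table-driven loop a single dedicated COG could run: one 32-bit output word per time slot, where bit n drives servo pin n. The names, slot size and frame layout here are my own illustrative assumptions, not Parallax code:

```c
#include <stdint.h>
#include <string.h>

#define SLOT_US  10                    /* 10 us timing resolution      */
#define FRAME_US 20000                 /* standard 50 Hz servo frame   */
#define SLOTS    (FRAME_US / SLOT_US)  /* 2,000 slots per frame        */

static uint32_t frame[SLOTS];          /* one output word per slot     */

/* Build one frame: servo n gets a pulse_us[n]-microsecond high pulse
   starting at the top of the frame. A cog would then just stream
   frame[] to the 32 output pins, one word per slot. */
void build_frame(const int pulse_us[32])
{
    memset(frame, 0, sizeof frame);
    for (int pin = 0; pin < 32; pin++) {
        int high = pulse_us[pin] / SLOT_US;
        for (int s = 0; s < high && s < SLOTS; s++)
            frame[s] |= 1u << pin;
    }
}
```

    A 1.5 ms centre pulse on pin 0 sets bit 0 for the first 150 slots and clears it for the rest of the 20 ms frame; all 32 channels fall out of one lookup loop, which is the point Heater is making about the Prop's close CPU-to-pin coupling.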

    I'm curious, what actually is it you are wanting to do?
  • evanhevanh Posts: 15,964
    edited 2015-02-21 19:40
    The 32KB HubRAM is a problematic limit that isn't resolvable on the Prop1. A lot more fits in 512KB but the Prop2 will have the same problem albeit a little alleviated by a faster external memory. I'd like to know more about the possibilities of MRAM packing in several times the density of SRAM. Replacing that 512KB with 4MB of non-volatile MRAM would rock. No more external ROMs and you'd get the instant on of direct execute.

    Beau, what's your knowledge there? There was a reason why MRAM wasn't pursued. It required too many metal layers or something.
  • Beau SchwabeBeau Schwabe Posts: 6,566
    edited 2015-02-21 20:09
    MRAM is still largely "in development", Chip wanted to use something that had more time to mature. Also the targeted process does not support the required MRAM layers.
  • jmgjmg Posts: 15,173
    edited 2015-02-21 20:35
    evanh wrote: »
    Replacing that 512KB with 4MB of non-volatile MRAM would rock. No more external ROMs and you'd get the instant on of direct execute..
    MRAM is nothing like the speed of SRAM, and it employs a destructive Read which eats into the life cycle.
    - that's even before "the process does not support it", so MRAM is stone-dead on P2.

    More appealing/practical, and what could be done in a P2, is true Execute-In-Place on high performance Serial memory. That hooks into commodity parts, on a falling price curve, and makes any memory limit a much softer ceiling.
  • evanhevanh Posts: 15,964
    edited 2015-02-21 21:13
    Thanks Beau. JMG, you're thinking of FRAM or something. MRAM is entirely non-destructive, as in no known limit to the number of reads and writes, and it certainly doesn't wipe the cell; theoretically it is nearly as fast as SRAM. Writes have been a bit power hungry in some variants. As Beau points out though, MRAM doesn't have any track record as such.
  • evanhevanh Posts: 15,964
    edited 2015-02-21 22:01
    Problem is, HubRAM doesn't just extend by adding external memory. There are huge speed and latency costs no matter what type of external memory gets used. In CPU terms, HubRAM is like combining all the cores' caches together and making them directly addressable without penalising speed.

    As an application developer, one can't just add more to this "cache" to get more space. One, instead, has to write software that carefully only transfers data to and from the external memory as efficiently as possible. Of course this software can be buried in the language support system or an API, as is already the case with XMM for example, but the application developer must work with this separation or risk an even harsher speed penalty than just sequential external fetches.
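    The separation described above can be modelled as a tiny software-managed cache in C. This is only an illustrative sketch of the idea (the names and sizes are invented; it is not the actual XMM implementation):

```c
#include <string.h>
#include <stdint.h>

#define LINE 64                         /* bytes moved per bulk copy   */

static uint8_t ext_mem[4096];           /* stand-in for external RAM   */
static uint8_t hub_buf[LINE];           /* stand-in for a hub buffer   */
static int     resident = -1;           /* base address of cached line */
static int     fetches  = 0;            /* count of slow transfers     */

/* Read one byte of "external" memory through the hub buffer. Only a
   miss triggers a bulk transfer; sequential access mostly hits. */
uint8_t xread(int addr)
{
    int base = addr - (addr % LINE);
    if (base != resident) {
        memcpy(hub_buf, ext_mem + base, LINE);  /* the expensive part */
        resident = base;
        fetches++;
    }
    return hub_buf[addr - base];
}
```

    Reading 128 sequential bytes costs two bulk transfers, while ping-ponging between addresses 0 and 64 costs one transfer per read - the harsher penalty a careless access pattern invites.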
  • RamonRamon Posts: 484
    edited 2015-02-22 04:35
    Beau, nice to see you here again.

    When Ken announced that you were being laid off I asked him (half joking - half serious) whether it would be possible to hire you as a freelance IC designer and crowdfund, via Kickstarter, a slightly improved P1 while we are waiting for the big P2.

    That was almost at the same time that the P1 Verilog code was announced. Some people asked for a slightly improved P1, but obviously no one here wants Parallax to be diverted from their current goal (P2).

    And we also don't want Parallax's P1 sales or revenue to be affected by this project. So if it were done, there should be some royalty paid to Parallax (which hopefully could feed back into better P2 development).

    What do you think about the viability of that project? Will you be willing to lead it?
  • bruceebrucee Posts: 239
    edited 2015-02-22 07:02
    heater wrote:
    Can those little ARMs directly control 32 PWM servos?
    At what speed? At what resolution? How many will you build?

    To do something like that I would use an FPGA, which could run as many servos as the number of pins minus 2. Leave those last 2 to talk SPI to a controller. That would be the most flexible solution.

    Or I'd spend a couple more $ on an ARM with DMA and a separate internal memory. 4K of memory and 32 IOs would give you 10-bit resolution.

    I know FPGAs are expensive in low volume, but they also become quite cheap at high volume (Altera and Xilinx don't cater to the DIY community, it is too small of a market). Maybe that is where Parallax should look, bring FPGAs to the DIY market with simple software tools, I am not suggesting they build any silicon.

    On the ARM side, yes I would have to read a chapter of a user manual, but still simpler than writing PASM, at least in my opinion.
  • Heater.Heater. Posts: 21,230
    edited 2015-02-22 08:15
    brucee,

    Yes, as always it all depends on what you want to make, how many you want to make, how price sensitive it is etc etc.

    However. The Propeller is designed to do things that might otherwise call for custom logic or an FPGA. Admittedly at lower speeds and lesser scale than an FPGA but that is what the multiple COGs, deterministic execution timing and close coupling from CPU to pins is all about. Not to mention the per COG timers.

    So taking that 32 PWM thing as an example of such jobs. We can see that your FPGA solution is a lot more expensive. It's a couple of orders of magnitude more complicated to do. One has to know Verilog/VHDL, the tool chains, etc etc. It's a much bigger task than a couple hours writing normal software. Physically building it is also a bigger task.

    Then, what if you want some normal software running in this FPGA solution as well, to control those PWM signals say. Now you need a soft core CPU in your FPGA solution to run that code. You have to hook those PWM blocks up to it. The complexity has now gone through the roof.

    So FPGA is more expensive and a lot more complex than a Prop at the tasks the Prop was designed to do.

    What about the ARM solution? I guess low-end ARMs could handle this job. But what about similar real-time tasks that also require many low-latency responses to multiple inputs? Yes, again the FPGA could handle it, at great development complexity and expense. The ARMs would fail trying to handle too many such input events at the same time. All that juggling with interrupts is more complex than running up another COG anyway.

    Where the Prop fits it cannot be beat (except perhaps by XMOS). The question might be, how big is the application space that the Prop does fit?

    Anyway I think we are in agreement, it's horses for courses as usual.
  • bruceebrucee Posts: 239
    edited 2015-02-22 09:08
    I am still trying to figure out what market the Px is trying to address.

    A Spartan-3 with 100 pins single quantity is $6.44. Verilog for PWM is pretty simple to write (free tools from Xilinx). Add a $1 ARM for control. Yes a 2 chip solution, but the P1 is 2 chips (P1 and external Flash), P2 is 3 (P2, 1.8V regulator and external Flash).

    You can build a P1 with an FPGA, but the opposite is not true. For instance, one of the designs I did recently used an FPGA to do a 96 x 64 crossbar switch for a desktop IC tester (used for chip verification). Signals of 20 MHz or so had to be supported; could any Px architecture do that?

    The problem I see here in the forum is that it is so much pie in the sky: what if we could use 20 nm technology, or FRAM, or MRAM, or ... If Parallax had infinite resources, that would all be possible and wonderful. But what I see here is "the perfect is the enemy of the good". We are on our fourth iteration with not really anything to show for it.

    Parallax is now selling DE0 FPGA boards; maybe they should spend the effort on making the programming more DIY-friendly, if Verilog is the issue.
  • Beau SchwabeBeau Schwabe Posts: 6,566
    edited 2015-02-22 09:30
    Ramon,

    "When Ken announced that you were being laid off I asked him (half joking - half serious) if it could be possible to hire you as freelance IC designer to crowdfund via kickstarter a little improved P1 while we are waiting for the big P2. ... ...What do you think about the viability of that project? Will you be willing to lead it?"

    -- I don't see this happening for several reasons.... Design and testing of P1 improvements would need additional resources running in parallel with the current P2 resources, and I actually have another job now where my resources are in use. For me to freelance would require significant monetary compensation for my time, with a FIXED (locked) design that wasn't subject to perpetually being 95% complete.
  • Heater.Heater. Posts: 21,230
    edited 2015-02-22 09:42
    Brucee,
    I am still trying to figure out what market the Px is trying to address.
    A good question. We can all speculate and perhaps Parallax has articulated it themselves somewhere.

    To my mind it's:

    1) Originally, an upgrade to all those old BASIC-powered 8-bitters. That is of only historical significance as far as I can tell. But in the modern world, anyone who has hit the wall with their Arduino and discovered that mixing and matching multiple tasks on an AVR is basically impossible is a potential Prop customer.

    2) As I say, it fits the bill in terms of determinism and flexibility for jobs that would otherwise use a bunch of logic chips or an FPGA, as long as the scale and speed demands are not too great.

    I cannot agree that Verilog or VHDL is anywhere near as easy to use as BASIC, Spin, or even C. It seems the guys designing the RISC V at Berkeley agree with me; they have been developing their own hardware design language, "Chisel", to address that issue. Language aside, you still have to deal with the huge complexity of the FPGA dev tools.

    Of course you can put a Prop in an FPGA but not the other way around; I'm not sure what you are getting at there. But as I said, if you need a soft-core CPU in your FPGA design to run code, as well as having to create the custom logic parts, then you have a vastly more complex task than using a Prop or XMOS.

    Yes, the forum is full of wild ideas. It's best to just ignore most of it.

    You have an interesting idea re: "making the programming more DIY friendly if Verilog is the issue." As I said, the postgrads at Berkeley are working on this problem. Does Parallax have the skills to tackle that? I don't think so. Should Parallax become a software house? I don't think so. Should Parallax outsource development of what you suggest to someone else? I don't think so; that would cost the world, take years, and probably never work or see adoption.
  • bruceebrucee Posts: 239
    edited 2015-02-22 11:35
    Heater. wrote: »
    Of course you can put a Prop in an FPGA but not the other way around. I'm not sure what you are getting at there.
    What I mean is that I have seen the Prop suggested as a low end FPGA replacement, which I don't understand.

    Verilog is not all that bad; its worst feature is that it is "almost" C. VHDL is wordy, much like Ada. There is SystemC, which is C with hardware extensions. Academics always like to reinvent the wheel; it makes for more PhD theses.

    Parallax can't outsource it all, but it did get started with Chip writing a BASIC interpreter, then an IDE, then the SPIN interpreter. He has been immersing himself in Verilog. Maybe he could write a front end to Verilog that glues together common widgets into a useful tool for DIYers or students. For a processor, something of his own design (a few miniCOGs).

    The problem with the P1/P2 has been the assumption that it can do hardware peripheral emulation, when in reality it struggles to do that. Can it do USB, 100 Mb Ethernet, HDMI, or other high-speed interfaces? Not that I have seen so far.

    I just don't see much viability in doing a 180 nm custom chip. That process was last used for leading-edge SOCs almost 10 years ago now.