We need a prize for the first to port gcc to run on the P2!
Ugh. I don't really understand this obsession with running all of the tools on the P2. There are lots of things I've come to rely on in my development environment like nice editors, Google, web browsing and email, etc. I don't think all of those things will be available on a P2-hosted development environment without a lot of work reinventing the wheel. Also, if you ever did do that you'd end up with something as complex as Mac OS X / Windows / Linux anyway. I'd be willing to bet that even if we could run PNut or even a C compiler on the P2, very few people would actually develop that way. Other than the various Forths, I don't see too many people using the tools we currently have to program directly on the P1.
I don't really understand this obsession with running all of the tools on the P2
It's a strange obsession. One shared by Chip himself. He has expressed his desire for freedom from the PC and a self-hosting Propeller many times. Why do you think he is building a micro-controller with video etc.?
Strangely enough, over on another thread the problem of how to program Propellers from iPads is posed. Well, if the Prop contained its own editor/compiler/interpreters, all the iPad would need is a terminal app and you are in business. And you don't give up your web surfing, email, or Parallax forums. :)
It's an idea whose time has come as PCs are dipping out.
It's a strange obsession. One shared by Chip himself. He has expressed his desire for freedom from the PC and a self-hosting Propeller many times. Why do you think he is building a micro-controller with video etc.?
Others also have this obsession. For example you can now program your tiny micro-controllers in Python or JavaScript, no need for a PC at all:
http://www.espruino.com/
http://www.kickstarter.com/projects/214379695/micro-python-python-for-microcontrollers
Actually, I can understand people wanting to use an interactive language like Forth, Python, or JavaScript that has an immediate mode (REPL). Even if I didn't write my code on the target platform, I'd still like that kind of language for debugging and testing. I don't think I'd want a compiler-based language on the target though, including Spin or C.
Strangely enough, over on another thread the problem of how to program Propellers from iPads is posed. Well, if the Prop contained its own editor/compiler/interpreters, all the iPad would need is a terminal app and you are in business. And you don't give up your web surfing, email, or Parallax forums. :)
I suppose so. I'll be convinced when I actually see people using these on-target tools. As I said, I've only really seen evidence that people use Forth like that on the P1 even though there are various Basics available as well as a Spin compiler.
It's an idea whose time has come as PCs are dipping out.
Could be. However, then you're still tethered to another machine. You aren't really running standalone.
If the compiler is very small and fits in the device, along with some kind of editor, why not? Spin for example is small and fast; it would be almost as interactive as JS or Python.
These devices are now more capable than the CP/M desktop machines we managed quite well with all that time ago.
However, then you're still tethered to another machine. You aren't really running standalone.
True. At least you need a dumb terminal.
And that was my point. What we are seeing here is the fading away of the PC as people adopt the basically useless "dumb terminals" called tablets and phones. If you want to do anything it has to be done on a remote server somewhere. In this case the micro-controller is the server (sort of), serving up its own dev system, thus making it independent of any particular dumb terminal, iPad, Android, etc.
This is what the "program Prop on iPad" thread is facing.
Someone in that educational institution might soon decide that perhaps they could just skip the Propellers for their robotic experiments and use MicroPython or TinyJS. All they need is that terminal app on the iPrison.
If the compiler is very small and fits in the device, along with some kind of editor, why not? Spin for example is small and fast; it would be almost as interactive as JS or Python.
Then why don't people use that on-chip Spin compiler that already exists for the P1 (Sphinx)?
These devices are now more capable than the CP/M desktop machines we managed quite well with all that time ago.
That's true but what I expect from a development machine has changed since the CP/M days. I wouldn't want to go back to CP/M as a development environment.
True. At least you need a dumb terminal.
Yes but serial ports are something that is rapidly disappearing from PCs these days. Also, I don't think Windows ships with Hyperterminal anymore so you'd at least have to install some terminal program before you could even talk to your target. If you're going to do that, why not install an IDE?
And that was my point. What we are seeing here is the fading away of the PC as people adopt the basically useless "dumb terminals" called tablets and phones. If you want to do anything it has to be done on a remote server somewhere. In this case the micro-controller is the server (sort of), serving up its own dev system, thus making it independent of any particular dumb terminal, iPad, Android, etc.
I'm not sure that the PC will ever completely disappear. However, we may go back to the days when only business users and hackers have actual computers and everyone else uses an appliance like a tablet or phone.
This is what the "program Prop on iPad" thread is facing.
Someone in that educational institution might soon decide that perhaps they could just skip the Propellers for their robotic experiments and use MicroPython or TinyJS. All they need is that terminal app on the iPrison.
However, isn't the terminal exactly what you can't get on the so-called iPrison? Apple disallows the Bluetooth SPP?
Anyway, I'm not opposed to programming on the target. I'm currently working on a C-like bytecode compiler with a REPL that will run on the Propeller. I just don't think it will be the most common way of programming micro-controllers.
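Not that specific design, but for flavor, here's a generic cartoon in Python of what a compile-then-run bytecode REPL does. The postfix toy syntax and the opcode set are invented for the example:

```python
# Generic cartoon of a bytecode REPL, not any specific Propeller
# tool: read a line, "compile" it to bytecodes, run them on a tiny
# stack VM. The postfix syntax and opcode set are invented here.
PUSH, ADD, MUL, PRINT = range(4)

def compile_line(line):
    # Stand-in compiler: "2 3 +" becomes PUSH 2, PUSH 3, ADD, PRINT.
    code = []
    for tok in line.split():
        if tok == "+":
            code.append((ADD, None))
        elif tok == "*":
            code.append((MUL, None))
        else:
            code.append((PUSH, int(tok)))
    code.append((PRINT, None))
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == PUSH:
            stack.append(arg)
        elif op in (ADD, MUL):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == ADD else a * b)
        elif op == PRINT:
            print(stack.pop())

while True:
    try:
        run(compile_line(input("> ")))   # the REPL: compile, interpret
    except EOFError:
        break
```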
...why don't people use that on-chip Spin compiler that already exists for the P1?
No idea, never looked; I have a PC.
That's true but what I expect from a development machine has changed since the CP/M days. I wouldn't want to go back to CP/M as a development environment.
Agreed.
...serial ports are something that is rapidly disappearing...
True. Actually I was thinking beyond serial ports and Hyperterminal. Today's "dumb terminal" is the web browser. Shall we say, a browser on ANY machine. The real work is served up from elsewhere. We are almost at the point where such micro-controllers will be network attached and can host their own development services.
Point is it matters not what machine or OS the user has. He does not even need to own a machine, just surf to the target and program it.
Potentially the editor and compiler could run in the browser having been served up from the target.
We could already imagine the Open Spin Compiler (for example) being converted to JavaScript by Emscripten and served up to a user's browser from a Propeller!
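As a rough sketch of that "target serves its own IDE" idea (plain desktop Python standing in for whatever would actually run on or beside the chip; the page content and the /compile endpoint are invented for illustration):

```python
# Sketch of a device serving its own "IDE" to any browser.
# Plain desktop Python stands in for the target here; the page
# content and the /compile endpoint are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = (b"<textarea id='src'>PUB main</textarea>"
        b"<button onclick=\"fetch('/compile',{method:'POST',"
        b"body:document.getElementById('src').value})\">Compile</button>")

class DevServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any browser is the "dumb terminal": serve the editor page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def do_POST(self):
        # Receive source and hand it to an on-target compiler (stubbed).
        length = int(self.headers.get("Content-Length", 0))
        source = self.rfile.read(length)
        print("would compile and load:", source.decode())
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 8080), DevServer).serve_forever()
```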
I'm not sure that the PC will ever completely disappear.
It will. Even us techies don't actually want a PC. What we want is a big screen, keyboard, mouse etc to work with whilst sitting in a comfy chair. The compute engine does not need to be that big ugly PC box or laptop. We may well have our own compute engine in the closet.
However, isn't the terminal exactly what you can't get on the so-called iPrison? Apple disallows the Bluetooth SPP?
Yeah, that's shitty. But as I said the browser and net is the modern dumb terminal.
I'm currently working on a C-like bytecode compiler with a REPL that will run on the Propeller.
Re: Why don't people use the on-chip compiler on P1?
I don't think the P1 is large enough to make it practical. Now I did and will use P1 for some basic things I don't have dedicated equipment for. It's been a signal generator a few times. And it's been a logic analyzer too. If Forth and I got along better, I would very likely have set up a P1 on a dedicated board for those purposes and I would use it interactively, maybe doing some other things too. P1 is actually a fine Forth computer. The scale of it happens to work out well, unless one wants graphical displays. Then it's a bit cramped.
Re: The various BASICs
Same problem. Given how the P1 performs, an interactive BASIC is pretty limited. I did enjoy Bean's BASIC.
It's interesting to note the differences between people who used 8 bit computers and those who really didn't. Back in the day, that 8 bit computer was there on my bench, and it did some of the things a P1 would today. It wouldn't make a very fast logic analyzer, but I could get signals out of it, or write short programs to control things. It was the general purpose calculator in a pinch. Used it a lot for that, particularly graphing various things. I had a library of BASIC and Assembly Language programs that would output TV alignment and test signals. One of my computers was very suitable for that purpose. Sounds and signals. Graphic libraries for math. Load one of those, input it, adjust the range, run it, etc... Extremely clunky by today's standards, but back then it seemed small enough.
That computer was my interface to the world. A crude one, but effective. I could do things to learn things, and I did. And those things I learned took me right out of being poor and I do well today. Basic computing that is not so isolated from the world, a library or Internet connection today is all somebody needs to feel some passion, get after it, nail the skills and go sell them. Beautiful.
As the PC grew up, the distance between me and the various outputs it has simply grew. And the PC is kind of expensive, should I burn out a port or something... I use the PC for a lot of things now, and I have a few of them, all laptops now. I'm OK with an isolated USB connection, maybe audio in some cases, but not much else. And I've got one jittery, mostly useless USB port on my Lenovo from a mistake. Given the cost of that machine, no way am I doing that again. And service can be painful. Rebuilding the Mac was an adventure! But here I am typing on it, after a couple days' work. I don't want to do that again either. Time better spent doing fun stuff.
For larger scope things, a PC is a must for me. The OS, windows, storage, speed, and all those other things, are great for development. I plan to continue that.
So why on chip?
A few reasons actually:
One is I do miss that more connectable workbench computer that can take some damage without the cost and utility issues. When a laptop goes down, lots of stuff goes with it. The things get used for games, various accounts, communication, and a ton of other things in addition to programming. Seems excessive to have too many computers, and to have one for development only, though I do have an old one for P1 development. It's what I use most of the time, and it's not used for much else. The thing is gonna die though. Soon. Then I'll be back to multi-purpose computing in that use case.
Was reading an article a while back on some game developers out there who still have their 8 bit machines. They can reach over, type up something lean and know it will just happen, then continue on while that machine does something they learned how to do so many years ago, it's dead simple.
I will probably take a P2, set it up with storage, a couple of displays, and a variety of I/O setups for sampling things, digital / analog, various outputs, etc...
From there, I may just turn it on and leave it on, able to input things, write short programs, display, etc... and do so in a lean, simple, consistent way. A recent experience with the P2 monitor was notable. Just left it on, moving data into and out of it. Start a COG doing something, stop that COG, capture data, or build a few things, put them in RAM, when it all runs, output it all back to the terminal. Had I some storage and a bit more options for connectivity than I do on the FPGA board, I would have likely just written it all out to the SD card for use later on. Never turned the chip off. I got a small look at what the Forth guys do all the time.
Now, I've noted a trend with Millennials and some other people who want to operate in a leaner way. They don't have PCs, or if they do, they have one and they don't use it much. It's like that appliance needed for a few things, but generally avoided, like a land line is for many people today.
Lots of them don't want the complexity and hassle. Cars? Maybe. Land line? Never. Computer? Maybe, but a phone and tablet does enough. They want to live and be leaner than we are. Not sure why that is. Maybe it's cost. Here in the US, cost is getting to be an issue for a growing number of people. The politics of that are for another day, but the reality of it suits this discussion perfectly, I think.
I can tell you right now, if I were say a late teen or 20 something, I would set one of these up along with all my goodies either salvaged or bought, and it would get used regularly, because I probably would not have a PC, nor a car, and I would be learning and doing stuff I want to do lean, avoiding as much of the daily time suck as possible. And there is your answer to Ken's "but we need to work with an iPad" right there. We do need to work with an iPad, but some of those students will do other things, some will use a PC like we do, etc... Just having the choice out there is worth doing, if nothing else.
I grew up dirt poor. Every cool thing I had, I either repaired or built. When I did finally get computers, I got them on work trade and I used the Smile out of them. Now I don't have any regrets on that. So many skills I have today came from that, it's laughable when I compare to my peers today, who often have no real clue. I think a little differently because of that, and I think I relate to the millennials fairly well too. They are experiencing similar things, and for the smart ones so much is being thrown away too! Compared to what I could get my hands on, sheesh! No contest. A day scrounging thrifties and your neighbors can yield a total playground! Game on.
They do have phones, tablets, pads.
And those things can be painful to develop with. A young person may well find an on-chip system practical if it's packaged up as a product. They can get into electronics mode, and there the thing is. If it's got an SD card slot or two, and enough RAM to hold a lot of data or programs or display things, it may well make sense to use it instead of fiddling around with the pad and using things at a distance.
At least the PC can seem pretty close. Direct connection, it's fast, you compile, press the button and things happen. The pad seems far away by comparison. Droid can be either, though I've not explored it much, mostly because I don't want an expensive and necessary phone getting mixed up with the electronics, where sometimes I kill things. Dead.
Funny too, I was doing P2 testing when suddenly executables from Chip and Baggers no longer worked, though they could run mine. Turns out building for XP is something like building for Snow Leopard on Mac OS. Gotta know to do special things, or it won't work on the older machines anymore. PITA. It may make sense to make it all very lean, so that problem kind of goes away for those who want that, or maybe need that. Who knows?
Heck, a P1 able to telnet, or run a nice serial terminal, connected to a UNIX PC, suddenly gets kind of useful in simple ways. Build something on the PC, "squirt" it over to the P2 via the monitor, and if it's data, display it, or if it's a program, run it, etc... It could be a console when not doing anything else. I like having those, because I actually do look at the SYSLOGS of various kinds while doing stuff.
It's expensive or sometimes difficult to obtain and use programs for the PC to do exploratory things. Here's one: I want to know what the limits of my ears are, and then what other people's ears are, regarding audio-related things. Like say I want to dither between pitches, or connect a few speakers and work with psycho-acoustic imaging. (Which I actually do want to do.)
On the PC, it's actually kind of hard to do that. Licenses are needed for the surround capability. The audio chain needed to make all that work is laborious too. I could have that going on a P2 with similar efforts and really learn some stuff that I could then apply to a larger project. An audio only game environment intrigues me.
Now, using the two together might be useful too. Here's another one audio related: Say I want to do an mp3 encode of something, then pick that apart and isolate related groups of frequencies and output them on separate tracks, a few at a time, one speaker each. That workbench computer is going to be simple to work with compared to the alternatives. No licenses, no BS. And I could graphically display some info fairly easy too. Real time. No GL needed.
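The band-splitting part is easy enough to sketch in a few lines of desktop Python with numpy; the test signal and band edges here are invented stand-ins, and masking the FFT is only a crude substitute for real mp3-domain analysis:

```python
# Crude sketch of the band-splitting idea: carve a mono signal
# into a few frequency bands, one output track per band.
# Assumes numpy; signal and band edges are invented stand-ins.
import numpy as np

rate = 44100
t = np.arange(rate * 2) / rate               # two seconds of audio
signal = (np.sin(2 * np.pi * 220 * t)        # stand-in test tones
          + np.sin(2 * np.pi * 880 * t)
          + np.sin(2 * np.pi * 3520 * t))

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

bands = [(0, 500), (500, 2000), (2000, rate / 2)]   # Hz, arbitrary
tracks = []
for lo, hi in bands:
    mask = (freqs >= lo) & (freqs < hi)      # keep only this band
    tracks.append(np.fft.irfft(spectrum * mask, n=len(signal)))

# Each entry of tracks would go to its own output track/speaker.
for (lo, hi), trk in zip(bands, tracks):
    print(f"{lo:6.0f}-{hi:6.0f} Hz  rms={np.sqrt((trk ** 2).mean()):.3f}")
```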
Quite simply, there are some sounds in my head I would like to get out. I've either heard something I want to explore some, so there it sits in there waiting, or I've imagined something and I am curious to see if its real perception is the same, for me, or others. If it's not a simple stereo sampled thing, it gets kind of hard. Doesn't have to be.
Oh, here is one other subtle thing. A couple of those sounds in my head are beyond what my ears will do now. I'll need to "see" those, then I can rope somebody into the scene and ask them what they heard. Nifty, lean, cheap.
So it's things like that for me. Lots of other examples abound. The PC will still be there. I like the PC, and it's got serious advantages, but there are also a lot of barriers today that really don't need to be there. I'll use the PC for a lot of development. But I know I would do more core things interactive, maybe in combination with a PC too. And I would do that because I would know exactly what is happening on the real end of things, where if I get a bunch of software and expensive gear, I may not, or there are limits on it that aren't appropriate.
I used the "on-chip" Spin compiler that Michael Park wrote for his Sphinx OS. The main reason I stopped was that there seemed to be an intermittent bug in the SD card routines and, every so often, it would corrupt the directory. I didn't see anything obvious and no one was maintaining the system. I had ported my Meta2 compiler-compiler front end to it and there was a version of FemtoBasic that ran on it. It needed a real editor and maybe a version for a VGA display to get a bigger screen for editing ... all of that for the P1. It could be easily ported to the P2 with major improvements given the P2's resources. On the other hand, a Raspberry Pi or BeagleBone or similar board would make a great development base for the P2 with the P2 available as the I/O and peripheral co-processor.
Because it's possible, of course...
Remember all those graphics demos people made for the Propeller?
They were mostly made to prove that it could be done...
I still have a certain .mp3 with a Propeller singing in harmony with itself, courtesy of Chip's genius...
There are books on how to best utilize the 'fast' coding functions of the 'Zero Page' addressing in the 6502...
I'm by no means a great programmer, but I KNOW that anyone who isn't 'pushed' by having to work on a limited system is never going to be the next Knuth, Torvalds or Chip...
When I see today's 'shields' people design for Arduinos and other SBCs I feel like digging out the Clue-by-Four and well... make my opinion known...
As all I see are modules that can hardly ever be used together, because EVERYONE uses the same I/Os, and no one thought as far as adding some switches or jumpers... They just seem to think that Plug'n Pray will work on these, too...
Incidentally, a good desktop computer today is more powerful than the CM-5 supercomputer that was ranked as #1 on the TOP500 list in 1993.
Where do all the cycles go?
I mean a traditional PC. Those big, ugly, beige boxes with fans whirring and rattling around inside, mostly full of empty space. Horrible pieces of cheapskate engineering.
No, I didn't think so. What our generation fell in love with was much smaller and nicer: C64s, BBC Micros and so on.
The survival of Apple and the Macs shows this. Sadly, only those with money were in a position to avoid the PC ugliness.
The laptop became the PC a while ago. At least they are not huge and noisy. Still pretty horrible though.
So the tablet generation are only skipping all that hideousness now that they can; it's cheap enough to escape.
As I said, what we really want is big screens, nice keyboards and a comfy chair to work in. The compute engine can disappear. It can be distributed elsewhere. It can be accessible from anywhere.
I never thought about it that way, but your comment brings back all the impressions I had when I built up my first MSDOS PC. It was clunky, big, slower than my nice Apple ][ for some things, and using it was a total pig.
I love my laptops. I really love my Mac. Jobs was right about it. All of it. Fought that for a while, going to big stuff, like SGI, but you know what? I eventually gave up the big IRIX boxes, because I can't take them with me. (and I miss IRIX big. Still.)
BTW, my first laptop was a throwaway at work. The sysadmin we hired tried to fix a broken power button on it. It was a Pentium 90 or something, full color display, running Windows 98. I put Red Hat on it.
Found a bag full of all the parts! LMFAO! So, I took it all home, attached wires to where the broken part of the board provided the power signal, reassembled all of it, and just made a pressure switch out of layers of electrical tape. It ran fine for years. Gave it to somebody trying to go to school. Ran for years more. (they wanted Windows back though)
Anyway, I did everything I could on that thing, because it was where I was. Funny that.
Re: Computing elsewhere.
Well, for some things yes. E-mail and all the basic work comms? Yeah. I don't care, and I've got local software for my things on my terms. No worries.
Other things? No, I kind of want the computing where the work is, and it doesn't have to be all that much computing either.
It seems simplistic to say that the P2 won't be commercially viable if it doesn't match the price of the LPC4370. The two are substantially different, philosophically and practically. Also, who can know of all commercial applications in existence? I'm surprised at the varied uses to which the P1 is put. It has done well in an environment of dirt cheap ARMs.
I said *much* higher. No, the P2 does not *have* to *match* the LPC4370 (or many similar parts), but it won't do well if it's priced much higher. I.e., if someone can get the LPC4370 for $10 or less and the P2 is priced at $15 or $20 or...
If you're going after commercial design wins, price matters. Period. No amount of wishful thinking changes that.
The LPC4370 shows $5.34 @ 1,000 pieces vs. the Prop 1 at $5.68 @ 500.
So I'd say there is not even a remote chance of the P2 being priced lower than the LPC4370.
I think what Bill was getting at, and what I said, is that it has to be at least competitive. If it's 1.5 or 2x (or more) the price of parts that can do comparable things, that's going to be a problem.
IIRC Chip said the P2 may in fact be a tiny bit cheaper in raw cost, but there is a large R&D cost to recover first. I am sure someone wanting 1M pcs could negotiate a great deal with Ken.
Unfortunately, it works that very same way with other parts as well. Pricing, in large-volume designs especially, is largely a comparative function. That is, you shop around.
Actually, I can understand people wanting to use an interactive language like Forth, Python, or JavaScript that has an immediate mode (REPL). Even if I didn't write my code on the target platform, I'd still like that kind of language for debugging and testing. I don't think I'd want a compiler-based language on the target though, including Spin or C.
Ditto. I'd want an environment - like Forth - that runs on the metal and allows me to program interactively, except with more libraries and code readily available to go along with it. Anything more than that is, in my view, a mere toy. And since we're not really talking (so some say at least) about the hobbyist market now, that would seem especially worthless. Granted, it might be cool for some to play with - but that's about as far as it goes. Dreams of competing (even poorly) with RaspPi, BeagleBone, etc.? ... you can forget those.
Actually, I can understand people wanting to use an interactive language like Forth, Python, or JavaScript that has an immediate mode (REPL).
Since this is now officially the whackadoodle thread =D , how about making Prop into a multi-core stack machine such that it runs Forth like a native assembly language (similar to the J1 FPGA core or some 4-bit micros that I remember from years ago). Forth already does surprisingly well at performance, but then you'd have interactive programming (a mini-OS of sorts) with virtually no performance hit at all.
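For the curious, the whole idea boils down to an inner loop like this Python cartoon; the word set is a tiny invented subset, nothing like the J1's actual single-cycle instruction encoding:

```python
# Cartoon of a Forth-style stack machine: a data stack, a
# dictionary of words, and an interpreter loop that doubles
# as the REPL. The word set here is a tiny invented subset.
stack = []
words = {
    "+":    lambda: stack.append(stack.pop() + stack.pop()),
    "*":    lambda: stack.append(stack.pop() * stack.pop()),
    "dup":  lambda: stack.append(stack[-1]),
    "drop": lambda: stack.pop(),
    ".":    lambda: print(stack.pop()),
}

def interpret(line):
    for tok in line.split():
        if tok in words:
            words[tok]()            # execute a word
        else:
            stack.append(int(tok))  # or push a literal

interpret("2 3 + dup * .")          # prints 25: (2+3) squared
```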
Since this is now officially the whackadoodle thread =D , how about making Prop into a multi-core stack machine such that it runs Forth like a native assembly language (similar to the J1 FPGA core or some 4-bit micros that I remember from years ago). Forth already does surprisingly well at performance, but then you'd have interactive programming (a mini-OS of sorts) with virtually no performance hit at all.
If you're going after commercial design wins, price matters. Period.
Remember how I prefaced this suggestion, however.
This is going to have to come in stages as the price curve bends.
Existing Parallax users are highly likely to adopt it. Education, etc... will work.
People making specialized or niche products may well adopt as well. Being able to differentiate and/or rapidly innovate is very important. If they are selling to early adopters in their niche, this will work too. As they move along the curve, volume comes up, ideally Parallax recovers, and price can drop with them. Nice. If this happens. I think it's a reasonable expectation. Price isn't always primary in this case. Time to market often is. Margins in specialized niches are way different and are that way for different reasons than the typically thin ones found in consumer electronics.
As the P2 moves along its curve, the early adopters will need to fund the later majority, and as that happens, larger scale commercial adoption will be increasingly possible.
I'm not very good at this marketing/pricing/economics thing. Often it seems the price of a thing is nothing to do with what it cost to develop or manufacture. Fashion accessories and super-cars have to be expensive because some people would not be seen dead in anything not known to be expensive.
And what about this supply and demand thing? What a paradox. They say if demand outstrips supply you can command a higher price. OK. But if you have a higher demand you can make more. If you make more economies of scale bring the costs down. You can lower the price and cause bigger demand. And perhaps increase profits as a result.
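A toy model with invented figures shows the paradox in action:

```python
# Toy model of the paradox, all figures invented: unit cost falls
# with volume, so a lower price at higher volume can earn more.
nre = 1_000_000                      # one-time development cost, $

def unit_cost(volume):               # crude economies-of-scale curve
    return 3.00 + 50_000 / volume

for price, volume in [(20.0, 50_000), (12.0, 200_000), (8.0, 800_000)]:
    profit = (price - unit_cost(volume)) * volume - nre
    print(f"${price:5.2f} x {volume:>7,} units -> profit ${profit:>12,.0f}")
```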
So how does one know where to pitch the price of a new product. Sounds like ear to the ground, finger on the pulse, gut feeling and some tea leaves are required rather than any serious calculation.
So what price the P2?
I would suggest that there is only one other chip in the world that is comparable to the P2 and that is made by XMOS. (There I said the X word again). My gut would say that the P2 should be price competitive with the corresponding device from the XMOS range.
Ugh. I don't really understand this obsession with running all of the tools on the P2. There are lots of things I've come to rely on in my development environment like nice editors, Google, web browsing and email, etc. I don't think all of those things will be available on a P2-hosted development environment without a lot of work reinventing the wheel. Also, if you ever did do that you'd end up with something as complex as Mac OS X / Windows / Linux anyway. I'd be willing to bet that even if we could run PNut or even a C compiler on the P2, very few people would actually develop that way. Other than the various Forths, I don't see too many people using the tools we currently have to program directly on the P1.
Basically, I agree with you. I already have to tote my laptop around to program or diagnose problems, and the majority of the software needed to do that runs on Windows. On the other hand, every time I have to get a new laptop I also end up having the latest version of Windows on it, so I can see the appeal of having the tools run on the target processor. Of course even that has its downside. Imagine working on a dozen different products, each with their own unique OS and development system.
It will. Even us techies don't actually want a PC. What we want is a big screen, keyboard, mouse etc to work with whilst sitting in a comfy chair. The compute engine does not need to be that big ugly PC box or laptop. We may well have our own compute engine in the closet.
Interesting...
Why in the closet? I can't understand why there isn't a reasonably priced big screen monitor with the computer built in to the back of it. Adding an inch or so of depth to the monitor would provide plenty of room to mount all the hardware needed, provide enough surface area to dissipate the heat generated without fans, and get rid of all the external cabling.
Take:
So how does one know where to pitch the price of a new product. Sounds like ear to the ground, finger on the pulse, gut feeling and some tea leaves are required rather than any serious calculation.
...and:
Often it seems the price of a thing is nothing to do with what it cost to develop or manufacture.
...with:
Fashion accessories and super-cars have to be expensive because some people would not be seen dead in anything not known to be expensive.
...realizing that first and foremost: Products are worth what people will pay for them.
In reverse order:
Fashion accessories are not expensive because those people wouldn't be caught dead in something not expensive. They are expensive, because they convey status, and status has very high value to some people, and near zero to others. The actual product manufacturing cost has very little to do with this dynamic, and in fact you can see lame attempts to convey this with precious metals and gems being globbed onto something that is otherwise ordinary.
Paris Hilton status is very different from, say Elon Musk status.
Paris is about the money, she has it, and shows it, and the product manufacture is actually almost completely marginalized. Few others see value in this. Gaudy, trashy, etc... Now you will also see ordinary people making ordinary things gaudy with cheap materials. They are signaling their envy of the money as status. Both Paris and the ordinary poser wannabe will pay MORE for a gaudy thing for the signals it conveys, not what it does or how it's made, etc... The difference really is Paris can afford real gems. Ok then..
The other more respectable fashion trend is all about style relevance and status. People buy expensive designer clothing at very high margins because they want to be first with it, trendy, hip, etc... This works about like the gaudy stuff does, but there often is a more respectable element of style, and all that speaks to people. A really great suit is one case where somebody wants to show status, style, relevance, but not be trashy. Materials, quality of construction, engineering (yes, there is engineering in clothing; after all, the Playtex women made NASA the best space suit) do matter far more in this case.
Notably, both of these cases include forms of value that are disconnected from the actual manufacturing, which means worth is in the eyes of the buyer, leaving the seller to establish that in their mind and justify price.
Elon sells very expensive Tesla cars. Like Apple, Tesla is adding value throughout the whole experience, including high tech materials, manufacturing, etc... Not trashy or gaudy. Just high value. The Tesla needs to cost a lot, because doing that is the only way to fund getting to the cheaper Teslas for everybody. Early adopters buy because they want the value of the Tesla NOW, not 10 years from now. Very different from the gaudy Paris Hilton type value.
Apple Computer is an easy example of high value that incorporates more than manufacturing. Apple adds value in their design, quality of manufacture, software environment, purchase experience, in short, everything, and Apple asks for a return on that value added in the form of a higher price for its products. Apple has somewhere close to a 20 percent profit margin on its business where HP has something closer to 8 percent.
Apple and HP could technically spend the same manufacturing budget and ship functionally identical machines. Let's say they do that. Should they cost the same?
Absolutely not.
If Apple spends money on design, or maybe just spends on more savvy people to do design, what they render with that budget will be more refined, ergonomically pleasing and functional, looks better, and so on. Apple would also do the work on simple packaging, lean purchase experience, and so on. Finally, there is the software ecosystem out there.
So these two functionally identical machines from a computer stand point are capable of the same throughput, display resolution, storage, etc...
Some of us do not value anything beyond that and see thicker margins as excessive and would not be buyers. We would buy the HP and be pleased that we got optimal capability for a low price. Others see their interaction experience differently. Not having that "HP Care" thing nagging you all the time is worth some money. Seriously, Sony learned this when people would pay them NOT to include the garbage with their PC. Not doing that was worth about $50/machine. Isn't that notable?
Now here's the interesting part. Say Apple and HP sell these machines for a couple of years. HP will have sold about 5 times as many as Apple would have. HP ends up doing more warranty service, software updates, and all the work that goes along with moving a product to people, and they do it for a lower margin per machine.
Per transaction, HP earns a lot less.
Apple will have moved considerably fewer machines. However, they do that same work at a much higher margin!
Per transaction, Apple earns a lot more. Additionally, they earn money through the software offerings, etc... where HP wants to do that with printers and that ink we all know costs too much, etc...
At the end of it, Apple will have done a lot more work, and harder work up front, but they get higher margins and a bigger footprint of sales.
Apple is extremely well capitalized today. HP? Not so much.
So there is your case of once again, price being disconnected from manufacturing costs. Now it comes time to make the next machine. Apple being well capitalized can do the same awesome job again, because they can afford to. HP must operate leaner, because they can't afford to.
This is what Steve Jobs realized early on. Things are worth what people will pay for them, and adding value they recognize adds to the margin per sale, which means a better funded business overall, which means having enough to continue to make fine products and no race to the bottom type things getting in the way.
Again, things are worth what people will pay for them. On the product manufacturing side, doing everything you can to add value to your product means you do everything possible to get a better return on your product. It's the return per product that you want, not the lowest price, unless your business model is to sell to everybody possible. Often, that's not the right choice. Depends on your product and what value you can add.
I very frequently see mass consumer goods models applied to discussions on price and P2 applicability generalized as "the way it all works" in this forum, when the truth is niche, higher quality, higher value products do not work in the same way that mass consumer goods work.
That should explain my post above in that the initial P2 price curve will be suitable for some kinds of efforts and not others.
Frankly, I see the P2 as the iPod / Apple of microcontrollers. Parallax has a lot of design in the thing, and Parallax will need to ask for that value in order to fund doing the next thing. Due to long life cycles, price will come down, Parallax makes the P3, and the cycle continues, just like it did on P1, which was more money at first, and is now very nicely priced.
The expectation that it meets mass-market device pricing right out of the gate is a completely unreasonable one. This isn't to say it's going to be a ton of money, but the product needs to fund the business as well as fund the next product, or we won't get a next product that is as cool as the one we have now.
Which brings me back to this one:
So how does one know where to pitch the price of a new product?
The short answer here is to add up all the value added, manufacturing costs, development returns needed to fund the next thing, etc...
Step one.
Step two is to understand what those values are worth to potential buyers. Maybe it saves them time. Maybe they get to market quicker. Maybe it's easier to use. Whatever. Maybe the educational materials mean it can be adopted easier.
Then price based on what that worth means in dollars. Sometimes you have to ask people this. Other times you can compute it based on metrics. You may be able to infer it too, based on other products, or some understanding of the buyer.
Finally, price at the higher end of that range and release the product to early adopters and engage them to tune up the price, or add more value to maintain it, depending on the scenario.
That's the essence of it. I've been through this a few times and it's an intense exercise!
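Crudely, with invented numbers, those steps look like:

```python
# The three steps above as crude arithmetic; all numbers invented.
manufacture = 4.00          # step 1: per-unit cost to make it
nre_recovery = 2.00         # step 1: development cost amortized per unit

value_adds = {              # step 2: what the extras are worth to a buyer
    "time to market": 3.00,
    "docs and education": 1.00,
    "tool ecosystem": 1.50,
}

floor = manufacture + nre_recovery
ceiling = floor + sum(value_adds.values())
# Step 3: open near the top of the justified range, then tune with
# early adopters (or add value to hold the line).
print(f"justified range: ${floor:.2f} .. ${ceiling:.2f}")
print(f"early-adopter launch price: ${0.95 * ceiling:.2f}")
```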
So what price the P2?
I would suggest that there is only one other chip in the world that is comparable to the P2 and that is made by XMOS. (There I said the X word again). My gut would say that the P2 should be price competitive with the corresponding device from the XMOS range.
Really, from a business standpoint, it's only about what is comparable for my *particular* application. That is to say, what options do I have to do what I want (or need) to do, and how much do they cost? If significant volume is involved, this latter becomes an even more important question.
But XMOS would seem to be one competitor to look at closely when trying to evaluate this stuff. Any part (or sub-family) in particular we should be considering?
I can't understand why there isn't a reasonably priced big screen monitor with the computer built in to the back of it. Adding an inch or so of depth to the monitor would provide plenty of room to mount all the hardware needed, provide enough surface area to dissipate the heat generated without fans, and get rid of all the external cabling.
Apple makes these. Big monitor that is a computer. Expensive, but really nice if the form factor is your thing. For many, it's not.
But XMOS would seem to be one competitor to look at closely when trying to evaluate this stuff. Any part (or sub-family) in particular we should be considering?
Very strongly agreed. I would also evaluate where and how they added value and how that impacted their sales. Then look at P2 and see how and where that value add is optimal and likely different from what XMOS did.
XMOS and Parallax aren't the only multicore players anymore. Freescale, TI and others have gotten into this arena in a big way. Parallax won't be able to claim uniqueness as a selling point. Yes, the other multi-cores are quite different, but they are out there. IMO it's going to have to be won on technical merits and development tools, ease of coding, etc.
Cost? Well if you're selling chips to instrumentation makers (industrial, medical, etc.), their devices that are used professionally are often quite expensive, especially the medical devices. I don't think a $2-$6 price difference would be a major factor. I suspect the MCU chosen will mostly come down to package, capability, ability to sell the chip to the PHB, etc.
Consumer devices are another category altogether, it seems to be a cut throat arena where price matters and every big player is fighting for a piece of the action because it means millions of chips sold.
I can't understand why there isn't a reasonably priced big screen monitor with the computer built in to the back of it.
Why not indeed? I think they are called "smart TVs", if only we could get at the Linux OS they are running.
Point is I want my work and my stuff to be accessible from anywhere in the world I happen to be. I want it in the "cloud", but I want that cloud to be mine. It would probably end up in the closet.
potatohead,
Products are worth what people will pay for them
I can go with that oft-said phrase. But when you are launching a new product, especially a new product unlike any other, how on earth do you know what people will pay?
Pitch it too high and you make no sales, you may have gone bust before you realize your mistake.
Pitch it too low and you lose your profits and go bust.
KC_Rob,
Any [ XMOS ] part (or sub-family) in particular we should be considering?
No idea just now, I have not been following XMOS for a while now and I believe they have new devices out since I last checked.
XMOS actually ticked me off a bit. Some marketing droid there decided it would be a good idea to rename "cores" as "tiles" and hardware thread scheduling as "cores". (Then in small print "logical cores"). They applied this to all their marketing material and it seemed like a big dishonest sham to me.
rod1963,
Which Freescale, TI, etc. multicore chips are you referring to? I know we have multi-ARM-core machines and such, but those mostly occupy a totally different space. Then there are the chips that come with an ARM and some PRUs (or whatever they are called) to handle the real-time work. I'm not sure how the comparison goes there.
No idea just now, I have not been following XMOS for a while now and I believe they have new devices out since I last checked.
I get the impression that you're the most familiar with them here, which is why I asked. I've only looked at XMOS superficially, so I'm in no position right now to even guess.
Comments
Others also have this obsession. For example you can now program your tiny micro-controllers in Python or JavaScript, no need for a PC at all:
http://www.espruino.com/
http://www.kickstarter.com/projects/214379695/micro-python-python-for-microcontrollers
Strangely enough, over on another thread the problem of how to program Propellers from iPads is posed. Well, if the Prop contained it's own editor/compiler/interpreters all the iPad would need is a terminal app and you are in business. And you don't give up you web surfing, email, or Parallax
forums:)
It's an idea whose time as come as PC are dipping out.
If the compiler is very small and fits in the device, along with some kind of editor why not? Spin for example is small and fast, it would be almost as
interactive as JS or Python.
These devices are now more capable than the CP/M desktop machines we managed with quite well all that long time ago. True. At least you need a dumb terminal.
And that was my point. What we are seeing here is the fading away of the PC as people adopt the basically useless "dumb terminals" called tablets and phones. If you want to do anything it has to be done on a remoter server somewhere. In this case the micro-controller is he server (sort of) serving up it's own dev system thus making it independent of any particular dumb terminal, iPad, Android, etc.
This is what the "program Prop on iPad" thread is facing.
Someone in that educational institution might soon decide that perhaps they could just skip the Propellers for their robotic experiments and use MicroPython or TinyJS. All they need is that terminal app on the iPrison.
That's true but what I expect from a development machine has changed since the CP/M days. I wouldn't want to go back to CP/M as a development environment. Yes but serial ports are something that is rapidly disappearing from PCs these days. Also, I don't think Windows ships with Hyperterminal anymore so you'd at least have to install some terminal program before you could even talk to your target. If you're going to do that, why not install an IDE? I'm not sure that the PC will ever completely disappear. However, we may go back to the days when only business users and hackers have actual computers and everyone else uses an appliance like a tablet or phone. However, isn't the terminal exactly what you can't get on the so-called iPrison? Apple disallows the Bluetooth SPP?
Anyway, I'm not opposed to programming on the target. I'm currently working on a C-like bytecode compiler with a REPL that will run on the Propeller. I just don't think it will be the most common way of programming micro controllers.
Point is it matters not what machine or OS the user has. He does not even need to own a machine, just surf to the target and program it.
Potentially the editor and compiler could run in the browser having been served up from the target.
We could already imagine the Open Spin Compiler (for example) being converted to JavaScript by Emscripten and served up to a users browser from a Propeller ! It will. Even us techies don't actually want a PC. What we want is a big screen, keyboard, mouse etc to work with whilst sitting in a comfy chair. The compute engine does not need to be that big ugly PC box or laptop. We may well have our own compute engine in the closet. Yeah, that's shitty. But as I said the browser and net is the modern dumb terminal. Interesting...
I don't think the P1 is large enough to make it practical. Now I did and will use P1 for some basic things I don't have dedicated equipment for. It's been a signal generator a few times. And it's been a logic analyzer too. If Forth and I got along better, I would very likely have setup a P1 on a dedicated board for those purposes and I would use it interactive, maybe doing some other things too. P1 is actually a fine Forth computer. The scale of it happens to work out well, unless one wants graphical displays. Then it's a bit cramped.
Re: The various BASICS
Same problem. Given how the P1 performs, an interactive basic is pretty limited. I did enjoy Bean's basic.
It's interesting to note the differences between people who used 8 bit computers and those who really didn't. Back in the day, that 8 bit computer was there on my bench, and it did some of the things a P1 would today. It wouldn't make a very fast logic analyzer, but I could get signals out of it, or write short programs to control things. It was the general purpose calculator in a pinch. Used it a lot for that, particularly graphing various things. I had a library of BASIC and Assembly Language programs that would output TV alignment and test signals. One of my computers was very suitable for that purpose. Sounds and signals. Graphic libraries for math. Load one of those, input it, adjust the range, run it, etc... Extremely clunky by today's standards, but back then it seemed small enough.
That computer was my interface to the world. A crude one, but effective. I could do things to learn things, and I did. And those things I learned took me right out of being poor and I do well today. Basic computing that is not so isolated from the world, a library or Internet connection today is all somebody needs to feel some passion, get after it, nail the skills and go sell them. Beautiful.
As the PC grew up, the distance between me and the various outputs it has simply grew. And the PC is kind of expensive, should I burn out a port or something... I use the PC for a lot of things now, and I have a few of them,. All laptops now. I'm OK with an isolated USB connection, maybe audio in some cases, but not much else. And I've got one jittery, mostly useless USB port on my Lenovo from a mistake. Given the cost of that machine, no way am I doing that again. And service can be painful. Rebuilding the Mac was an adventure! But here I am typing on it, after a couple days work. I don't want to do that again either. Time better spent doing fun stuff.
For larger scope things, a PC is a must for me. The OS, windows, storage, speed, and all those other things, are great for development. I plan to continue that.
So why on chip?
A few reasons actually:
One is I do miss that more connectable workbench computer that can take some damage without the cost and utility issues. When a laptop goes down, lots of stuff goes with it. The things get used for games, various accounts, communication, and a ton of other things in addition to programming. Seems excessive to have too many computers, and to have one for development only, though I do have one that is old for P1 development. It's what I use most of the time, and it's not used for much else. The thing is gonna die though. Soon. Then I'll be back to multi-purpose computing in that use case.
Was reading an article a while back on some game developers out there who still have their 8 bit machines. They can reach over, type up something lean and know it will just happen, then continue on while that machine does something they learned how to do so many years ago, it's dead simple.
I will probably take a P2, set it up with storage, a couple of displays, and a variety of I/O setups for sampling things, digital / analog, various outputs, etc...
From there, I may just turn it on and leave it on, able to input things, write short programs, display, etc... and do so in a lean, simple, consistent way. A recent experience with the P2 monitor was notable. Just left it on, moving data into and out of it. Start a COG doing something, stop that COG, capture data, or build a few things, put them in RAM, when it all runs, output it all back to the terminal. Had I some storage and a bit more options for connectivity than I do on the FPGA board, I would have likely just wrote it all out to the SD card for use later on. Never turned the chip off. I got a small look at what the Forth guys do all the time.
Now, I've noted a trend with Millenials and some other people who want to operate in a leaner way. They don't have PC's, or if they do, they have one and they don't use it much. It's like that appliance needed for a few things, but generally avoided like a land line is for many people today.
Lots of them don't want the complexity and hassle. Cars? Maybe. Land line? Never. Computer? Maybe, but a phone and tablet does enough. They want to live and be leaner than we are. Not sure why that is. Maybe it's cost. Here in the US, cost is getting to be an issue for a growing number of people. The poltics of that are for another day, but the reality of it suits this discussion perfectly, I think.
I can tell you right now, if I were say a late teen or 20 something, I would set one of these up along with all my goodies either salvaged or bought, and it would get used regularly, because I probably would not have a PC, nor a car, and I would be learning and doing stuff I want to do lean, avoiding as much of the daily time suck as possible. And there is your answer to Ken's "but we need to work with an iPad" right there. We do need to work with an iPad, but some of those students will do other things, some will use a PC like we do, etc... Just having the choice out there is worth doing, if nothing else.
I grew up dirt poor. Every cool thing I had, I either repaired or built. When I did finally get computers, I got them on work trade and I used the Smile out of them. Now I don't have any regrets on that. So many skills I have today came from that, it's laughable when I compare to my peers today, who often have no real clue. I think a little different because of that, and I think I relate to the millenials fairly well too. They are experiencing similar things, and for the smart ones so much is being thrown away too! Compared to what I could get my hands on, sheesh! No contest. A day scrounging thrifties and your neighbors can yield a total playground! Game on.
They do have phones, tablets, pads.
And those things can be painful to develop with. A young person may well find an on chip system practical if it's packaged up as a product. They can get into electronics mode, and there the thing is. If it's got an SD card slot or two, enough RAM to hold a lot of data or programs or display things, it may well make sense to use it instead of fiddling around with the pad and using things at a distance.
At least the PC can seem pretty close. Direct connection, it's fast, you compile, press the button and things happen. The pad seems far away by comparison. Droid can be either, though I've not explored doing so much, mostly because I don't want an expensive and necessary phone getting mixed up with the electronics, where sometimes I kill things. Dead.
Funny too, I was doing P2 testing when suddenly executables from Chip and Baggers no longer worked, though they could run mine. Turns out building for XP is something like building for Snow Leopard on Mac OS. Gotta know to do special things, or it won't work on the older machines anymore. PITA It may make sense to make it all very lean, so that problem kind of goes away for those who want that, or maybe need that. Who knows?
Heck, a P1 able to telnet, or run a nice serial terminal, connected to a UNIX PC, suddenly gets kind of useful in simple ways. Build something on the PC, "squirt" it over to the P2 via the monitor, and if it's data, display it, or if it's a program, run it, etc... It could be a console when not doing anything else. I like having those, because I actually do look at the SYSLOGS of various kinds while doing stuff.
It's expensive or sometimes difficult to obtain and use programs for the PC to do exploratory things. Here's one: I want to know what the limits of my ears are and then other people's ears are regarding audio related things. Like say I want to dither between pitches, or connect a few speakers and work with psycho-acoustic imaging. (which I actually do want to do)
On the PC, it's actually kind of hard to do that. Licenses are needed for the surround capability. The audio chain needed to make all that work is laborious too. I could have that going on a P2 with similar efforts and really learn some stuff that I could then apply to a larger project. An audio only game environment intrigues me.
Now, using the two together might be useful too. Here's another one audio related: Say I want to do an mp3 encode of something, then pick that apart and isolate related groups of frequencies and output them on separate tracks, a few at a time, one speaker each. That workbench computer is going to be simple to work with compared to the alternatives. No licenses, no BS. And I could graphically display some info fairly easy too. Real time. No GL needed.
Quite simply, there are some sounds in my head I would like to get out. I've either heard something I want to explore some, so there it sits in there waiting, or I've imagined something I am curious to see if it's perception for real is the same, and for me, or others. If it's not a simple stereo sampled thing, it gets kind of hard. Doesn't have to be.
Oh, here is one other subtle thing. A couple of those sounds in my head are beyond what my ears will do now. I'll need to "see" those, then I can rope somebody into the scene and ask them what they heard. Nifty, lean, cheap.
So it's things like that for me. Lots of other examples abound. The PC will still be there. I like the PC, and it's got serious advantages, but there are also a lot of barriers today that really don't need to be there. I'll use the PC for a lot of development. But I know I would do more core things interactive, maybe in combination with a PC too. And I would do that because I would know exactly what is happening on the real end of things, where if I get a bunch of software and expensive gear, I may not, or there are limits on it that aren't appropriate.
Because it's possible, of course...
Remember all those graphics demos people made for the Propeller?
They were mostly made to prove that it could be done...
I still have a certain .mp3 with a Propeller singing in harmony with itself, courtesy of Chip's genius...
There are books on how to best utilize the 'fast' coding functions of the 'Zero Page' addressing in the 6502...
I'm by no means a great programmer, but I KNOW that anyone who isn't 'pushed' by having to work on a limited system is never going to be the next Knuth, Torvalds or Chip...
When I see todays 'shields' people design for Arduinos and other SBCs I feel like digging out the Clue-by-Four and well... make my opinion known...
All I see are modules that can hardly ever be used together, because EVERYONE uses the same IOs, and no one went so far as to add some switches or jumpers... They just seem to think that Plug'n'Pray will work on these, too...
Incidentally, a good desktop computer today is more powerful than the CM-5 supercomputer that was ranked as #1 on the TOP500 list in 1993.
Where do all the cycles go?
I mean a traditional PC. Those big, ugly, beige boxes with fans whirring and rattling around inside, mostly full of empty space. Horrible pieces of cheapskate engineering. Did anyone ever love those?
No, I didn't think so. What our generation fell in love with was much smaller and nicer: C64s, BBC Micros and so on.
The survival of Apple and the Macs shows this. Sadly, only those with money were in a position to avoid the PC ugliness.
The laptop became the PC a while ago. At least they are not huge and noisy. Still pretty horrible though.
So the tablet generation is only skipping all that hideousness now that it's cheap enough to escape.
As I said, what we really want is big screens, nice keyboards and a comfy chair to work in. The compute engine can disappear. It can be distributed elsewhere. It can be accessible from anywhere.
I never thought about it that way, but your comment brings back all the impressions I had when I built up my first MSDOS PC. It was clunky, big, slower than my nice Apple ][ for some things, and using it was a total pig.
I love my laptops. I really love my Mac. Jobs was right about it. All of it. Fought that for a while, going to big stuff, like SGI, but you know what? I eventually gave up the big IRIX boxes, because I can't take them with me. (and I miss IRIX big. Still.)
BTW, my first laptop was a throwaway at work. The sysadmin we hired had tried to fix a broken power button on it. It was a Pentium 90 or something, full color display, running Windows 98. I put Red Hat on it.
Found a bag full of all the parts! LMFAO! So, I took it all home, attached wires to where the broken part of the board provided the power signal, reassembled all of it, and just made a pressure switch out of layers of electrical tape. It ran fine for years. Gave it to somebody trying to go to school. Ran for years more. (they wanted Windows back though)
Anyway, I did everything I could on that thing, because it was where I was. Funny that.
Re: Computing elsewhere.
Well, for some things yes. E-mail and all the basic work comms? Yeah. I don't care, and I've got local software for my things on my terms. No worries.
Other things? No, I kind of want the computing where the work is, and it doesn't have to be all that much computing either.
If you're going after commercial design wins, price matters. Period. No amount of wishful thinking changes that.
Remember how I prefaced this suggestion, however.
This is going to have to come in stages as the price curve bends.
Existing Parallax users are highly likely to adopt it. Education, etc... will work.
People making specialized or niche products may well adopt it too. Being able to differentiate and/or rapidly innovate is very important. If they are selling to early adopters in their niche, this will work too. As they move along the curve, volume comes up; ideally Parallax recovers its investment and the price can drop with them. Nice. If this happens. I think it's a reasonable expectation. Price isn't always primary in this case. Time to market often is. Margins in specialized niches are way different, and are that way for different reasons than the typically thin ones found in consumer electronics.
As the P2 moves along its curve, the early adopters will need to fund the later majority, and as that happens, larger-scale commercial adoption will be increasingly possible.
And what about this supply and demand thing? What a paradox. They say if demand outstrips supply you can command a higher price. OK. But if you have a higher demand you can make more. If you make more economies of scale bring the costs down. You can lower the price and cause bigger demand. And perhaps increase profits as a result.
So how does one know where to pitch the price of a new product? Sounds like ear to the ground, finger on the pulse, gut feeling and some tea leaves are required, rather than any serious calculation.
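You can at least see the shape of the paradox with a toy model. Here's a Python sketch where demand falls linearly with price and unit cost falls with volume; every number in it is invented, so the only point it makes is that profit peaks somewhere in the middle rather than at the highest price:

def profit(price, base_cost=6.0, scale=0.00002):
    demand = max(0, int((20.0 - price) * 10000))  # invented linear demand curve
    unit_cost = base_cost / (1 + scale * demand)  # invented economies of scale
    return (price - unit_cost) * demand

best = max((profit(p / 2.0), p / 2.0) for p in range(2, 40))
print("best price $%.2f -> profit $%.0f" % (best[1], best[0]))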
So what price the P2?
I would suggest that there is only one other chip in the world that is comparable to the P2 and that is made by XMOS. (There I said the X word again). My gut would say that the P2 should be price competitive with the corresponding device from the XMOS range.
Basically, I agree with you. I already have to tote my laptop around to program or diagnose problems, and the majority of the software needed to do that runs on Windows. On the other hand, every time I have to get a new laptop I also end up with the latest version of Windows on it, so I can see the appeal of having the tools run on the target processor. Of course, even that has its downside. Imagine working on a dozen different products, each with its own unique OS and development system.
Why in the closet? I can't understand why there isn't a reasonably priced big-screen monitor with the computer built into the back of it. Adding an inch or so of depth to the monitor would provide plenty of room to mount all the hardware needed, provide enough surface area to dissipate the heat generated without fans, and get rid of all the external cabling.
First and foremost: products are worth what people will pay for them. In reverse order:
Fashion accessories are not expensive because those people wouldn't be caught dead in something inexpensive. They are expensive because they convey status, and status has very high value to some people and near zero to others. The actual product manufacturing cost has very little to do with this dynamic; in fact, you can see lame attempts to convey status with precious metals and gems globbed onto something that is otherwise ordinary.
Paris Hilton status is very different from, say Elon Musk status.
Paris is about the money: she has it and shows it, and the product's actual manufacture is almost completely marginalized. Few others see value in this. Gaudy, trashy, etc... Now, you will also see ordinary people making ordinary things gaudy with cheap materials. They are signaling their envy of the money as status. Both Paris and the ordinary poser wannabe will pay MORE for a gaudy thing for the signals it conveys, not for what it does or how it's made, etc... The difference really is that Paris can afford real gems. OK then...
The other, more respectable fashion trend is all about style, relevance and status. People buy expensive designer clothing at very high margins because they want to be first with it, trendy, hip, etc... This works about like the gaudy stuff does, but there is often a more respectable element of style, and all that speaks to people. A really great suit is one case where somebody wants to show status, style and relevance, but not be trashy. Materials, quality of construction and engineering (yes, there is engineering in clothing; after all, the Playtex women made NASA its best space suit) matter far more in this case.
Notably, both of these cases include forms of value that are disconnected from the actual manufacturing, which means worth is in the eyes of the buyer, leaving the seller to establish that in their mind and justify price.
Elon sells very expensive Tesla cars. Like Apple, Tesla is adding value throughout the whole experience, including high tech materials, manufacturing, etc... Not trashy or gaudy. Just high value. The Tesla needs to cost a lot, because doing that is the only way to fund getting to the cheaper Teslas for everybody. Early adopters buy because they want the value of the Tesla NOW, not 10 years from now. Very different from the gaudy Paris Hilton type value.
Apple Computer is an easy example of high value that incorporates more than manufacturing. Apple adds value in its design, quality of manufacture, software environment, purchase experience, in short, everything, and Apple asks for a return on that value added in the form of a higher price for its products. Apple has somewhere close to a 20 percent profit margin on its business, where HP has something closer to 8 percent.
Apple and HP could technically spend the same manufacturing budget and ship functionally identical machines. Let's say they do that. Should they cost the same?
Absolutely not.
If Apple spends money on design, or maybe just spends on more savvy people to do design, what they render with that budget will be more refined, ergonomically pleasing and functional, looks better, and so on. Apple would also do the work on simple packaging, lean purchase experience, and so on. Finally, there is the software ecosystem out there.
So these two functionally identical machines, from a computing standpoint, are capable of the same throughput, display resolution, storage, etc...
Some of us do not value anything beyond that, see thicker margins as excessive, and would not be buyers. We would buy the HP and be pleased that we got optimal capability for a low price. Others see their interaction experience differently. Not having that "HP Care" thing nagging you all the time is worth some money. Seriously, Sony learned this when people would pay them NOT to include the garbage with their PCs. Not doing that was worth about $50 per machine. Isn't that notable?
Now here's the interesting part. Say Apple and HP sell these machines for a couple of years. HP will have sold about 5 times as many as Apple. HP ends up doing more warranty service, software updates, and all the work that goes along with moving a product to people, and they do it for a lower margin per machine.
Per transaction, HP earns a lot less.
Apple will have moved considerably fewer machines. However, they do that same work at a much higher margin!
Per transaction, Apple earns a lot more. Additionally, they earn money through the software offerings, etc... where HP wants to do that with printers and that ink we all know costs too much, etc...
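To put rough numbers on "per transaction," using the margin figures above and a made-up common selling price of $1000, purely for arithmetic's sake:

price = 1000.0                                    # invented common selling price
print("HP per unit:    $%.0f" % (price * 0.08))   # ~8 percent margin -> $80
print("Apple per unit: $%.0f" % (price * 0.20))   # ~20 percent margin -> $200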
At the end of it, Apple will have done a lot more work, and harder work up front, but they get higher margins and a bigger footprint of sales.
Apple is extremely well capitalized today. HP? Not so much.
So there is your case of once again, price being disconnected from manufacturing costs. Now it comes time to make the next machine. Apple being well capitalized can do the same awesome job again, because they can afford to. HP must operate leaner, because they can't afford to.
This is what Steve Jobs realized early on. Things are worth what people will pay for them, and adding value they recognize adds to the margin per sale, which means a better funded business overall, which means having enough to continue to make fine products, with no race-to-the-bottom dynamics getting in the way.
Again, things are worth what people will pay for them. On the product manufacturing side, doing everything you can to add value to your product means you do everything possible to get a better return on your product. It's the return per product that you want, not the lowest price, unless your business model is to sell to everybody possible. Often, that's not the right choice. Depends on your product and what value you can add.
I very frequently see mass-consumer-goods pricing models generalized in this forum as "the way it all works" in discussions of price and P2 applicability, when the truth is that niche, higher-quality, higher-value products do not work the same way mass consumer goods do.
That should explain my post above in that the initial P2 price curve will be suitable for some kinds of efforts and not others.
Frankly, I see the P2 as the iPod / Apple of microcontrollers. Parallax has a lot of design in the thing, and Parallax will need to ask for that value in order to fund doing the next thing. Due to long life cycles, price will come down, Parallax makes the P3, and the cycle continues, just like it did on P1, which was more money at first, and is now very nicely priced.
The expectation that it meets mass-market device pricing right out of the gate is a completely unreasonable one. This isn't to say it's going to be a ton of money, but the product needs to fund the business as well as fund the next product, or we won't get a next product that is as cool as the one we have now.
Which brings me back to this one:
The short answer here is to add up all the value added, manufacturing costs, development returns needed to fund the next thing, etc...
Step one.
Step two is to understand what those values are worth to potential buyers. Maybe it saves them time. Maybe they get to market quicker. Maybe it's easier to use. Whatever. Maybe the educational materials mean it can be adopted more easily.
Then price based on what that worth means in dollars. Sometimes you have to ask people this. Other times you can compute it based on metrics. You may be able to infer it too, based on other products, or some understanding of the buyer.
Finally, price at the higher end of that range and release the product to early adopters and engage them to tune up the price, or add more value to maintain it, depending on the scenario.
That's the essence of it. I've been through this a few times and it's an intense exercise!
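As back-of-the-envelope code, the steps might look something like this (every number here is invented purely for illustration):

# Step one: add up what the product must return per unit.
def floor_price(mfg=4.0, value_add=2.0, dev_recovery=1.5):
    return mfg + value_add + dev_recovery

# Step two: estimate what the value is worth to a buyer, per unit.
def buyer_worth(hours_saved=20, hourly_rate=50.0, units_shipped=100):
    return hours_saved * hourly_rate / units_shipped

# Step three: price toward the high end of that range, then tune.
lo, hi = floor_price(), buyer_worth()
print("price somewhere in $%.2f..$%.2f, starting high" % (lo, hi))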
I would suggest that there is only one other chip in the world that is comparable to the P2 and that is made by XMOS. (There I said the X word again). My gut would say that the P2 should be price competitive with the corresponding device from the XMOS range.
Really, from a business standpoint, it's only what is comparable for my *particular* application. That is to say, what options do I have to do what I want (or need) to do, and how much do they cost? If significant volume is involved, the latter becomes an even more important question.
But XMOS would seem to be one competitor to look at closely when trying to evaluate this stuff. Any part (or sub-family) in particular we should be considering?
Apple makes these: a big monitor that is a computer. Expensive, but really nice if the form factor is your thing. For many, it's not.
Very strongly agreed. I would also evaluate where and how they added value and how that impacted their sales. Then look at P2 and see how and where that value add is optimal and likely different from what XMOS did.
Cost? Well, if you're selling chips to instrumentation makers (industrial, medical, etc.), the devices used professionally are often quite expensive, especially the medical ones. I don't think a $2-$6 price difference would be a major factor. I suspect the MCU will mostly be chosen on package, capability, the ability to sell the chip to the PHB, etc.
Consumer devices are another category altogether; it seems to be a cut-throat arena where price matters and every big player is fighting for a piece of the action, because it means millions of chips sold.
Point is, I want my work and my stuff to be accessible from anywhere in the world I happen to be. I want it in the "cloud", but I want that cloud to be mine. It would probably end up in the closet.
potatohead, I can go with that oft-said phrase. But when you are launching a new product, especially a new product unlike any other, how on earth do you know what people will pay?
Pitch it too high and you make no sales, you may have gone bust before you realize your mistake.
Pitch it too low and you lose your profits and go bust.
KC_Rob, No idea just now, I have not been following XMOS for a while now and I believe they have new devices out since I last checked.
XMOS actually ticked me off a bit. Some marketing droid there decided it would be a good idea to rename "cores" as "tiles" and hardware thread scheduling as "cores". (Then in small print "logical cores"). They applied this to all their marketing material and it seemed like a big dishonest sham to me.
rod1963,
Which Freescale, TI, etc. multicore chips are you referring to? I know we have multi-ARM-core machines and such, but those mostly occupy a totally different space. Then there are the chips that come with an ARM and some PRUs (or whatever they are called) to handle the real-time work. I'm not sure how the comparison goes there.