No, I am deadly serious. JS is turning out to be a godsend around here, even on a little ARM board that is taking care of some fastish real-time stuff. If you also need secure HTTP connections and websockets, JS can do it in just a few lines of code and runs pretty damn smartly. As a bonus I get a free web server on the board to use for configuration and status pages, etc. It's a lot easier to do that in JS than in many other languages and systems, and as fast or faster.
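For a sense of how few lines that really is, here is a minimal sketch of such a status-page server using only Node's built-in https module. The certificate file names are placeholders, not our real setup:

```javascript
// A minimal sketch of the "free web server for status pages" bonus, using
// only Node's built-in modules. The certificate file names are placeholders.
var https = require('https');
var fs = require('fs');

var options = {
  key: fs.readFileSync('server-key.pem'),    // hypothetical paths
  cert: fs.readFileSync('server-cert.pem')
};

https.createServer(options, function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Board status</h1><p>Uptime: ' + process.uptime().toFixed(0) + ' s</p>');
}).listen(443);
```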
It's also a bonus that you are now using the same language all the way from the embedded box to the server to the user's browser client. Saves all that mental context switching when developing.
Anyway, I was not suggesting the language dictates the machine's word size; as you say, we have had single and double precision floating point on all kinds of machines and in all kinds of languages.
My only little point is that JS only has 64 bit float numbers. As such having a 64 bit machine probably gives you the chance to optimize the Just In Time compilation of engines like V8.
For example: Currently V8 will keep JS arrays as 32 bit integers if it is known that the elements are all ints that fit within 32 bits. This results in very fast code being compiled when operating on that array. As soon as you assign a bigger number or a float to any element of the array, the whole array has to be copied to an array of floats, involving memory allocation and deallocation, and all the code operating on the array is recompiled. Things get a lot slower.
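To make that concrete, here's a minimal sketch of the kind of code that hits this. Nothing in the source mentions 32 bits; the int-to-double transition is entirely V8's business:

```javascript
// A sketch of the behaviour described above. The int-to-double transition is
// a V8 implementation detail; nothing in the JS source says "32 bit".
function sumArray(a) {
  var s = 0;
  for (var i = 0; i < a.length; i++) {
    s += a[i];               // fast 32 bit adds while 'a' holds only small ints
  }
  return s;
}

var a = [];
for (var i = 0; i < 1000000; i++) a.push(1);   // all elements are small integers
sumArray(a);                 // JIT can specialise on the integer representation

a[0] = 1.5;                  // one float and the whole backing store goes to doubles
sumArray(a);                 // the optimised integer code is thrown away and redone
```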
So, in a few years I hope my little $50 ARM boards will be 64 bit machines and JS will fly even better there. This will happen because of the JS push into mobile devices, phones, tablets, etc.
My only little point is that JS only has 64 bit float numbers. As such having a 64 bit machine probably gives you the chance to optimize the Just In Time compilation of engines like V8.
No, the databus is already 128 bit or more - meaning the FPU is not bottlenecked. "64 bit" is mostly about address space. If you are dealing in 4GB+ data blocks with JS then you might have an argument, but, even then, it's no different to any compiled app.
For example: Currently V8 will keep JS arrays as 32 bit integers if it is known that the elements are all ints that fit within 32 bits. This results in very fast code being compiled when operating on that array.
Well, that is a bit of a cheat. You are dealing with apples-and-oranges technicalities: 1) the speed difference between ints and floats, 2) the speed penalty of larger memory usage, and 3) maybe some inefficiencies shuffling between them vs not having to shuffle when all ints are used.
And, of course, the databus on these low-power devices may be only 32 bit wide irrespective of the memory model. Prolly get similar results on the 64 bit Atoms for the same reasons.
AFAIK the de-facto standard is now: 8 bits - byte, 16 bits - word, 32 bits - long, 64 bits - dlong (double long). This is what I see most often in the literature.
bit/byte/word are really hardware terms, and long is a software term - the Propeller is the only place outside of software where I've ever seen 'long' used as a hardware term. I was pretty surprised when I first looked at the Propeller to see 'long' used that way. Anyway, I think it's better to keep the two separate. So, for hardware: bits, nibbles (aka nybbles) and bytes are pretty well set in stone these days - everyone agrees. Word is variable, but almost always (well, you could almost always leave out the 'almost' there..) refers to the natural word size of the computer. Looking back at the minis of a decade or two or three ago, the same vendor could call 16 bits a word on their 16-bit mini and then use 'word' for the 32 bit word size of their 32-bit mini (and 'half-word' for 16 bits). DEC used, as far as I recall, 'word' for 32 bits, 'double word' for 64 bits, and 'quad word' for 128 bits. The 32 bits of a 32-bit Motorola 68k were also called a word.
Going back to 'long', the industry standard for C programming (agreed among the biggest *nix vendors and others, I have a copy of a whitepaper somewhere) has been as follows for more than 20 years:
char = 8 bits, short = 16 bits, int = 32 bits, long = word size, i.e. 32 bits on a 32-bit computer, 64 bits on a 64-bit computer. To be accurate, and that's important because Windows 64 doesn't follow the standard and is an exception: 'long' shall be the size of a pointer. So, in C programming you can always do 'long a = (long) &variable;', whatever the word size. What you can't do is 'int a = (int) &variable;' because it's not portable: It'll chop the pointer in half on a 64-bit machine. How Windows programmers handle that kind of pointer manipulation I have no idea.
(never mind the uint32_t, uint8_t standard - that's from a different effort. Equally valid, but the above is also a completely safe definition of the 'old' terms these days and has been so for a very long time now).
As for addressable memory.. in my job I need all I can get. I have 16GB on my desktop but that's starting to be limiting. The data sets are now so huge, and the processing speed requirements keep increasing, so it's necessary to be able to keep those sets in memory. If I could have 196GB RAM (as one machine here can have installed) then I would.
I agree, the data bus may well be very wide and the FPU may not be the bottleneck. And yes, it's the address space we have craved, mostly.
However you are missing what I am saying about V8 and JavaScript Just In Time (JIT) compilation. In V8 if your numbers are integers that fit in 32 bits then it generates code, on the fly, to use the 32 bit integer ops of your processor, and it allocates normal linear arrays that can be indexed very quickly. So even if JS thinks all numbers are floats the V8 engine may not even be using floats in your program.
But in JS all numbers are 64 bit floats, so if your program at some point drops a float into an existing integer array, the array has to be converted to floats, a lot of allocating, copying and deallocating has to happen, and all that old JITed integer code is discarded and new FP code generated.
If your processor were a 64 bit machine though all that convoluted optimization could be avoided and performance boosted.
Now, I might still be wrong, so have a look at this fascinating video about how the V8 JS engine does all this to achieve C++ like performance: http://www.youtube.com/watch?v=UJPdhx5zTaw Then tell me what you think.
If your processor were a 64 bit machine though all that convoluted optimization could be avoided and performance boosted.
It's still going to hit the same old 64 bit FPU for real floats no matter what size the integer data registers are. At this point it becomes about databus/cache/RAM speeds and hidden datatype interchange massaging.
Hmm, you do realise that you are not talking about floats any longer right? You did start off by saying that the nature of JS using 64 bit floats made it a strong demand for a 64 bit processor. Then you switched to pushing the hidden integer features instead. And how JS's integer performance in 32 bit data registers is wonderful as long as you stay within 32 bit integer limits ... gee, maybe it's not so floaty after all ...
Ok, I can see if you are wanting to do only integer calculations and the multiplies are likely to blow out the 32 bit range (What happens to that nice clean integer, given the automatic nature, on the subsequent divide? 32 bit ints do cover a lot of normal work already.) then a 64 bit integer data register would be faster, yes. I presume this is your argument?
PS: Having watched some of that vid I see you are saying the nominal JS float datatype is officially an object/class with all the usual object baggage. Or more accurately, V8 has a special case where it stores the int into what would usually be the object pointer - using the least significant bit of the pointer to indicate whether it is a pointer or a 31 bit int. So, that's an even less fair comparison. If the float was a simple datatype the same as the int then it'd run faster with floats.
Hehe, it's interesting how a bunch of the speed trap avoidances revolve around treating the language like a typed, procedural, compiled environment. Ie: sit closer to the metal.
It's a Samsung, and it happens when I turn on GAME mode and turn off PIXEL SHIFTING. When I am close to the TV with it operating normally, and I do something like put bands of grey on it, or the program does, I see some of the bands are completely stable. Others have an odd pattern to them. At reasonable distances this all blends together and I see distinct grey bands that appear pixel perfect, but they aren't. When those two are enabled, the patterns all go away and there are simply fewer grey scale or color graduations. That may not be the intended result either. My older Plasma that died had similar settings and did not demonstrate that behavior. This one does though, and it's interesting to get a pixel for pixel completely static display. Doing that does increase the opportunity for burn in though. It's a curio for me at best, simply because it isn't apparent at normal ordinary viewing distances.
I've not thought about it too much, but this post makes me want to go and twiddle with an LCD to see if it does a similar thing, or just has more grey scales, or just renders things with less precision. I suspect some combination of the latter as the switch speed on LCD isn't anywhere close to what plasma is.
Edit: I just checked and both my Thinkpad laptops have 10 bit options. Now I've got to do a TV / CRT comparison... This might be out there more than I thought. Not sure how I get 10 bit data into it without a short program though... That's a hunting exercise for later.
In any case, 8 bit color isn't anywhere close to our own fidelity of perception, particularly with monochrome images. 10 bit gets really good, and I suspect something over that is really where the boundary is. IMHO, this is still one area where even a modest CRT will outperform modern displays. To be really nitpicky, straight-up RGB fails to reproduce a little of the color space many people can see too. Room for improvement there for sure.
Re: 64 bit
IMHO, this isn't a question of need as much as it is cost. Honestly, we don't need 64 bits for a lot of stuff, though I have an office worker to show you who will bury a 4GB machine just running MS Office and a few other content creation / information management applications. Data is getting big. Really big and really quick. That alone is enough to warrant broad adoption of 64 bit computers IMHO. It's about data first and foremost in most applications today. Remember, we are essentially capped on peak compute too. Multi-core / multi-processing of various kinds is growing because cycles per clock isn't. That has data implications because we are now back to the core idea of smarter and bigger data being a primary way to improve performance, as opposed to code optimization and such that really focus on cycles / clock. When the peak happened somewhere in the 00's, data started to grow. Here we are today with huge data! Only going to get bigger folks, and that's true for most applications.
And if that scales broadly for cost savings, it's going to mean using wider bit paths for a lot of things, and there will always be that trade off between optimal code / data / compute with narrower bit paths being somewhat less than optimal in a lot of applications because the cost of those exceeds the value of the optimal environment.
64 bits it is then. For those niches where it really matters, there will be options, and they will remain niche and often expensive, depending. That's how computing is.
If the float was a simple datatype the same as the int then it'd run faster with floats.
Bingo. That's what I mean. If it were a 64 bit machine everything could be saved in similar 64 bit variables/arrays.
Well, I'm probably not explaining myself very well, and for sure I don't know the intricacies of JS JIT optimization. But I think you get my idea now. Looks to me like if V8 did not have to chop between 64bit floats and 32 bit integers it would have more scope for streamlining code.
Yes it is interesting how those speed trap avoidances work. Up till now I have not worried about any of that in my code. After all I had no clue what was going on inside V8. I just notice that similar code in C++ or JS throwing XML or JSON around runs at about the same speed. Which I find quite amazing.
P.S. That is a good point re: overflows causing things to blow out of the 32 bit size. I would imagine they might even be so smart as to allow expressions to have bigger than 31 bit intermediate results, but as long as the final result you store back to your target array is within limits the array does not have to be reworked.
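One thing the language itself does guarantee: a JS number is a double, so integer intermediates stay exact up to 2^53 even when they no longer fit in 32 bits. Whether the array stays integer-backed after the store is V8's business, not the language's. A small sketch:

```javascript
// JS numbers are IEEE 754 doubles, so integer intermediates are exact up to
// 2^53 even though they overflow 32 bits.
var big = 100000 * 100000;            // 1e10: exact, but well past 32 bit range
console.log(big);                     // 10000000000

// The value stored back is small again; '| 0' forces it into 32 bit range.
// Whether V8 keeps the array int-backed after this is an engine detail.
var a = [1, 2, 3];
a[0] = (100000 * 100000) % 7 | 0;     // oversized intermediate, tiny final result
console.log(a[0]);                    // 4
```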
I will now go away and recast the integer only heater-fft in JavaScript and find out how it flies on V8:)
Re: High End Workstations do exist. (and are totally worth it too)
They can run Windows, or Linux and they typically feature processors with very high front side bus metrics as well as deep caches. It's not just marketing on those either. Both OSes take good advantage of these machines and they are well indicated for the users who employ them.
I really don't understand all the negative commentary. Back in the DOS days, those 32 bit computers could and did run some higher end stuff that performed significantly better and on larger data sets then. It got used, just not by some ordinary joe, but again we all saw the benefit through economy of scale. That's true today with the move to 64 bit computing. For most people it isn't going to change their lives, other than they will find it much more difficult to bury a machine now no matter what it is they are doing.
However that all goes, the cost to maintain production of code and hardware for 32 bit computing is rapidly exceeding the value of doing so. Advancing things in this way is very high value in the end. Always has been.
Re: Mac being irrelevant.
Clearly you do not understand, nor participate in, the kinds of work groups and tasks that a Mac adds value to. If one just looks at hardware, yeah, the Mac isn't optimal in terms of features and cost, but if you look at the computer as OS+Software+Features, little things like insanely good power management tend to start to add up. For those people impacted, they see value and are happy to pay for it. This is not all about sheer market share. Never has been, never will be. (sigh)
Sorry I have been harping on about this a bit obsessively recently.
Story is this: A year ago I had an urgent requirement to stream a lot of data in real-time over the internet to browsers. We are talking on the order of 10 Kbytes ten times per second. Of course I knew nothing about these newfangled WEB technologies (still don't) so I picked up the easiest way I could find to do it. That was Node.js + a websockets module + a couple of hundred lines of JS in the server and in the browsers.
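For flavour, the shape of that kind of push server is something like this minimal sketch, here assuming the 'ws' websockets module (ours may well have been a different module; the structure is the same and the names are illustrative):

```javascript
// A minimal sketch of that kind of push server, assuming the 'ws' module
// (npm install ws). Names and data are illustrative only.
var WebSocketServer = require('ws').Server;

var wss = new WebSocketServer({ port: 8080 });
var clients = [];

wss.on('connection', function (ws) {
  clients.push(ws);
  ws.on('close', function () {
    clients = clients.filter(function (c) { return c !== ws; });
  });
});

// Stand-in for whatever the board is actually measuring.
function sampleData() {
  var a = [];
  for (var i = 0; i < 1000; i++) a.push(Math.random());
  return a;
}

// Push on the order of 10 Kbytes to every connected browser ten times per second.
setInterval(function () {
  var payload = JSON.stringify({ t: Date.now(), data: sampleData() });
  clients.forEach(function (ws) {
    if (ws.readyState === 1) ws.send(payload);   // 1 === OPEN
  });
}, 100);
```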
I was very nervous about all this as it's all very new. I had no idea if it would work or how reliable it might be. Besides, as we all know, JS is that stupid crappy scripting language for handling mouseovers in your HTML. But I had no choice, the boss wanted it NOW. We had a demo at an international trade show coming up.
A year later that thing is still running just fine and it has been very easy to enhance.
Then I started to notice that it was sucking about the same CPU load as a C++ server we have that is doing pretty much the same thing. What's going on here?
Recently I was trying out similar stuff in Google's Go language, which is compiled to native code. WTF, it's ten times slower than the same functionality in interpreted JS, on my little ARM boards at least.
Don't apologize. It is a good thing. I like reading others' perspectives on these things. All good, and I may just dip my toe in for a project or two at work.
I may just dip my toe in for a project or two at work.
Excellent.
Be warned. If you are coming from a good old fashioned structured programming school and work in C or Pascal etc., or if you come from an object oriented school, Java, C++ etc., then you will find yourself in a strange place with JavaScript.
You might expect JS to be a simple scripting language like VB or something. It is not. You can do structured code in JS, you can do class based code in JS. But then JS has all these funky features like first class functions, closures and prototypal inheritance that take a bit of getting used to.
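A small taste of those features, with made-up names just for illustration:

```javascript
// First-class functions and closures: makeCounter returns a function that
// keeps 'count' alive between calls.
function makeCounter() {
  var count = 0;
  return function () { return ++count; };
}
var next = makeCounter();
console.log(next(), next(), next());   // 1 2 3

// Prototypal inheritance: objects inherit directly from other objects.
var animal = { speak: function () { return this.name + ' makes a noise'; } };
var dog = Object.create(animal);
dog.name = 'Rex';
console.log(dog.speak());              // "Rex makes a noise"
```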
Then of course there is a pile of, shall we say "defects" in the language design that can catch you out. Your best defense against falling into those traps is to check your code with JSLint.
All in all it's an amazing language, with features that normally only exist in things like Lisp or languages out of AI research that have never gone mainstream. Incredibly, it was designed and built in 10 days at Netscape.
Anyway, we are way off topic here now. When I get JS talking to my Props we can revisit this.
Then of course there is a pile of, shall we say "defects" in the language design that can catch you out.
When you cannot be sure whether the expression 3 + 3 will return 6 or 33, that's not a scare-quote "defect." That's fundamentally broken and needs to be fixed, if necessary by starting over from scratch and realizing that sometimes strong typing is actually a good idea.
I can't really argue with that. Except that, as far as I know, the expression 3 + 3 is actually always 6 in JS.
Mind you ("Result = " + 3 + 3) is the string "Result = 33".
Mind you 0.1 + 0.2 is not 0.3, but then that is typical of the IEEE 754-1985 floating point standard in all languages.
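Paste those into any JS console and you get exactly that:

```javascript
console.log(3 + 3);                  // 6 - plain numeric addition
console.log("Result = " + 3 + 3);    // "Result = 33" - '+' groups left to right,
                                     // so both 3s get coerced to strings
console.log(0.1 + 0.2);              // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);      // false - same in any language using doubles
```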
The defects are a bit weirder than that even.
Like the fact that a var declared within curly brackets is not scoped to those brackets but rather to the function they are within.
Like the fact that semi-colons can be omitted, but doing so may lead to obscure, hard to find errors.
Like the fact that the built in "this" variable does not always refer to the object you think it might.
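Two of those in action (non-strict mode assumed):

```javascript
// Function scope rather than block scope: 'x' escapes the braces.
function scopeDemo() {
  if (true) {
    var x = 42;              // declared inside the block...
  }
  return x;                  // ...but visible to the whole function
}
console.log(scopeDemo());    // 42

// 'this' depends on how a function is called, not where it was defined.
var obj = {
  name: 'board',
  who: function () { return this.name; }
};
console.log(obj.who());      // "board"
var detached = obj.who;
console.log(detached());     // undefined in Node ('' in a browser): 'this' is
                             // now the global object, not 'obj'
```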
However JSLint does a brilliant job of pointing out all the subtle possibilities for ambiguity.
There is little chance of starting over from scratch with JS; it is deeply entrenched in every browser and many other places by now. But, hey, C has its quirks and inscrutable behaviors as well, and that is going stronger than ever.
Oh, by the way, quickly: what do you get if you write this in the DAT section of a Spin program:
s byte "Result = " + 3 + 3
Even weirder, better start over from scratch with Spin:)
I happen to be a fan of that kind of behavior. I don't see it as an error so much as something that needs to be communicated. When things are simple, fast, they tend to also be as potentially powerful as they are dangerous. Count me in as a fan.
Bingo. That's what I mean. If it were a 64 bit machine everything could be saved in similar 64 bit variables/arrays.
At that particular point I was talking about storing the float as a native datatype directly, forgoing any object handling that it currently requires. Single or double was irrelevant except for maybe databus/RAM bandwidth again.
What it comes down to is there are two sources of slow down: object handling, leading to bandwidth issues, and ALU vs FPU speeds.
... Looks to me like if V8 did not have to chop between 64bit floats and 32 bit integers it would have more scope for streamlining code.
Again, it's not a 64 vs 32 issue.
I will now go away and recast the integer only heater-fft in JavaScript and find out how it flies on V8:)
It will be interesting to see if V8 manages to work out bit-shifting for divides without resorting to floats.
It got used, just not by some ordinary joe, but again we all saw the benefit through economy of scale.
That can be said of any design that dominates. Ie: If IBM had chosen a different CPU then Intel would have shrivelled away like all the others did.
Why did this domination occur? It certainly wasn't on technical grounds. Because the large scale economy didn't make use of the new capabilities even when they did belatedly turn up!
There's your reason for my attitude. And I did already cover this in my previous potato post. :P
That's true today with the move to 64 bit computing. For most people it isn't going to change their lives, other than they will find it much more difficult to bury a machine now no matter what it is they are doing.
True, but now there is only the PC left. No, shouldn't say that. The real story with 64 bit is RAM size and speed, and this is a real tech feature of the modern times.
Re: Mac being irrelevant.
Clearly you do not understand, nor participate in, the kinds of work groups and tasks that a Mac adds value to. If one just looks at hardware, yeah, the Mac isn't optimal in terms of features and cost, but if you look at the computer as OS+Software+Features, little things like insanely good power management tend to start to add up. For those people impacted, they see value and are happy to pay for it. This is not all about sheer market share. Never has been, never will be. (sigh)
Yes, the Mac was a leader both technically, choosing the 68k CPU, SCSI and 24 bit colour, and on the application/desktop front. Very significantly so. But the clones made that all irrelevant. M$, and Intel to some extent, got a free ride on the back of the crappy clones.
My point is, the Mac, in its current level of popularity, has a rather massive crutch supporting it, known as the iPod and iPad, and iPhone ... M$ had all but squashed Apple into non-existence by 2000, even earlier. The Mac was still clinging on to a little of DTP with a few scientific and educational stragglers before the iPod hit the streets. The Mac was not hip. The Mac was not cheap. The Mac was not spec'd amazing. The Mac was not a Wintel box, even though it, by then, used PC hardware for everything except the CPU. Apple was nearly gone, just like the rest.
Apple, and I'm pretty amazed they pulled it off, owes its current existence to the Web. It took Apple a lot of years to get into gear with the iPod and I don't know why. It would have been interesting to see if the music industry itself had tried to use the Web as, say, an indexing system for finding and even ordering songs right from the outset. Would there have been room for other players like Apple?
**Warning, scroll this if length is not your thing. Somebody asked me "why" lol (sorry guys, I do this sometimes)
Well, maybe Evanh. Intel might have shriveled up. Who knows whether or not Moto, Zilog, or somebody else would have taken the paths Intel has?
That domination happened for a few reasons:
1. IBM put some weight behind the tech. Apple pioneered the PC with the Apple 2 series of computers, which basically were 8 bit PC's. They did everything a PC did, and until clock speed washed them out, were perfectly fine, capable PC's. I still have one, and it still gets used like a PC. Having a design that encourages add-on tech is the difference between a product and a platform. PC's were going to be the platform, many other computers were merely products. Very interesting times then.
2. For all the warts on Intel chips, they did have the basic features needed to carry computing forward. This, in tandem with #1 and IMHO the success everybody saw Apple enjoy with the Apple 2 and its very long life, highlighted the PC as the go-forward design. I remember those times very well and early PC's were kind of crappy compared to other products, but even then, it was possible to pay a lot and get a lot with a PC. That's a really big deal. A well equipped PC, despite the dubious CPU, was capable of a lot of things. Other machines and CPUs were faster, and/or offered up significant capabilities, but they were not offering that basic element.
3. Deals. IBM + MSDOS started things off. Soon there were lots of software companies authoring for that platform and the weight of the business behind it meant a steady stream of buyers and longevity. The best tech almost never wins. The best business model almost always does. That's the third leg of the "why the PC dominated" stool.
Both IBM and Apple didn't bother to compete on a pure price / performance basis. The value of the machines had as much to do with the business model and the design meant to be expanded on as it did the core technology inside. Both eras had exactly the same dynamics. When the Apple 2 was king, it featured an 8 bit CPU, clocked a bit slow though it wasn't interrupted with video DMA and refresh so it ran very nicely, coupled to discrete logic graphics. It was just enough to get stuff done. The real show was in the expansion and the nicely designed software interface. Anywhere the machine needed something to get stuff done, that stuff was designed and sold, resulting very quickly in that "get stuff done" kind of machine. Most users of the machines paid to get stuff done, not for some technical merit or other. A few killer examples were CP/M cards that brought a ton of ready to go software to Apple computers used for business, combined with all the nifty graphics / publishing applications. That right there justified the high cost. You either had to have two machines to do that, or just didn't do some of that stuff. Other technically superior machines simply did not offer "get stuff done". It's a big deal, as I just wrote. Really big.
The same was true for the PC.
When the PC really started to take off, there were much better machines around, nearly all of them niche computers. The thing about them is they don't get the software attention general purpose computers do and that's just a big deal. Software is a huge part of the overall value proposition inherent in hardware --computer hardware aimed at people using computing to get stuff done. That's not always true for hardware in general, but it is totally true for computer hardware. Once the center of mass moved from the Apple 2 to the PC, it was all but over for everybody else.
What was gonna happen, and did happen was the software grew in capability and the general purpose machines were expanded and added on to so that software could continue to grow in capability. That cycle continues through today, though we have a nice split going on with ARM.
During that early PC time, I was running both high end SGI IRIX machines, which put the mere PC to shame and made an Amiga user drool, but the cost on those was off the charts high. Niche machines for sure. The most interesting thing happened with the PC. "Good enough" started to take root. If one wanted to do CAD and CAM, which was a career focus of mine, doing it on an SGI or SUN or HP UNIX machine was as good as it gets. Leading edge. Many of the niche computers were on par compute wise and even graphics wise but they did not see Application ports because there was no audience and the machines didn't always have expansion capability that made sense in that context. The PC did though. CAD and CAM saw ports to DOS as well as some new software aimed at meeting market demand for CAD and CAM. I could take a DOS PC and beef it up and get to the upper end of "good enough" to where only the largest jobs required UNIX, the rest could be done and could be done at a nice cost advantage. That SAME PC could be stripped down a little and dropped into the business office where it would crunch numbers too, where UNIX did not offer that option. In fact, one could just run crappy UNIX on a PC and get those bits done easily enough, and plenty of people did. Linus secured that with Linux and the results are well known today.
General purpose machines meant being able to meet lots of needs in a fairly consistent and increasingly cost effective way. Didn't matter that the tech was less than optimal. In fact, the lesson learned is that it NEVER DOES.
The longer term lesson here is that general purpose computing eventually rises to meet the vast majority of needs, pushing specialized and often superior tech to niches where it dominates on technical merit to the point where it makes very little sense to disrupt it, and those niches move over time too, always being eroded by "good enough"
So that's why.
Yes, Apple Mac took a real beating and the Moto CPU's followed by the Power PC did not keep up with Intel. Neither did MIPS or ALPHA either. MIPS was so damn good that a 300Mhz MIPS would run as well as a Pentium X at a few times that clock speed. Alphas were similar. Apple migrated their user base and software through Moto, PPC and onto Intel, picking up BSD UNIX as the core of their new OS.
That makes the Mac today. Do you know what the rocket scientists run at many of the National Labs? Mac. Why? Because it's a great UNIX that has general purpose office software and it can rather easily work with the higher end HPC hardware they are always working on. Want to know what a lot of Silicon Valley startups run? Mac, and they do so because the machines are elegant, stable and run UNIX easily communicating with and operating like UNIX / Linux hardware that runs the Internet today. Want to know what a lot of content creators run? Mac. Good applications are available on the machines, they look good, run good, and they have a very productive UI. I could go on and on.
It's not about share with Apple. It's never about the best tech either, though Apple does push the edge with their iPads and Phones. It's about targeting niches and really delivering better than "good enough" and making fat margins on that, maximizing the resources of the organization toward higher margin deals as much of the time as possible. Apple doesn't need share as much as it needs people who will pay for all the value they add, not just buy on technical hardware specifications alone. The high margin Mac funded the iTunes, iPod and all the other developments just like the high margin Apple 2 funded the Lisa, then the Mac. This is why Apple does what it does. It's not about share. It is all about margin, because high margins mean being able to capitalize the company, which funds new products, pays for risk, etc...
Dell can't become Apple. They run on thin margins! They aren't ever capitalized enough to make the kinds of investments needed to do what Apple does. That's the difference, and make no mistake, the Mac is just fine with the share it has because that user base pays plenty for a machine that's got a lot of value for them. So long as that results in good capitalization of the company, share doesn't matter. Again, that lesson to be learned from Apple Computer.
Those people, who do buy on specs alone, are typically PC buyers who are thin margin customers who want good enough and on it goes.
I deal with a lot of companies doing what I do and I've seen this all play out multiple times. The people who run Apple stuff are looking for specific attributes to be maximized and they pay for that, and those attributes rarely involve the fastest CPU, but they do want great power management, or a fantastic looking display, or a machine that is a breeze to travel with, or that runs UNIX, etc...
Apple is very well capitalized, and the iPod / iPhone + App Store etc. really kicked it off for them, but make no mistake, the Mac was making them plenty of money and it delivered much higher margins than any PC did, meaning they made more money at a fraction of the share than race-to-the-bottom, hardware-value-only companies like Dell did at several times that volume of machines. And it made that money because Apple had software and hardware control, able to sell a very different value proposition. Microsoft sells software and Dell sells hardware, both of them high volume companies, neither able to do what Apple does and get what Apple gets for doing it. Ever wonder why Microsoft is trying really hard to get control of hardware and produce it too? This is why.
In any case, throughout all of this, I never, ever saw the best tech win on anything, ever. It's not about that, never will be about that. It's about the business model and the overall value added and most importantly, whether or not the producers of that product or solution do the work to get people to see that value and pay for it. Those are the people that win. They will always win.
And that comes down to "best tech". On which metric? Speed, power consumption, size, etc...? That's where adding value comes into play. An emphasis on one technical metric doesn't secure market share because it's about meeting demand, and that's done by adding value around the tech as well as selecting optimal tech for the target markets. "Best Tech" really varies considerably.
As for the slow appearance of iTunes + iPod, blame the music industry itself for that. They have fought everybody over distribution and they continue doing that. Had they not fought over all that stuff, they themselves could have aced Apple out, building on Napster of all things, who was the first to actually propose a bulk subscription model for trading music. Had the majors said "yes" then, it's very likely Apple would have played out very differently today. They would still be making great computers and very likely would have just integrated that creation into their overall plan and made less money, but plenty still because they really don't race to the bottom for volume as much as they have always worked on figuring out how to add value across the whole effort. Lots of companies today could learn a lot from Apple computer.
Took forever for Apple to convince the majors to sell on iTunes. Took 'em longer still to convince them to sell in high value ways. Took longer still to get them to sell unencumbered music, etc... Blame the labels and the RIAA for that, not Apple. Whole other discussion there.
Jobs had a saying: If you buy a 30 cent part from a surplus store and sell it to somebody for $6, it was worth $6 to them and they don't need to know what you paid for it. That's been true at Apple since day one. And it's true right now. Things are worth what people will pay for them and when a lot of value is shown to people they will pay a lot of money. Truth is, people see that cheap part and think they should get it cheap too, but they were not the ones scrounging around sourcing it either, nor did they inventory it, document it, whatever. All of those things add value and when that value is well expressed people pay for it and that maximizes all the activity, not just the transacting on the part itself. That thinking is what got the cool products done. That is what a lot of people don't understand about Apple and technology in general.
The sum of all that is we end up in a world filled with cheap, plentiful, fast PC's that evolved to an optimal design, and those are thin margin commodity products. The rest of the world is niches where there is a lot of value to be offered and a lot of money to be made. The very best tech finds its way into some niches, most of it gets passed up in favor of good enough, and there is always new stuff bubbling up here and there. The best tech will flash up, make its impact from time to time, but realistically, the bulk of what is out there will be those things that are good enough and that can be improved on, dealt with, standardized, so that the people who want to get stuff done can simply pay to get stuff done. And they always pay for that, not for the nature of the tech, unless they are in a niche where it really matters, or they are enthusiasts like us who really like tech apart from the wider "getting stuff done" basis that the vast majority of people evaluate things on.
Edit: And I mean none of that in a bad way. Some of us just want the tech and we don't want to pay anything other than a thin margin for it too. Great! Plenty of takers on that, everybody is happy. Others want to get stuff done, and that's a whole other deal involving more than just technology, and that's great too, and that is what Apple generally does.
The key here is to recognize those dynamics and then apply them to the technology that is out there and more importantly, the players and their business model. I don't see an Apple of ARM yet, but I'm kind of hoping one pops up. There is a growing opening for some machine that runs "good enough" on ARM that uses the Smile out of Open Source Software, that features great power management, solid design, etc... If somebody does that and makes the right deals, the wintel nut could get cracked wide open and if it does, we are headed for really interesting times again.
A $50 machine can do the basics on ARM. Internet, Office, E-mail, Programming, Gaming, etc... Who is going to really nail it and do the Apple thing? When I'm thinking about tech and doing this kind of stuff, I just want a cheap ARM computer I can throw code at. Prolly will get a Pi or maybe an Android PC. But, I would pay nicely for an ARM machine with the work done. Nice GUI, managed software, drivers all sorted that just rocks hard. I would pay several hundred dollars for that and I suspect a lot of others would too. Make me a great little ARM laptop that runs all day on a battery, features ports I can use, killer display, OSS software for low run cost, app store to buy stuff and games, and I'm in! I would pay even more for that deal. Support it with regular OS updates, applications / drivers managed and such, and it's basically a Mac on ARM that presents huge value in that the machine might not even cost what all the software licenses would from Microsoft or Apple. Whoever does this will get nice margins too, because they would save people a lot of work easily worth paying for.
That's nipping right at the toes of wintel cheap machines, and bring that on! Heck, it could sit right next to my Mac as machines of choice, high value, elegant, fun, well designed, and I would turn the Thinkpads off.
***I don't run Dell, HP, or other cheap consumer grade machines, just because I want to buy a good one and run it for 5-10 years for much higher value, so I pay more, get more, spend less over time, just FYI. I'm either running near-free junk, because I might break or fry it (no joke), or I'm running a great machine I don't want to mess with for a really long time. Average just costs money over time. No thanks. Plenty of people do though, buying one average machine after the other. And there is that PC again. "Good enough" dominates because it's good enough, not "best" or "optimal", because most people just don't think it through, or value only specific things.
What it comes down to is there are two sources of slow down: object handling, leading to bandwidth issues, and ALU vs FPU speeds.
Yes, with JS we are kind of stuck with the object handling because it is not a statically compiled language. So there is the other source of slow down which is the fact that we have to interpret and compile on the fly at run time anyway.
Again, it's not a 64 vs 32 issue.
Seems to me it is. That V8 video clearly shows they are munging JS numbers which are 64 bit floats down into native 32 bit ints so as to get a big speed boost. When the compiler detects that it is safe to do that your code is flying, as soon as you mix in bigger numbers or floats all that optimization has to be undone and new less efficient code generated.
Ergo, I think that if your processor was 64 bits anyway, all that optimization work, resizing, allocating and deallocating of the arrays would not be necessary and you would fly all the time.
It will be interesting to see if V8 manages to work out bit-shifting for divides without resorting to floats.
Hope they don't. Bit shifting right is not the same as divide for negative numbers:)
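Concretely:

```javascript
// Why a right shift can't blindly replace division by two:
console.log(-7 / 2);          // -3.5  (JS division is float division)
console.log((-7 / 2) | 0);    // -3    ('| 0' truncates toward zero)
console.log(-7 >> 1);         // -4    (arithmetic shift rounds toward minus infinity)
```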
Wow, that must be a record long post, even for you:)
I'm not sure I quite caught what an Apple of ARM might be.
Seems to me the ARM is the Intel PC of mobile. As a design it is that platform that many others have picked up to build embedded devices from simple controllers to phones to iPads. It may not be so accessible to Joe Bloggs in the street like the PC ISA bus was but it does have a huge community of companies adopting it and building on it.
So is an ARM-Apple an expensive, niche, general purpose computer, modern day Apple style, or is it that just-good-enough, cheap, open, expandable general purpose computer, Apple II style?
The Raspberry Pi and VIA APC and others seem to show me that in a couple of years we could indeed have that 50 dollar 64 bit dual core ARM with 4-8GB RAM and SATA. At that point it's "good enough" to replace the Intel PC as we know it for 99% of use cases.
What, "wont run MS Office". Who cares. When the machine is cheap enough to be almost free and all documenting is done on the WEB (Does anyone word process for printing on their laser printer any more?) then MS is done as well.
ARM is many things, but to claim that it is 'the Intel PC of mobile' may be a bit premature.
Interestingly, ARM is a 'double acronym' (or whatever you'd call it), as
ARM = Advanced RISC Machines.
RISC = Reduced Instruction Set Computing.
My 1999 Psion netBook has a 190MHz StrongARM CPU, and of course runs a fully Pre-emptive OS with a GUI. It is also strong enough to run an IBM PC XT emulator.
(Some poor sap managed to get Windows to start in it, even. I mostly used it to run the DOS version of the PBASIC editor for my BS2p)
The way things are going, with more and more SW being written in assorted versions of Java, with more and more work being offloaded to internet-based servers, people will care less and less about what is actually inside their box.
I ran a Citrix app on my old Psion, to remote into the office... Now I can do that with any Mac, WintelPC, Linux box, pad or lappy.
Office 365 or Google Docs?
Does it matter as long as it can work with my files?
The fact is, it's mostly us 'HW tinkerers' who really care about the differences between platforms or 8/16/32/64 bit architecture any more, and it seems like there are fewer of us every year.
Yeah, "Intel PC of mobile" is perhaps over reaching the mark. I just meant that back in the day the PC grew with the help of a lot of companies growing up and building addons for it because they could, sound cards, graphics cards, etc etc. Similarly the ARM is taken up by many companies now to build their own System On a Chip (SOC) with all kind of different GPU's, I/O's etc etc.
The big difference of course is that it's impossible for the little guy to do that in his garage.
I think it's a nice machine with the work done for the user. Some GUI investment so that all the little things are sorted out, designed so that it is useful as a nicely made laptop is.
Truth is, I'm not entirely sure. What I am sure of is I don't yet see anyone going for the higher margin, just get stuff done kind of thing. I do see some nice packages and options, or closed devices. Maybe it won't happen / can't happen. I do know what I would pay more for though, and generally that sense has aligned with higher margin products, where they are an option.
Right now, there is a fairly coarse and significant divide between mobile and PC. Tablets and such live in there, as do some more general devices. (Pi, Droid PC Boards) Microsoft's ARM port of Win 8 is notable for only being delivered on locked devices that pretty much ONLY run Win 8, and ONLY get apps from their store or some windows server acting as a store. They are careful not to make a general purpose machine that crosses that divide...
I'm saying somebody should. I think there is demand for that, and I think it's growing.
Regardless of high margin, low margin, get stuff done or otherwise, the practical reality has been that ARM chips were very capable processors for the power consumption and so took over the phone/tablet/iPod mobile world. Pretty much because there was nothing else that would last any time on the batteries.
Meanwhile the Intel and AMD guys were hell bent on raw performance regardless of power consumption. They had no chance in the mobile market.
So, simple physics separated these two camps.
Now though ARMs are getting bigger, multiple cores and 64 bits building up to that "ARM-Apple" machine and the x86's are starting down a low power road.
Perhaps they will meet in the middle somewhere and we will have an interesting time of confusion in the market.
By the time an ARM grows enough to be your "ARM-Apple" it may be as power hungry as an Intel and not suitable for anything below a laptop. They are already saying that 64 bit ARMs are for server use.
We are peaked in terms of raw compute per unit of time. The two camps will meet and an ARM laptop could run OSS and App Store and fork the general computing market. Lots of us can get it done on a device like that. Those numbers would come mostly from Microsoft too. There is room for another Apple essentially.
Another post because editing on Droid isn't optimal right now.
The whole point of the post was the best tech isn't necessary. The business model and value added to the product is where it is all at.
Right now Microsoft has saturated its market and they have been making moves to close things down and grow revenue on licenses. They aren't adding as much value and are rent seeking with increasingly onerous license terms and a general grip on hardware aimed at keeping competition at bay.
It was telling to see Win 8 and how the Surface played out. Lots of people are interested in cool hardware and they don't care that it doesn't perform like Intel does.
That is the open door. ARM + a good set of support chips (tegra, etc) can do lots of useful things and there is a pool of operating system software out there and a ton of DROID software bubbling up.
Do what Apple did. Take an open OS, build a great GUI environment and manage the hardware so that all of it works and can see software updates like Apple does them. Deal with the ugly stuff too. Users don't want to know stuff, so manage that part of things and give them a get stuff done machine.
Microsoft is making a mess trying to blend tablet desktop mobile together and there is an opening for a general purpose machine that has a good GUI that just works. When one takes a look at the pool of open apps, there is plenty to work with. Add an app store and it is good.
People like us see it's a nice Unix, open and fast enough to develop on. Ordinary people see a machine that works well and it comes with lots of software that would cost tons if they were to buy licenses, etc...
Package that all up and ask for what all that work is worth and you get a high margin machine that can exist on a fraction share and there is real competition to Intel.
Comments
No, I am deadly serious. JS is turning out to be a God send around here. Even on a little ARM board that is taking care of some fastish real-tme stuff. If you also need secure http connections and websockets JS can do it in just a few lines of code and runs pretty damn smartly. As bonus I get a free web server on the board to use for configuration status pages etc. It's a lot easir to do that in JS than many other languages and systems and as fast or faster.
It's also a bonus that you are now using the same language all the way from embedded box to server to users browser client. Saves all that mental context switching when developing.
Anyway I was not suggesting the language dictates the machines word size, as you say we have had single and double floating point on all kinds of machines and all kinds of languages.
My only little point is that JS only has 64 bit float numbers. As such having a 64 bit machine probably gives you the chance to optimize the Just In Time compilation of engines like V8.
For example: Currently V8 will keep JS arrays as 32 bit integers if it is known that the elements are all ints that fit within 32 bit. This results in very fast code being compiled when operating on that array. As soon as you assign a bigger number or float to any element of the aray the whole array has to be copied to an array of floats, involving memory allocation and deallocation, and all the code to operate on the array is re compiled. Things get a lot slower.
So, in a few years I hope my little $50 ARM boards will be 64 bit machines and JS will fly even better in there. This will happen because of the JS push into mobile devices, phones, tabs etc.
No, the databus is already 128 bit or more - meaning the FPU is not bottlenecked. "64 bit" is mostly about address space. If you are dealing in 4GB+ data blocks with JS then you might have an argument, but, even then, it's no different to any compiled app.
And, of course, the databus on these low-power devices may be only 32 bit wide irrespective of the memory model. Prolly get similar results on the 64 bit Atoms for the same reasons.
Going back to 'long', the industry standard for C programming (agreed among the biggest *nix vendors and others, I have a copy of a whitepaper somewhere) has been as follows for more than 20 years:
char = 8 bits, short = 16 bits, int = 32 bits, long = word size, i.e. 32 bits on a 32-bit computer, 64 bits on a 64-bit computer. To be accurate, and that's important because Windows 64 doesn't follow the standard and is an exception: 'long' shall be the size of a pointer. So, in C programming you can always do 'long a = (long) &variable;', whatever the word size. What you can't do is 'int a = (int) &variable;' because it's not portable: It'll chop the pointer in half on a 64-bit machine. How Windows programmers handle that kind of pointer manipulation I have no idea.
(never mind the uint_32, uint_8 standard - that's from a different effort. Equally valid, but the above is also a completely safe definition of the 'old' terms these days and has been so for a very long time now).
As for addressable memory.. in my job I need all I can get. I have 16GB on my desktop but that's starting to be limiting. The data sets are now so huge, and the processing speed requirements keep increasing, so it's necessary to be able to keep those sets in memory. If I could have 196GB RAM (as one machine here can have installed) then I would.
-Tor
I agree, the data bus may well be very wide and the FPU may not be the bottle neck. And yes it's the address space we have craved for mostly.
However you are missing what I am saying about V8 and JavaScript Just In Time (JIT) compilation. In V8 if your numbers are integers that fit in 32 bits then it generates code, on the fly, to use the 32 bit integer ops of your processor, and it allocates normal linear arrays that can be indexed very quickly. So even if JS thinks all numbers are floats the V8 engine may not even be using floats in your program.
But in JS all numbers are 64 bit floats so if your program at some point drops a float into an existing integer array the array has to be converted to floats, a lot of allocating, copying and deallocating has to happen and all that old JITed integer code is discarded and new FP code generated.
If your processor were a 64 bit machine though all that convoluted optimization could be avoided and performance boosted.
Now, I might still be wrong, so have a look at this fascinating video about how the V8 JS engine does all this to achieve C++ like performance. Then tell me what you think.
http://www.youtube.com/watch?v=UJPdhx5zTaw
Hmm, you do realise that you are not talking about floats any longer right? You did start off by saying that the nature of JS using 64 bit floats made it a strong demand for a 64 bit processor. Then you switched to pushing the hidden integer features instead. And how JS's integer performance in 32 bit data registers is wonderful as long as you stay within 32 bit integer limits ... gee, maybe it's not so floaty after all ...
Ok, I can see if you are wanting to do only integer calculations and the multiplies are likely to blow out the 32 bit range (What happens to that nice clean integer, given the automatic nature, on the subsequent divide? 32 bit ints do cover a lot of normal work already.) then a 64 bit integer data register would be faster, yes. I presume this is your argument?
PS: Having watched some of that vid I see you are saying the nominal JS float datatype is officially an object/class with all the usual object baggage. Or more accurately, V8 has a special case where it stores the int into what would usually be the object pointer - using the least significant bit of the pointer to indicate whether it is a pointer or a 31 bit int. So, that's an even less fair comparison. If the float was a simple datatype the same as the int then it'd run faster with floats.
I've not thought about it too much, but this post makes me want to go and twiddle with an LCD to see if it does a similar thing, or just has more grey scales, or just renders things with less precision. I suspect some combination of the latter as the switch speed on LCD isn't anywhere close to what plasma is.
Edit: I just checked and both my Thinkpad laptops have 10 bit options. Now I've got to do a TV / CRT comparison... This might be out there more than I thought. Not sure how I get 10 bit data into it without a short program though... That's a hunting exercise for later.
In any case, 8 bit color isn't anywhere close to our own fidelity of perception, particularly with monochrome images. 10 gets really good, and I suspect something over that is really where the boundary is. IMHO, this is still one area where even a modest CRT will out perform modern displays. To be really nit picky, straight up RGB fails to reproduce a little of the color space many people can see too. Room for improvement there for sure.
Re: 64 bit
IMHO, this isn't a question of need as much as it is cost. Honestly, we don't need 64 bits for a lot of stuff, though I have an office worker to show you who will bury a 4GB machine just running MS Office, and a few other content create / information manage applications. Data is getting big. Really big and really quick. That alone is enough to warrant broad adoption of 64 bit computers IMHO. It's about data first and foremost in most applications today. Remember, we are essentially capped on peak compute too. Multi-core / multi-processing of various kinds is growing because cycles per clock isn't. That has data implications because we are now back to the core idea of smarter and bigger data being a primary way to improve performance as opposed to code optimization and such that really focus on cycles / clock. When the peak happened somewhere in the 00's, data started to grow. Here we are today with huge data! Only going to get bigger folks, and that's true for most applications.
And if that scales broadly for cost savings, it's going to mean using wider bit paths for a lot of things, and there will always be that trade off between optimal code / data / compute with narrower bit paths being somewhat less than optimal in a lot of applications because the cost of those exceeds the value of the optimal environment.
64 bits it is then. For those niches where it really matters, there will be options and they will remain niche and often expensive, depending and that's how computing is.
Well, I probably not explaining myself very well, and for sure I don't know the intricacies of JS JIT optimization. But I think you get my idea now. Looks to me like if V8 did not have to chop between 64bit floats and 32 bit integers it would have more scope for streamlining code.
Yes it is interesting how those speed trap avoidances work. Up till now I have not worried about any of that in my code. After all I had no clue what was going on inside V8. I just notice that similar code in C++ or JS throwing XML or JSON around runs at about the same speed. Which I find quite amazing.
P.S. That is a good point re: overflows causing things to blow up out of the 32 bit size. I would imagine they might even be so smart as to allow expressions to have bigger than 31 bit iternediate results but as long the final result you store back to you target array is within limits the array does not have to be reworked.
I will now go away and recast the integer only heater-fft in JavaScript and find out how it flies on V8:)
They can run Windows, or Linux and they typically feature processors with very high front side bus metrics as well as deep caches. It's not just marketing on those either. Both OSes take good advantage of these machines and they are well indicated for the users who employ them.
I really don't understand all the negative commentary. Back in the DOS days, those 32 bit computers can and did run some higher end stuff that performed significantly better and on larger data sets then. It got used, just not by some ordinary joe, but again we all saw the benefit through economy of scale. That's true today with the move to 64 bit computing. For most people it isn't going to change their lives, other than they will find it much more difficult to bury a machine now no matter what it is they are doing.
However that all goes, the cost to maintain production of code and hardware for 32 bit computing is rapidly exceeding the value of doing so. Advancing things in this way is very high value in the end. Always has been.
Re: Mac being irrelevant.
Clearly you do not understand, nor participate in, the kinds of work groups and tasks that a Mac adds value to. If one just looks at hardware, yeah, the Mac isn't optimal in terms of features and cost, but if you look at the computer as OS + software + features, little things like insanely good power management tend to start to add up. For those people impacted, they see value and are happy to pay for it. This is not all about sheer market share. Never has been, never will be. (sigh)
Story is this: A year ago I had an urgent requirement to stream a lot of data in real-time over the internet to browsers. We are talking in the order of 10Kbytes ten times per second. Of course I knew nothing about these newfangled WEB technologies (still don't) so I picked up the easiest way I could find to do it. That was Node.js + a websockets module + a couple of hundred lines of JS in the server and in the browsers.
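Roughly, the server side boils down to something like this. This is only a sketch of the idea, written here with the ws package as one possible choice of websocket module, and the payload is dummy data:

// Minimal Node.js websocket pusher, roughly the shape of the thing.
// Assumes: npm install ws   (ws is just an illustrative module choice)
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket) {
  var blob = new Array(10001).join('x');        // ~10 KB of dummy payload
  var timer = setInterval(function () {         // ten times per second
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(JSON.stringify({ t: Date.now(), data: blob }));
    }
  }, 100);
  socket.on('close', function () { clearInterval(timer); });
});

The real thing has a pile of housekeeping and error handling around it, but that really is most of the plumbing.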
I was very nervous about all this as it's all very new. I had no idea if it would work or how reliable it might be. Besides, as we all know, JS is that stupid crappy scripting language for handling mouseovers in your HTML. But I had no choice, the boss wanted it NOW. We had a demo at an international trade show coming up.
A year later that thing is still running just fine and it has been very easy to enhance.
Then I started to notice that it was sucking about the same CPU load as a C++ server we have that is doing pretty much the same thing. What's going on here?
Recently I was trying out similar stuff in Google's Go language, which is compiled to native code. WTF, it's ten times slower than the same functionality in interpreted JS, on my little ARM boards at least.
Be warned. If you are coming from a good old fashioned structured programming school and work in C or Pascal etc., or if you come from an object oriented school, Java, C++ etc., then you will find yourself in a strange place with JavaScript.
You might expect JS is a simple scripting language like VB or something. It is not. You can do structured code in JS, you can do class based code in JS. But then JS has all these funky features like first class functions, closures and prototypal inheritance that take a bit of getting used to.
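A few lines show what I mean. This is just a toy example of those features, nothing more:

// First class functions and closures: makeCounter returns a function
// that keeps 'count' alive between calls.
function makeCounter() {
  var count = 0;
  return function () { return ++count; };
}
var next = makeCounter();
next();   // 1
next();   // 2

// Prototypal inheritance: no classes, objects just delegate to other objects.
var animal = { speak: function () { return this.sound; } };
var dog = Object.create(animal);   // dog's prototype is animal
dog.sound = 'woof';
dog.speak();   // 'woof'

Once that clicks it is actually very pleasant, but it does take a while.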
Then of course there is a pile of, shall we say, "defects" in the language design that can catch you out. Your best defense against falling into those traps is to check your code with JSLint.
All in all it's an amazing language, with features that normally only exist in things like Lisp or languages out of AI research that have never gone mainstream. Incredibly, it was designed and built in 10 days at Netscape.
Anyway, we are way off topic here now. When I get JS talking to my Props we can revisit this.
When you cannot be sure whether the expression 3 + 3 will return 6 or 33, that's not a scare-quote "defect." That's fundamentally broken and needs to be fixed, if necessary by starting over from scratch and realizing that sometimes strong typing is actually a good idea.
I can't really argue with that. Except that as far as I know the expression 3 + 3 is actually always 6 in JS.
Mind you ("Result = " + 3 + 3) is the string "Result = 33".
Mind you 0.1 + 0.2 is not 0.3, but then that is typical of the IEEE_754-1985 floating point standard in all languages.
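In a browser console or Node it looks like this:

3 + 3                   // 6, always
"Result = " + 3 + 3     // "Result = 33": '+' evaluates left to right, and once a
                        // string is involved the rest is concatenation
"Result = " + (3 + 3)   // "Result = 6"
0.1 + 0.2               // 0.30000000000000004, courtesy of IEEE 754, not JS itself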
The defects are a bit weirder than that even.
Like the fact that a var declared within curly brackets is not scoped to those brackets but rather to the function they are within.
Like the fact that semi-colons can be omitted, but doing so may lead to obscure, hard to find errors.
Like the fact that the built in "this" variable does not always refer to the object you think it might.
However, JSLint does a brilliant job of pointing out all the subtle possibilities for ambiguity.
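To make those three concrete (toy examples, nothing exotic):

// 1) var is function scoped, not block scoped:
function f() {
  if (true) { var x = 1; }
  return x;              // 1: x happily leaks out of the curly brackets
}

// 2) Automatic semicolon insertion:
function g() {
  return                 // a semicolon gets inserted right here...
    { answer: 42 };      // ...so g() returns undefined, not this object
}

// 3) 'this' depends on how the function is called, not where it was defined:
var obj = { name: 'obj', who: function () { return this; } };
var who = obj.who;
obj.who() === obj;       // true
who() === obj;           // false: 'this' is now the global object (or undefined in strict mode)

JSLint catches most of this kind of thing, which is why I lean on it so much.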
There is little chance of starting over from scratch with JS; it is deeply entrenched in every browser and many other places by now. But, hey, C has its quirks and inscrutable behaviors as well and that is going stronger than ever.
Oh, by the way, quickly, what do you get if you write this in the DAT section of a Spin program:
s byte "Result = " + 3 + 3
Even weirder, better start over from scratch with Spin:)
What it comes down to is that there are two sources of slow down: object handling, leading to bandwidth issues, and ALU vs FPU speeds.
Again, it's not a 64 vs 32 issue.
It will be interesting to see if V8 manages to work out bit-shifting for divides without resorting to floats.
Why did this domination occur? It certainly wasn't on technical grounds, because the large scale economy didn't make use of the new capabilities even when they did belatedly turn up!
There's your reason for my attitude. And I did already cover this in my previous potato post. :P
True, but now there is only the PC left. No, shouldn't say that. The real story with 64 bit is RAM size and speed, and this is a real tech feature of the modern times.
Yes, the Mac was a leader both technically, choosing the 68k CPU, SCSI and 24 bit colour, and on the application/desktop front. Very significantly so. But the clones made that all irrelevant. M$, and Intel to some extent, got a free ride on the back of the crappy clones.
My point is, the Mac, in its current level of popularity, has a rather massive crutch supporting it, known as the iPod and iPad, and iPhone ... M$ had all but squashed Apple into non-existence by 2000, even earlier. The Mac was still clinging on to a little of DTP with a few scientific and educational stragglers before the iPod hit the streets. The Mac was not hip. The Mac was not cheap. The Mac was not amazingly spec'd. The Mac was not a Wintel box, even though it, by then, used PC hardware for everything except the CPU. Apple was nearly gone, just like the rest.
Apple, and I'm pretty amazed they pulled it off, owes its current existence to the Web. It took Apple a lot of years to get into gear with the iPod and I don't know why. It would have been interesting to see if the music industry itself had tried to use the Web as, say, an indexing system for finding and even ordering songs right from the outset. Would there have been room for other players like Apple?
I hope that's understanding enough?
Well, maybe Evanh. Intel might have shriveled up. Who knows whether or not Moto, Zilog, or somebody else would have taken the paths Intel has?
That domination happened for a few reasons:
1. IBM put some weight behind the tech. Apple pioneered the PC with the Apple 2 series of computers, which basically were 8 bit PC's. They did everything a PC did, and until clock speed washed them out, were perfectly fine, capable PC's. I still have one, and it still gets used like a PC. Having a design that encourages add-on tech is the difference between a product and a platform. PC's were going to be the platform, many other computers were merely products. Very interesting times then.
2. For all the warts on Intel chips, they did have the basic features needed to carry computing forward. This, in tandem with #1 and IMHO the success everybody saw Apple enjoy with the Apple 2 and its very long life, highlighted the PC as the go-forward design. I remember those times very well and early PC's were kind of crappy compared to other products, but even then, it was possible to pay a lot and get a lot with a PC. That's a really big deal. A well equipped PC, despite the dubious CPU, was capable of a lot of things. Other machines and CPUs were faster, and/or offered up significant capabilities, but they were not offering that basic element.
3. Deals. IBM + MSDOS started things off. Soon there were lots of software companies authoring for that platform and the weight of the business behind it meant a steady stream of buyers and longevity. The best tech almost never wins. The best business model almost always does. That's the third leg of the "why the PC dominated" stool.
Both IBM and Apple didn't bother to compete on a pure price / performance basis. The value of the machines had as much to do with the business model and the design meant to be expanded on as it did the core technology inside. Both eras had exactly the same dynamics. When the Apple 2 was king, it featured an 8 bit CPU, clocked a bit slow, though it wasn't interrupted with video DMA and refresh so it ran very nicely, coupled to discrete logic graphics. It was just enough to get stuff done. The real show was in the expansion and the nicely designed software interface. Anywhere the machine needed something to get stuff done, that stuff was designed and sold, resulting very quickly in that "get stuff done" kind of machine. Most users of the machines paid to get stuff done, not for some technical merit or other. A few killer examples were CP/M cards that brought a ton of ready to go software to Apple computers used for business, combined with all the nifty graphics / publishing applications. That right there justified the high cost. You either had to have two machines to do that, or just didn't do some of that stuff. Other technically superior machines simply did not offer "get stuff done." It's a big deal, as I just wrote. Really big.
The same was true for the PC.
When the PC really started to take off, there were much better machines around, nearly all of them niche computers. The thing about them is they don't get the software attention general purpose computers do and that's just a big deal. Software is a huge part of the overall value proposition inherent in hardware --computer hardware aimed at people using computing to get stuff done. That's not always true for hardware in general, but it is totally true for computer hardware. Once the center of mass moved from the Apple 2 to the PC, it was all but over for everybody else.
What was gonna happen, and did happen, was that the software grew in capability and the general purpose machines were expanded and added on to so that software could continue to grow in capability. That cycle continues through today, though we have a nice split going on with ARM.
During that early PC time, I was running high end SGI IRIX machines, which put the mere PC to shame and made an Amiga user drool, but the cost on those was off the charts high. Niche machines for sure. The most interesting thing happened with the PC: "good enough" started to take root. If one wanted to do CAD and CAM, which was a career focus of mine, doing it on an SGI or SUN or HP UNIX machine was as good as it gets. Leading edge. Many of the niche computers were on par compute wise and even graphics wise, but they did not see application ports because there was no audience and the machines didn't always have expansion capability that made sense in that context. The PC did though. CAD and CAM saw ports to DOS as well as some new software aimed at meeting market demand for CAD and CAM. I could take a DOS PC and beef it up and get to the upper end of "good enough", to where only the largest jobs required UNIX; the rest could be done, and done at a nice cost advantage. That SAME PC could be stripped down a little and dropped into the business office where it would crunch numbers too, an option UNIX did not offer. In fact, one could just run crappy UNIX on a PC and get those bits done easily enough, and plenty of people did. Linus secured that with Linux and the results are well known today.
General purpose machines meant being able to meet lots of needs in a fairly consistent and increasingly cost effective way. Didn't matter that the tech was less than optimal. In fact, the lesson learned is that it NEVER DOES.
The longer term lesson here is that general purpose computing eventually rises to meet the vast majority of needs, pushing specialized and often superior tech to niches where it dominates on technical merit to the point where it makes very little sense to disrupt it, and those niches move over time too, always being eroded by "good enough".
So that's why.
Yes, Apple Mac took a real beating and the Moto CPU's, followed by the PowerPC, did not keep up with Intel. Neither did MIPS or Alpha. MIPS was so damn good that a 300MHz MIPS would run as well as a Pentium X at a few times that clock speed. Alphas were similar. Apple migrated their user base and software through Moto, PPC and onto Intel, picking up BSD UNIX as the core of their new OS.
That makes the Mac today. Do you know what the rocket scientists run at many of the National Labs? Mac. Why? Because it's a great UNIX that has general purpose office software and it can rather easily work with the higher end HPC hardware they are always working on. Want to know what a lot of Silicon Valley startups run? Mac, and they do so because the machines are elegant, stable and run UNIX easily communicating with and operating like UNIX / Linux hardware that runs the Internet today. Want to know what a lot of content creators run? Mac. Good applications are available on the machines, they look good, run good, and they have a very productive UI. I could go on and on.
It's not about share with Apple. It's never about the best tech either, though Apple does push the edge with their iPads and Phones. It's about targeting niches and really delivering better than "good enough" and making fat margins on that, maximizing the resources of the organization toward higher margin deals as much of the time as possible. Apple doesn't need share as much as it needs people who will pay for all the value they add, not just buy on technical hardware specifications alone. The high margin Mac funded the iTunes, iPod and all the other developments just like the high margin Apple 2 funded the Lisa, then the Mac. This is why Apple does what it does. It's not about share. It is all about margin, because high margins mean being able to capitalize the company, which funds new products, pays for risk, etc...
Dell can't become Apple. They run on thin margins! They aren't ever capitalized enough to make the kinds of investments needed to do what Apple does. That's the difference, and make no mistake, the Mac is just fine with the share it has because that user base pays plenty for a machine that's got a lot of value for them. So long as that results in good capitalization of the company, share doesn't matter. Again, that lesson to be learned from Apple Computer.
Those people, who do buy on specs alone, are typically PC buyers who are thin margin customers who want good enough and on it goes.
I deal with a lot of companies doing what I do and I've seen this all play out multiple times. The people who run Apple stuff are looking for specific attributes to be maximized and they pay for that, and those attributes rarely involve the fastest CPU, but they do want great power management, or a fantastic looking display, or a machine that is a breeze to travel with, or that runs UNIX, etc...
Apple is very well capitalized, and the iPod / iPhone + App Store, etc. really kicked it off for them, but make no mistake, the Mac was making them plenty of money and it delivered much higher margins than any PC did, meaning they made more money at a fraction of the share than race-to-the-bottom, hardware-value-only companies like Dell did at several times that volume of machines. And it made that money because Apple had software and hardware control, able to sell a very different value proposition. Microsoft sells software and Dell sells hardware, both of them high volume companies, neither able to do what Apple does and get what Apple gets for doing it. Ever wonder why Microsoft is trying really hard to get control of hardware and produce it too? This is why.
In any case, throughout all of this, I never, ever saw the best tech win on anything, ever. It's not about that, never will be about that. It's about the business model and the overall value added and most importantly, whether or not the producers of that product or solution do the work to get people to see that value and pay for it. Those are the people that win. They will always win.
And that comes down to "best tech" On which metric? Speed, power consumption, size, etc...? That's where adding value comes into play. An emphasis on one technical metric doesn't secure market share because it's about meeting demand and that's done by adding value around the tech as well as selecting optimal tech for the target markets. "Best Tech" really varies considerably.
As for the slow appearance of iTunes + iPod, blame the music industry itself for that. They have fought everybody over distribution and they continue doing that. Had they not fought over all that stuff, they themselves could have aced Apple out, building on Napster of all things, who was the first to actually propose a bulk subscription model for trading music. Had the majors said "yes" then, it's very likely Apple would have played out very differently today. They would still be making great computers and very likely would have just integrated that creation into their overall plan and made less money, but plenty still because they really don't race to the bottom for volume as much as they have always worked on figuring out how to add value across the whole effort. Lots of companies today could learn a lot from Apple computer.
Took forever for Apple to convince the majors to sell on iTunes. Took 'em longer still to convince them to sell in high value ways. Took longer still to get them to sell unencumbered music, etc... Blame the labels and the RIAA for that, not Apple. Whole other discussion there.
Jobs had a saying: If you buy a 30 cent part from a surplus store and sell it to somebody for $6, it was worth $6 to them and they don't need to know what you paid for it. That's been true at Apple since day one. And it's true right now. Things are worth what people will pay for them and when a lot of value is shown to people they will pay a lot of money. Truth is, people see that cheap part and think they should get it cheap too, but they were not the ones scrounging around sourcing it either, nor did they inventory it, document it, whatever. All of those things add value and when that value is well expressed people pay for it and that maximizes all the activity, not just the transacting on the part itself. That thinking is what got the cool products done. That is what a lot of people don't understand about Apple and technology in general.
The sum of all that is we end up in a world filled with cheap, plentiful, fast PC's that evolved to an optimal design, and those are thin margin commodity products. The rest of the world is niches where there is a lot of value to be offered and a lot of money to be made. The very best tech finds its way into some niches, most of it gets passed up in favor of good enough, and there is always new stuff bubbling up here and there. The best tech will flash up and make its impact from time to time, but realistically, the bulk of what is out there will be those things that are good enough and that can be improved on, dealt with, and standardized, so that the people who want to get stuff done can simply pay to get stuff done. They always pay for that, not for the nature of the tech, unless they are in a niche where it really matters, or they are enthusiasts like us who really like tech apart from the wider "getting stuff done" basis that the vast majority of people evaluate things on.
Edit: And I mean none of that in a bad way. Some of us just want the tech and we don't want to pay anything other than a thin margin for it too. Great! Plenty of takers on that, everybody is happy. Others want to get stuff done, and that's a whole other deal involving more than just technology, and that's great too, and that is what Apple generally does.
The key here is to recognize those dynamics and then apply them to the technology that is out there and, more importantly, the players and their business model. I don't see an Apple of ARM yet, but I'm kind of hoping one pops up. There is a growing opening for some machine that runs "good enough" on ARM, that makes heavy use of Open Source Software, that features great power management, solid design, etc... If somebody does that and makes the right deals, the wintel nut could get cracked wide open, and if it does, we are headed for really interesting times again.
A $50 machine can do the basics on ARM. Internet, Office, E-mail, Programming, Gaming, etc... Who is going to really nail it and do the Apple thing? When I'm thinking about tech and doing this kind of stuff, I just want a cheap ARM computer I can throw code at. Prolly will get a Pi or maybe an Android PC. But, I would pay nicely for an ARM machine with the work done. Nice GUI, managed software, drivers all sorted that just rocks hard. I would pay several hundred dollars for that and I suspect a lot of others would too. Make me a great little ARM laptop that runs all day on a battery, features ports I can use, killer display, OSS software for low run cost, app store to buy stuff and games, and I'm in! I would pay even more for that deal. Support it with regular OS updates, applications / drivers managed and such, and it's basically a Mac on ARM that presents huge value in that the machine might not even cost what all the software licenses would from Microsoft or Apple. Whoever does this will get nice margins too, because they would save people a lot of work easily worth paying for.
That's nipping right at the toes of wintel cheap machines, and bring that on! Heck, it could sit right next to my Mac as machines of choice, high value, elegant, fun, well designed, and I would turn the Thinkpads off.
***I don't run Dell, HP, or other cheap consumer grade machines, just because I want to buy a good one and run it for 5-10 years for much higher value, so I pay more, get more, and spend less over time. Just FYI, I'm either running near-free junk, because I might break or fry it (no joke), or I'm running a great machine I don't want to mess with for a really long time. Average just costs money over time. No thanks. Plenty of people do though, buying one average machine after the other. And there is that PC again. "Good enough" dominates because it's good enough, not "best" or "optimal", because most people just don't think it through, or value only specific things.
Ergo, I think that if your processor was 64 bits anyway, all that optimization work, resizing and allocating and deallocating the arrays, would not be necessary and you would fly all the time. Hope they don't, though. Shifting right is not the same as dividing for negative numbers:)
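To spell out what I mean, for anyone playing along (plain JS you can paste into a console):

 7 >> 1        // 3, matches integer division by 2
-7 >> 1        // -4: an arithmetic shift rounds toward minus infinity
-7 / 2 | 0     // -3: truncating the float result rounds toward zero

So a compiler can only swap a divide-by-two for a shift when it knows the value can't be negative, or when it adds a correction for the negative case.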
Wow, that must be a record long post, even for you:)
I'm not sure I quite caught what an Apple of ARM might be?
Seems to me the ARM is the Intel PC of mobile. As a design it is that platform that many others have picked up to build embedded devices from simple controllers to phones to iPads. It may not be so accessible to Joe Bloggs in the street like the PC ISA bus was but it does have a huge community of companies adopting it and building on it.
So is an ARM-Apple an expensive niche general purpose computer, modern day Apple style, or is it that just-good-enough, cheap, open, expandable general purpose computer, Apple II style?
The Raspberry Pi and VIA APC and others seem to show me that in a couple of years we could indeed have that 50 dollar 64 bit dual core ARM with 4-8GB RAM and SATA. At that point it's "good enough" to replace the Intel PC as we know it for 99% of use cases.
What, "wont run MS Office". Who cares. When the machine is cheap enough to be almost free and all documenting is done on the WEB (Does anyone word process for printing on their laser printer any more?) then MS is done as well.
Interestingly, ARM is a 'double acronym' (or whatever you'd call it) as
ARM = Advanced RISC Machines.
RISC = Reduced Instruction Set Computing.
My 1999 Psion netBook has a 190MHz StrongARM CPU, and of course runs a fully Pre-emptive OS with a GUI. It is also strong enough to run an IBM PC XT emulator.
(Some poor sap managed to get Windows to start in it, even. I mostly used it to run the DOS version of the PBASIC editor for my BS2p)
The way things are going, with more and more SW being written in assorted versions of Java, with more and more work being offloaded to internet-based servers, people will care less and less about what is actually inside their box.
I ran a Citrix app on my old Psion, to remote into the office... Now I can do that with any Mac, WintelPC, Linux box, pad or lappy.
Office 365 or Google Docs?
Does it matter as long as it can work with my files?
The fact is, it's mostly us 'HW tinkerers' who really care about the differences between platforms or 8/16/32/64 bit architectures any more, and it seems like there are fewer of us every year.
The big difference of course is that it's impossible for the little guy to do that in his garage.
I think it's a nice machine with the work done for the user. Some GUI investment so that all the little things are sorted out, designed so that it is useful in the way a nicely made laptop is.
Truth is, I'm not entirely sure. What I am sure of is I don't yet see anyone going for the higher margin, just get stuff done kind of thing. I do see some nice packages and options, or closed devices. Maybe it won't happen / can't happen. I do know what I would pay more for though, and generally that sense has aligned with higher margin products, where they are an option.
Right now, there is a fairly coarse and significant divide between mobile and PC. Tablets and such live in there, as do some more general devices. (Pi, Droid PC Boards) Microsoft's ARM port of Win 8 is notable for only being delivered on locked devices that pretty much ONLY run Win 8, and ONLY get apps from their store or some windows server acting as a store. They are careful not to make a general purpose machine that crosses that divide...
I'm saying somebody should. I think there is demand for that, and I think it's growing.
Regardless of high margin, low margin, get stuff done or otherwise, the practical reality has been that ARM chips were very capable processors for the power consumption, and so took over the phone/tablet/iPod mobile world. Pretty much because there was nothing else that would last any time on the batteries.
Meanwhile the Intel and AMD guys were hell bent on raw performance regardless of power consumption. They had no chance in the mobile market.
So, simple physics separated these two camps.
Now though ARMs are getting bigger, multiple cores and 64 bits building up to that "ARM-Apple" machine and the x86's are starting down a low power road.
Perhaps they will meet in the middle somewhere and we will have an interesting time of confusion in the market.
By the time an ARM grows enough to be your "ARM-Apple" it may be as power hungry as an Intel and not suitable for anything below a laptop. They are already saying that 64 bit ARMs are for server use.
We are peaked in terms of raw compute per unit of time. The two camps will meet and an ARM laptop could run OSS and App Store and fork the general computing market. Lots of us can get it done on a device like that. Those numbers would come mostly from Microsoft too. There is room for another Apple essentially.
The whole point of the post was the best tech isn't necessary. The business model and value added to the product is where it is all at.
Right now Microsoft has saturated its market and they have been making moves to close things down and grow revenue on licenses. They aren't adding as much value and are rent seeking with increasingly onerous license terms and a general grip on hardware aimed at keeping competition at bay.
It was telling to see win 8 and how the surface played out. Lots of people are interested in cool hardware and they don't care that it doesn't perform like Intel does.
That is the open door. ARM + a good set of support chips (tegra, etc) can do lots of useful things and there is a pool of operating system software out there and a ton of DROID software bubbling up.
Do what Apple did. Take an open OS, build a great GUI environment and manage the hardware so that all of it works and can see software updates like Apple does them. Deal with the ugly stuff too. Users don't want to know stuff, so manage that part of things and give them a get stuff done machine.
Microsoft is making a mess trying to blend tablet desktop mobile together and there is an opening for a general purpose machine that has a good GUI that just works. When one takes a look at the pool of open apps, there is plenty to work with. Add an app store and it is good.
People like us see it's a nice Unix, open and fast enough to develop on. Ordinary people see a machine that works well and it comes with lots of software that would cost tons if they were to buy licenses, etc...
Package that all up and ask for what all that work is worth and you get a high margin machine that can exist on a fraction share and there is real competition to Intel.
It sounds like they are turning into Autodesk...
C.W.