
8-Bit, 16-Bit, 32-Bit: What's the difference?

Aaqil Khan Posts: 60
edited 2013-02-12 09:01 in General Discussion
- Can someone explain in layman's terms, what's the difference between 8-bit, 16-bit (or n-bit, for that matter) microcontrollers?
- How many bits is the BS2 microcontroller?
- What does it mean to the hobby programmer?

Thanks.

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
E=mc^2

Comments

  • James Long Posts: 1,181
    edited 2007-01-18 00:56
    Ok....I'll take a stab at a layman's definition:

    8 bit : 00000000 capable of doing numbers up to 255 (0-255)

    16 bit: 00000000_00000000 capable of doing numbers up to 65535 (0-65535)

    32 bit: 00000000_00000000_00000000_00000000 capable of doing numbers up to 4294967295 (0-4294967295)

    The bit level is how many bits the controller can handle.

    The BS2 is a 16 bit microcontroller.

    The usual limitation for the hobbyist is the limitation of calculations. If your number does not fall within these ranges....then you have a problem. I didn't go into detail about -/+ because I don't want to have to type that much. If you use -/+ you just split the range in half......half negative, and half positive.
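
    A minimal C sketch of those ranges and of the "roll over" you hit when a value no longer fits, assuming only a C99 compiler with the standard stdint.h fixed-width types:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint8_t  b = 255;          /* largest 8-bit value  */
            uint16_t w = 65535;        /* largest 16-bit value */
            uint32_t l = 4294967295u;  /* largest 32-bit value */

            b = b + 1;  /* wraps to 0: the value "rolls over" when it no longer fits */
            printf("255 + 1 in 8 bits = %u\n", b);                  /* prints 0 */
            printf("16-bit max = %u, 32-bit max = %lu\n",
                   (unsigned)w, (unsigned long)l);
            return 0;
        }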

    Let me know if this doesn't make sense, I will explain further.

    James L
  • Mike Green Posts: 23,101
    edited 2007-01-18 01:32
    In addition to the size of the basic arithmetic operations, the "width" of the processor affects the speed and sophistication of the instructions as well. The IBM 360 series of computers was a good example: an identical instruction set across models with 8-bit, 16-bit, 32-bit, and 64-bit data paths. The wider the data paths (including memory width), the faster they ran and the more expensive and physically larger they got.

    As a practical example with Parallax, the Stamps are actually 8-bit processors (SX or PIC) internally that run a program (the PBasic interpreter) that "pretends to be" a 16-bit processor for the PBasic machine which the PBasic compiler translates your Stamp programs into. The Propeller is a true 32-bit processor (actually 8 separate 32-bit processors on one chip) that sometimes runs a program (the Spin interpreter) that "pretends to be" a 32-bit processor for the Spin machine which the Propeller Tool translates your Spin programs into. The Propeller can also run programs or pieces of a program written in its native instruction set (assembly language) for speed or precise timing.
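
    To make "pretends to be" concrete, here is a deliberately tiny sketch in C (an invented example, not Parallax's actual PBasic or Spin interpreter): the host CPU, whatever its width, just loops over the bytecodes of a made-up 16-bit virtual machine and carries each one out with its own native operations.

        #include <stdint.h>
        #include <stdio.h>

        /* A made-up 16-bit virtual machine with four opcodes, purely for
           illustration; real Stamp/Spin bytecodes look nothing like this. */
        enum { OP_LOAD = 0, OP_ADD = 1, OP_PRINT = 2, OP_HALT = 3 };

        static void run(const uint8_t *code)
        {
            uint16_t acc = 0;   /* the emulated 16-bit accumulator */
            size_t   pc  = 0;   /* the emulated program counter    */
            for (;;) {
                uint8_t op = code[pc++];
                switch (op) {
                case OP_LOAD:   /* next two bytes are a 16-bit literal, low byte first */
                    acc = (uint16_t)(code[pc] | (code[pc + 1] << 8));
                    pc += 2;
                    break;
                case OP_ADD:
                    acc = (uint16_t)(acc + (code[pc] | (code[pc + 1] << 8)));
                    pc += 2;
                    break;
                case OP_PRINT:
                    printf("%u\n", acc);
                    break;
                case OP_HALT:
                default:
                    return;
                }
            }
        }

        int main(void)
        {
            /* LOAD 1000; ADD 234; PRINT; HALT  -> prints 1234 */
            const uint8_t program[] = { OP_LOAD, 0xE8, 0x03,
                                        OP_ADD,  0xEA, 0x00,
                                        OP_PRINT, OP_HALT };
            run(program);
            return 0;
        }

    The real interpreters are vastly richer, but the shape is the same: fetch a virtual instruction, decode it, carry it out with native operations, repeat.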
  • James Long Posts: 1,181
    edited 2007-01-18 01:40
    Mike,

    You never stop amazing me with your knowledge. I didn't know the Stamp was an 8 bit micro that "pretends to be a 16 bit".

    Interesting.

    Thanks for the one new thing for today.

    James Long
  • LSB Posts: 175
    edited 2007-01-18 01:58
    My 2-bits:

    An 8 bit processor processes 8 bits at a time, a 16 bit processor 16, a 32 bit processor 32, ad infinitum…. I suspect that carries little meaning; it’s not always clear what the value of a bit is unless you need one.
    First, understand that for this explanation a byte is the number of bits that the processor can process in one instruction (strictly speaking that is the processor's word size, since a byte is always 8 bits, but bear with the term). An 8 bit processor byte is 8 bits, like: 10110011; a 16 bit processor byte is like: 1100011101110011.

    Let’s start simpler… Suppose you had a one bit processor (a byte would be like: 1). Now one bit can carry some information—a 1 or a 0 (because bits are tiny little switches that are either on or off). Instructions are also carried to the processor in this way, so you could have two instructions—one represented by a 0 and one represented by a 1. A microprocessor works by loading a data byte and an instruction; the instruction tells the processor what to do with the data—perhaps to add it to the next byte it receives, or to subtract it, hold it, or store it. What can we do with 2 instructions (1 or 0)? One must be used to tell the processor to go and get the next byte, so that leaves us only one instruction—let’s say we use it to tell the processor to add the current byte to the next byte received. Ok… good, we’ve got a computer that adds, but we can represent only two totals: 0 or 1. What happens when we add 0+0? We get zero and our computer works as we expected; the same with 0+1 and 1+0, we’re golden; everything hums along perfectly. What happens when we add 1+1? The answer is not one—it’s zero! The processor “rolls over” and starts over because we’re out of room. We’re stuck—no way to add an instruction telling the computer to put the extra bit someplace and no byte to hold it.

    Let’s build a two bit processor (a byte would be like: 01). Using the same example as above, we can now add 1+1 and handle the answer: 2. We represent this in bits by 10—one group of 2 and no groups of 1. We can add 2+1; our byte to represent this is: 11—one group of 2 and one group of 1. When we add 2+2, we run out of space again, but we now have extra instructional bits as well! We can add an instruction that tells the computer to put the extra bit someplace so we can display it as part of the answer later. We’re saved! We can just keep putting our extra bits someplace and let them stack up, because we just tell the computer to go and print them out as part of the answer when we need them… almost. Our problem is our computer won’t count any higher than three—how do we tell the processor where all our 'extra bits' are? Maybe we could add extra bytes someplace to tell our computer where the extra bytes are that we need for our answer… maybe, but all this going and getting extra bytes takes time and instructions, and we run out of room, so we build a bigger processor.

    Eventually we get to an eight bit processor (like the Stamp). With eight bits we can represent 256 different values, like: 00000000, 00000001, 00000010… 11111111. As we did with our 2-bit processor we count them: no groups of anything, all the way up to one group of 128 and one group of 64 and one group of 32 and one group of 16 and one group of 8… (there’s a lot of information on binary counting—use Google). 256 instructions and adding chunks of 256 are sufficient for many hobbyist needs. If one of our instructions is “The next number is a color” then we can draw pretty good pictures with 256 colors (like old video games). If one of our instructions is “the next number is a letter” then we can do all the letters of the alphabet UPPERCASE and lowercase and have periods and question marks and just about everything. With 256 instructions we can add, subtract, multiply, and divide… that’s only 4—we have 252 left! When we need to put bits someplace, we can tell the computer 256 places where they are… that’s pretty good, but we may need more… so we build an even bigger computer with 16 bits in a byte! Now we can draw really good pictures and give the computer even more instructions!

    Maybe you think nobody will ever need this many bits in a byte, but if you think of bits as colors and imagine that children can be happy with only a few colors, then imagine an artist… You see why most hobbyists (like me) are pretty happy with 8-bits and why some people need 16 bits—or more!

    This explanation is WAY over simplified and, frankly, not strictly true in every aspect, but I sacrificed facts only to clarify my explanation. More bits per byte means more capability, more options, and more possibilities. Why am I happy with 8 bit processors? Because I can buy some brands for about a dollar and I’m not an artist.
  • Aaqil Khan Posts: 60
    edited 2007-01-19 18:25
    Thank you everyone for your responses. By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller. I can imagine the other way around, but I don't get how a "lower" microcontroller can pretend to be a "higher" controller. I mean, at its core it's still computing at 8 bits, isn't it?

    What about the basic stamps that have PICmicros in them? Are these PIC16 or PIC18 and what's the difference between them?

    Thanks again.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    E=mc^2
  • Gadgetman Posts: 2,436
    edited 2007-01-19 19:36
    The definition I learned is that 'an 8bit micro' has an 8bit wide data-bus, and also works with 8bit instructions.
    The 6502, the Z80, 8080, 8085 are all classic examples of 8bit micros, though...
    the Z80 can do some 16bit load, logic and arithmetic operations.

    Also, the Intel 8088 (used in the first PCs) had an 8bit databus but worked with 16bit 'wide' instructions internally. It just had to load everything byte by byte.
    (It was an 8086 in disguise. IBM picked the 'castrated' version because a narrower databus meant a simpler and cheaper design... )
    A Zilog Z8000 can do 64bit arithmetic, but it is still only a 16bit processor.

    Where it gets really weird is in some microcontrollers, where the instruction-width may be 12 or 14 bits (or whatever suits the designer) so that the entire instruction-set fits into it.
    (The Z80, on the other hand, had such a large set of instructions that they had to use 'prefix' bytes to expand the available 'command space'. All in all, they had 138 different instructions, and with variations, hit about 760 or so.)
    Anyway, on microcontrollers where the instruction-width is different from the data-width, these also occupy separate memory-banks, and we usually think of the DATA area when we count the bits.

    That just leaves the bit-slice processors...
    (But that way lies madness... )

    Speaking of madness...
    the 6502 had something called 'zero-page addressing', in which loads and stores could use 8bit addresses. It made for pretty fast code, IF you could squeeze your data into the first 256 bytes of the address space.
    Yes, an addressing mode which only worked in the first 256 bytes of the 65536 bytes of addressable RAM/ROM...

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Don't visit my new website...
  • Mike Green Posts: 23,101
    edited 2007-01-19 20:39
    Aaqil Khan,
    When I said "pretend to be". I was referring to the use of an interpreter or emulator. You may know the statement about dogs having fleas and the fleas having fleas "ad infinitum". The same kind of thing holds true in computing. Somewhere on the bottom of it all is basic hardware. There may be an adder and some logic for Boolean arithmetic (and, or, exclusive or, etc.) and shifting. There's usually some registers including something that indicates where the next instruction is coming from as well as some logic to interpret parts of the instruction. On the other hand, what the user sees and what's in the published manual and sales brochure may have little to do with the functioning of the hardware. The current Stamps, for example, at least the BS2p series, use a chip from the SX microcontroller series from Ubicom which Parallax sells. You can download the manuals from Parallax's website. This is an 8-bit processor (with all 8 bit data paths, an 8-bit arithmetic unit, 8-bit data memory) although the instruction memory (an electrically erasable programmable mostly read-only memory - EEPROM) is 12 bits wide and some registers, like the program counter and call/return stack are 12 bits wide. Parallax sells some of these preprogrammed with a program that is an emulator for a 16 bit processor (the Stamp interpreter) with many specialized instructions for I/O like bit serial, frequency generation, and lots more. This emulated instruction set has no resemblance to the native instruction set of the hardware, yet is what the user sees (and buys) when a Stamp is bought. Similarly, the IBM System 360 series of computers from the 1960's and 1970's was an instruction set and a series of compatible models with a wide variety of hardware from an 8 bit processor to one with 64 bit data paths, all of which actually emulated the System 360 instruction set. The actual native instruction sets (what the hardware actually did) were all different and documented only in internal manuals. What the user saw and bought was something that did the System 360 instructions at a speed that corresponded roughly to how much was paid for the hardware.

    By the way, the PIC12, PIC16, and PIC18 names don't map directly onto instruction width: the baseline parts (many PIC10/PIC12 devices, and the SX) use 12 bit instructions, the mid-range PIC16 family uses 14 bit instructions, and the PIC18 family uses 16 bit instructions. A wider instruction word usually allows for larger program memories and more data registers, and sometimes additional instructions.
  • well wisher Posts: 4
    edited 2013-02-01 01:41
    Hi all,
    Is it possible to interface a 16 bit SPI device with an 8 bit microcontroller? (16 bit SPI device in the sense that its registers are 16 bits wide.)
  • jmg Posts: 15,140
    edited 2013-02-01 02:01
    Aaqil Khan wrote: »
    Thank you everyone for your responses. By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller.

    I have a calculator program on the PC that can give 100 digit answers.
    Here, the PC 'pretends to be' a 300 bit processor, in that it is processing 300 bit numbers and giving 300 bit results.
    Of course, it does this by splitting them into smaller numbers while it works, and it will not be as fast as 32b x 32b.
    Even inside a microcontroller, some math ops can take quite a number of cycles, as the device does the same splitting into smaller numbers, which is why you need to check the CPU speed in MHz and also the cycles per opcode.
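
    A minimal C sketch of that splitting, assuming nothing beyond stdint.h: a 32-bit addition carried out one byte at a time with an explicit carry, which is roughly what a narrow ALU (or a 100-digit calculator program) has to do under the hood.

        #include <stdint.h>
        #include <stdio.h>

        /* Add two 32-bit numbers using only 8-bit pieces plus a carry,
           the way an 8-bit ALU has to. */
        static uint32_t add32_bytewise(uint32_t a, uint32_t b)
        {
            uint32_t result = 0;
            unsigned carry  = 0;
            for (int i = 0; i < 4; i++) {
                unsigned byte_a = (a >> (8 * i)) & 0xFF;
                unsigned byte_b = (b >> (8 * i)) & 0xFF;
                unsigned sum    = byte_a + byte_b + carry;  /* at most 0x1FF */
                carry  = sum >> 8;                          /* 0 or 1        */
                result |= (uint32_t)(sum & 0xFF) << (8 * i);
            }
            return result;  /* a carry out of the top byte simply wraps */
        }

        int main(void)
        {
            printf("%lu\n", (unsigned long)add32_bytewise(4000000000u, 123456789u)); /* 4123456789 */
            printf("%lu\n", (unsigned long)add32_bytewise(1000u, 234u));             /* 1234       */
            return 0;
        }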
  • localroger Posts: 3,451
    edited 2013-02-01 17:25
    Aaqil Khan wrote: »
    By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller. I can imagine the other way around, but I don't get how a "lower" microcontroller can pretend to be a "higher" controller.

    As long as they have enough memory, any proper computer can "pretend to be" any other type of computer. (Such "proper" computers are described as being "Turing complete," because the first computer ever described that they might pretend to be was the theoretical Universal Turing Machine.) What you lose in the translation is performance. When an 8-bit micro emulates a 16-bit multiply which is a single instruction on the 16-bit micro, the 8-bitter has to run a little subroutine that may execute 30 or 40 instructions before producing a result.
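
    For a flavour of what such a subroutine does, here is a hedged C sketch (not any particular micro's actual library routine): a 16x16 multiply built from four 8x8 partial products, the schoolbook method an 8-bit CPU with only an 8x8 multiply, or no multiply at all, has to fall back on.

        #include <stdint.h>
        #include <stdio.h>

        /* 16x16 -> 32 bit multiply composed of four 8x8 partial products,
           roughly what an 8-bit CPU's multiply subroutine has to do. */
        static uint32_t mul16_from_8bit(uint16_t x, uint16_t y)
        {
            uint8_t xl = (uint8_t)(x & 0xFF), xh = (uint8_t)(x >> 8);
            uint8_t yl = (uint8_t)(y & 0xFF), yh = (uint8_t)(y >> 8);

            uint32_t result = (uint32_t)xl * yl;        /* low  * low           */
            result += ((uint32_t)xl * yh) << 8;         /* cross terms, shifted */
            result += ((uint32_t)xh * yl) << 8;
            result += ((uint32_t)xh * yh) << 16;        /* high * high          */
            return result;
        }

        int main(void)
        {
            printf("%lu\n", (unsigned long)mul16_from_8bit(12345, 6789));  /* 83810205 */
            return 0;
        }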

    Check this out: http://dmitry.gr/index.php?p=./04.Thoughts/07.%20Linux%20on%208bit

    That's a hilariously extreme example of someone building a system that boots Linux -- something you "need" some fairly hefty hardware to support, but he gets it going on a chip about the same power as the one at the heart of a Basic STAMP. It manages to boot to the BASH command line in only 2 hours, and you can log into Ubuntu in 6 hours. It also runs X windows but getting that started takes *cough* a lot longer.

    All in all, bit width at the hardware level is mostly about performance. A wider bus to memory transfers more data per cycle; a wider bus in the ALU does more math per cycle. But assuming you're willing to do a little code translation or emulator writing, there's no reason an 8-bit CPU can't run code meant for a 64-bit or vice-versa.
  • evanh Posts: 15,126
    edited 2013-02-01 18:36
    It can be a reference to the address bus width also. This is the recent meaning used by the PC industry since the advent of the AMD64 instruction set. Even though the AMD64 architecture also brought with it 64 bit wide data registers, it's the >4GB (2GB) address range that most are concerned with. After all, the front-side databus width of the x86 processors has been 64 bit since the Pentium 1.

    Interestingly, another variation, that I thought was a bit rude, is the 68000, which, with its 32 bit architecture from the late 1970's, seemed to get classed as a 16 bit processor (in the 1990's at least) just because its main databus width was 16 bit. I think that might just have been sour grapes from the PC industry though.

    If one uses the data registers alone as the definition of the processor size then the 80386 and 80486 were for the most part only ever a 16 bit processor. This is just because they were mostly used on PCs running MS-DOS and Windows 3.x, neither of which used the 32 bit mode.
  • evanh Posts: 15,126
    edited 2013-02-01 18:51
    Hi all,
    Is it possible to interface a 16 bit SPI device with an 8 bit microcontroller? (16 bit SPI device in the sense that its registers are 16 bits wide.)
    Lol, I just noticed the dates. Everyone had replied to the OP rather than you. :)

    Clocking the bits in is no biggie if you have bit banging access to the hardware. Otherwise expect problems.

    After that you just have to manage the values as 16 bit words rather than 8 bit words. That might mean having upper and lower bytes as two separate variables or, depending on your language, you may already have a 16 bit data type or even be able to construct one.
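
    A hedged C sketch of that idea, with stub functions standing in for your micro's real port-register access (the names spi_cs_low, spi_mosi_write and so on are made up): the 16-bit register is just shifted out and in as two back-to-back 8-bit halves under one chip-select, and the exact clock/sample ordering depends on the device's SPI mode.

        #include <stdint.h>

        /* Stubs standing in for real port-register access on your micro;
           the names and behaviour here are hypothetical. */
        static void spi_cs_low(void)        { /* drive chip-select low  */ }
        static void spi_cs_high(void)       { /* drive chip-select high */ }
        static void spi_clk_pulse(void)     { /* toggle SCK high, then low */ }
        static void spi_mosi_write(int bit) { (void)bit; /* set MOSI pin */ }
        static int  spi_miso_read(void)     { return 0;  /* sample MISO pin */ }

        /* Shift one byte out (MSB first) while shifting one byte in.
           Where MISO is sampled relative to the clock edge depends on the
           device's SPI mode; check its datasheet. */
        static uint8_t spi_xfer8(uint8_t out)
        {
            uint8_t in = 0;
            for (int i = 7; i >= 0; i--) {
                spi_mosi_write((out >> i) & 1);
                in = (uint8_t)((in << 1) | (spi_miso_read() & 1));
                spi_clk_pulse();
            }
            return in;
        }

        /* A 16-bit register access is just two 8-bit transfers under one
           chip-select, high byte first on most devices. */
        static uint16_t spi_xfer16(uint16_t out)
        {
            spi_cs_low();
            uint8_t hi = spi_xfer8((uint8_t)(out >> 8));
            uint8_t lo = spi_xfer8((uint8_t)(out & 0xFF));
            spi_cs_high();
            return (uint16_t)(((uint16_t)hi << 8) | lo);
        }

        int main(void)
        {
            (void)spi_xfer16(0xABCD);  /* send one 16-bit value to the device */
            return 0;
        }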
  • User Name Posts: 1,451
    edited 2013-02-01 20:06
    localroger wrote: »

    That's fabulous!!

    A couple hours to boot Linux gave me an odd sense of d
  • kwinn Posts: 8,697
    edited 2013-02-01 21:12
    AFAIK the de-facto standard is now: 8 bits - byte, 16 bits - word, 32 bits - long, 64 bits - dlong (double long). This is what I see most often in the literature.
  • Heater. Posts: 21,230
    edited 2013-02-02 01:05
    evanh,

    If one uses the data registers alone as the definition of the processor size then the 80386 and 80486 were for the most part only ever a 16 bit processor. This is just because they were mostly used on PCs running MS-DOS and Windows 3.x, neither of which used the 32 bit mode.

    No. The 386 and 486 were very much 32 bit machines. At the time I was writing 32 bit programs in C using the wonderful Watcom compiler (which is still available as an open source project: http://www.openwatcom.org/index.php/Main_Page ). Those programs ran under DOS just fine.

    Also, even when using a 16 bit assembler for 16 bit mode programs, one could just put a prefix byte in front of instructions to extend them to 32 bit operations. That made my Mandelbrot set program really fly.

    The fact that it took a decade for Microsoft to catch up to 32 bits is shameful. The 386 was introduced in 1985.
  • LoopyByteloose Posts: 12,537
    edited 2013-02-02 04:13
    If you need 32bit, you will understand the difference - usually applications are for extreme precision of numbers, color video or large storage devices.

    There is a large world out there that can easily use 8 bits and 20 MHz.

    I have a 64bit Quad desktop and only once has it been really needed -- to cross-compile a binary image from source for my Cubieboard. And this was only because the source was written for a 64bit machine. I normally run a 32bit OS on it as there is more software available with fewer bugs.

    The 32bit Propeller makes writing code a bit easier and faster as it has 32 I/O lines. Having to manage I/O banks requires more planning and consideration.
  • evanh Posts: 15,126
    edited 2013-02-02 18:17
    Heater. wrote: »
    The 386 and 486 were very much 32 bit machines.
    You'll note I said "... for the most part only ever a 16 bit processor."

    Of course they were capable of running as 32 bit processors, but in typical use they were only ever 16 bit chips. All the rest of the babble about being 32 bit processors wasn't any more than marketing hype for the average Joe.

    Whereas all versions of the 68k were 32 bit in nature, even on the Sinclair QL - with its 8 bit databus.
  • evanh Posts: 15,126
    edited 2013-02-02 18:31
    If you need 32bit, you will understand the difference - usually applications are for extreme precision of numbers, color video or large storage devices.

    There was a time I would have declared 4GB of RAM to be excessively huge and never would anyone ever want more. Hell, even a 4 GB file seemed insanely huge; one would never want to have the whole thing in main memory all at once. But things change. RAM speeds - 128/256 bit wide buses - are such now that massaging such huge volumes of data is a walk in the park. Addressing range beyond the 32 bit limit is suddenly useful.
  • potatohead Posts: 10,253
    edited 2013-02-02 18:42
    Heh...

    My workhorse machine has 12GB RAM, and I'm going to double that this year. I run virtual machines that typically will eat up 2GB or so, but the big data hog is just large data sets. CAD models are impressive these days and there is always a call for more RAM. 64 bit computing was adopted early and easily by people doing this work. Simulation is another data hog. Those guys take all the RAM they can get. 64GB machines are common. GM specs a very high end nVidia Quadro graphics engine, 64 bit OS (Linux / Windows) and 24GB just to start.
  • evanh Posts: 15,126
    edited 2013-02-02 18:57
    kwinn wrote: »
    AFAIK the de-facto standard is now: 8 bits - byte, 16 bits - word, 32 bits - long, 64 bits dlong (double long). This is what I see most often in the literature.

    Long is a C term that has no defined size. PC weenies may have borrowed it but there is a distinct conflict given the broad usage of C.

    Word is a hardware engineering term that, like long, is dependent on the design/architecture for its size. Word size can also be very dynamic in that it can be context driven, ie: the data path point of focus can be referred to as the word size. This occurs particularly in mixed signal conversions.

    Again, because of the broad utilisation of digital hardware in the various computing industries there are many conflicts when reusing such a term.
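
    A small C illustration of that ambiguity, assuming only the standard headers: long is whatever the platform says it is, while the stdint.h names pin the width down explicitly.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* Platform-dependent: commonly 4 bytes on 32-bit targets and on
               64-bit Windows, 8 bytes on most 64-bit Unix systems.          */
            printf("sizeof(long)     = %zu bytes\n", sizeof(long));

            /* Fixed widths by definition, whatever the platform. */
            printf("sizeof(uint8_t)  = %zu\n", sizeof(uint8_t));
            printf("sizeof(uint16_t) = %zu\n", sizeof(uint16_t));
            printf("sizeof(uint32_t) = %zu\n", sizeof(uint32_t));
            printf("sizeof(uint64_t) = %zu\n", sizeof(uint64_t));
            return 0;
        }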
  • kwinn Posts: 8,697
    edited 2013-02-02 21:06
    Valid points, which is why I said de-facto standard. I have worked with 5 bit character codes, 6, 7, 8, and 9 bit bytes, 12, 14, 16, and 18 bit words, 24, 28, 32, and 36 bit longs as well as a few other oddball data and address sizes over the years. Even worked with a 4 bit nibble (nybble?) processor at one point. Some were mini or microcomputer systems, and some were custom processors built into instruments using 7400 series TTL chips. At the present time when someone says byte, word, or long I assume 8, 16, or 32 bits unless they specify something else, and I think most people do the same.
  • LoopyByteloose Posts: 12,537
    edited 2013-02-02 21:56
    I suspect if you investigated what people really were using, you would get a statistical bell curve with a center somewhere between 16 bit and 32 bit.

    On the other hand, the marketing of computers is like what the 1950s-60s were to automobiles, where everyone pushed the buyer into bigger and bigger horsepower for the sake of status.

    I am not a leading edge developer and unfortunately the motherboard for my Quad 64 bit is oddly limited to 4 GB of RAM. So that's what I have and use... quite comfortably.

    Color video seems to have really reached its limit at 24 bit color gradation as people just can't see any finer difference. The industry went to 32bit only by adding 8 bits to indicate transparency of colors. Audio and digital synthesis have similar limits that we have already reached.

    So much of the big and fast is now only required for large data schemes or special purposes. Robotic control seems not to need to go to the limits, unless you are in a top gun fighter jet and just hoping that an extra nanosecond will defeat your foe.

    So it is all about what the task is, how fast the timing has to be, and how precise your number needs to be. Status and bragging rights seem counter-productive as often the guy who usually can buy the most extreme hasn't a clue how to use it.

    Limits are good for the learner and they demand real solutions and creative abilities to seek alternatives.
  • potatohead Posts: 10,253
    edited 2013-02-02 22:26
    Actually they can. Medical display systems use 10 bits of color per channel. I've got one PC graphics system capable of that and 256 grey levels is well beneath what we can really see. Adding a couple of bits really does make a difference. The steps can still be seen, but one has to work for it.

    I've noted 8 bit systems employ dithering to blend the coarse steps away. Same for many consumer grade displays. My plasma TV does this. It has a mode where I can turn it off, and the difference is very significant. Steps can very clearly be seen. For video, the trade off of resolution due to dithering really isn't a big deal. If there is any serious visualization going on, then it is.
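
    As a rough illustration of the idea (a generic sketch, not how any particular TV or display card does it), here is what reducing a 10-bit sample to 8 bits looks like in C, first by plain truncation, which produces visible steps, and then with a little noise added before truncating, which trades the steps for fine grain:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Plain truncation: 10-bit (0..1023) down to 8-bit (0..255). Nearby
           input values collapse onto the same output step, seen as banding. */
        static uint8_t truncate10to8(uint16_t v10)
        {
            return (uint8_t)(v10 >> 2);
        }

        /* Dithered version: add 0..3 of noise before dropping the low two
           bits, so the average output over an area tracks the 10-bit value. */
        static uint8_t dither10to8(uint16_t v10)
        {
            uint16_t noisy = (uint16_t)(v10 + (rand() & 3));
            if (noisy > 1023) noisy = 1023;
            return (uint8_t)(noisy >> 2);
        }

        int main(void)
        {
            uint16_t level = 513;  /* a 10-bit grey level between two 8-bit steps */
            printf("truncated: %u\n", truncate10to8(level));
            for (int i = 0; i < 8; i++)
                printf("dithered:  %u\n", dither10to8(level));  /* mix of 128s and 129s */
            return 0;
        }

    Real displays tend to use ordered or error-diffusion dithers rather than plain random noise, but the trade is the same: spatial resolution exchanged for apparent bit depth.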

    The medical displays are a notch better in that they are pixel perfect at the 10 bit resolution, and I'm sure that's where the high cost is. FWIW, older Matrox display cards for PC offered the 10 bit display. On CRT displays, it really pops! Those displays can easily render the better signal. LCD computer monitors not so much, unless they are very high quality ones.

    As for robotic control, yeah we for the most part don't need 64 bits on the execute end, but front end programming and simulation eats data like no other. Modern controls are rapidly moving to 64 bit territory. As part complexity goes up along with speed requirements, the incoming data streams need to be more than simple primitives. The better multi-axis controls need serious compute and floating point to process curves and account for the kinematics of the robots to remain both fast and accurate.

    CAM data to build that jet or car is huge right now, and real machine simulation, including material cut-away visualization, can bury even a 64GB RAM computer on models smaller than you would first expect. It turns out the simulation is growing very important for manufacturing cells where robots work in tandem with one another and multi-axis machines are difficult to visualize, and high speed machining makes the simulation near mandatory so that material conditions are known well enough to allow maximum-case cutting near tool tolerances.

    IMHO, the data requirements for CAM / robotics exceed CAD easily, competing with high end analysis. (CFD / FEA)
  • evanh Posts: 15,126
    edited 2013-02-03 01:05
    kwinn wrote: »
    Valid points, which is why I said de-facto standard.
    While, at the same time, acknowledging good reasons to get it right? :/
  • evanh Posts: 15,126
    edited 2013-02-03 01:09
    potatohead wrote: »
    I've noted 8 bit systems employ dithering to blend the coarse steps away. Same for many consumer grade displays. My plasma TV does this. It has a mode where I can turn it off, and the difference is very significant.

    Cool, what is the name of this setting in the TV?
  • kwinn Posts: 8,697
    edited 2013-02-03 02:01
    evanh wrote: »
    While, at the same time, acknowledging good reasons to get it right? :/

    I am missing your point here. Get what right? That almost all data is in multiples of 8 bits and referred to as bytes, words, longs, and double longs in most cases? Even Harvard architecture chips like the PICs handle data in 8 bit bytes regardless of the word size they use for instructions and addressing.
  • Heater. Posts: 21,230
    edited 2013-02-03 02:13
    evanh
    .. for the most part only ever a 16 bit processor.
    Well, alright "for the most part". It's was just annoying that Microsoft's operating systems were so far behind the hardware at the time, and for a long time.
    ...babble about being 32 bit processors wasn't any more than marketing hype for the average joe.
    Granted, for Average Joe on MSDOS he was stuck on 16 bits, but strangely in this case it was not marketing hype. They really were 32 bit machines. The 386 was the first Intel chip I could warm up to as it got us out of messing with that horrendous 64K segmented memory.
    Whereas all versions of the 68k were 32 bit in nature, even on the Sinclair QL - with its 8 bit databus.
    Yep, wonderful machines.
  • Heater. Posts: 21,230
    edited 2013-02-03 02:32
    Re: 64 bit.

    Many have the view that 64 bit is not really required, and normally I would have agreed with you all.

    But there is one place where I think 64 bits is perhaps a big win even if you aren't needing data sets bigger than 4GB: JavaScript.

    You see JS only has one number type and that number type is 64 bit floating point. Which of course can also be used to accurately represent 53 bit integers.

    Now we have things like Google's V8 JavaScript engine that compiles JS to native code on the fly and is astonishingly fast. It goes through some contortions to actually use 32 bit integer operations if it can determine that your numbers are small enough ints. With a 64 bit processor those contortions can go away for even more native speed.
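
    The 53-bit figure comes from the size of an IEEE-754 double's significand, and it's easy to check outside JavaScript too; a short C sketch:

        #include <stdio.h>

        int main(void)
        {
            /* An IEEE-754 double has a 53-bit significand, so integers up to
               2^53 are exact; one past that can no longer be represented.   */
            double ok  = 9007199254740992.0;   /* 2^53                       */
            double bad = ok + 1.0;             /* rounds back to 2^53        */

            printf("2^53     = %.0f\n", ok);
            printf("2^53 + 1 = %.0f\n", bad);
            printf("equal? %s\n", (ok == bad) ? "yes" : "no");  /* yes */
            return 0;
        }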

    "Whos cares about JS?", I hear you asking, "that's only a dinky scripting language for web browsers".

    Turns out JS is a very sophisticated language and is taking over the world. It's already in all your browsers. It's being used for mobile apps in phones/tabs. It's being adopted for server side use. It's used in the Qt GUI tool kit. It's the basis for things like WebOS.

    I'm even using it for the heavy networking code I need in a small ARM based embedded system. It flies there as fast as similar code in C++ and is a few dozen times easier to write.

    Soon we will need 64 bits everywhere as JS continues its expansion.
  • evanh Posts: 15,126
    edited 2013-02-03 06:28
    Heater. wrote: »
    Well, alright "for the most part". It's was just annoying that Microsoft's operating systems were so far behind the hardware at the time, and for a long time.
    Such was the state of the Wintel world and yet they still claimed the moral high ground - calling everyone else religious zealots. And still do. :/
    Granted, for Average Joe on MSDOS he was stuck on 16 bits, but strangely in this case it was not marketing hype.
    Getting a little off topic here, but that is exactly what made it hype. Being talked into buying this rather than that - it's technically better and so forth ... except it wasn't being used! Whereas buying the "other" did get you using the technically better product straight away, and it was like that even earlier.

    I guess what I'm saying is the specs don't actually matter when it comes to sales of alternative hardware when it doesn't run the same software. Particularly if it needs porting - recompiling is bad enough. What matters is popularity of the software. "I want to use the same software as my competitor" or "... my friend". "Preferably the copy I've put on this blank disc." or more recently "... the copy I've downloaded."

    So many dreams have been squashed through this mechanic. Can this hardware (and OS) bias be eliminated? Tablets hardly count (unless you can put a full blown CAD or DTP on one and have it usable?). The Mac survives (as rebadged PC hardware since the mid 1990s), thanks mainly to the Web being born and then the music industry completely burying its head in the sand, and has always had some decent software for it, but is still insignificant, to the point of irrelevant, up against Windoze. High end workstations presumably still exist too but will just be expensive PC based dongles in packaged deals.

    It would need some serious regulation me thinks. I'm not sure the computer as we know it would survive, ie: even the mighty PC could be regulated out of existence. Maybe open source can pull through and keep the industry honest, thereby getting the best of both worlds - experimental/flexible architectures and the common apps we want to use on them - dunno.


    Cripes, I'm starting to make Potato sized posts here.
  • evanh Posts: 15,126
    edited 2013-02-03 06:33
    Heater. wrote: »
    You see JS only has one number type and that number type is 64 bit floating point. Which of course can also be used to accurately represent 53 bit integers.

    Lol, pull the other one! :P We had BASIC and many others long ago with floats as their only numerical datatype. They may or may not have utilised an FPU, but none in any way defines the word size of the hardware. And the only way the FPU can be considered is if it's singled out exclusively.