
Thread: 8-Bit, 16-Bit, 32-Bit: What's the difference?

  1. #1

    8-Bit, 16-Bit, 32-Bit: What's the difference?

    - Can someone explain, in layman's terms, what's the difference between 8-bit and 16-bit (or n-bit, for that matter) microcontrollers?
    - how many bits is the BS2 microcontroller?
    - what does it mean to the hobby programmer?

    Thanks.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    E=mc^2

  2. #2


    OK... I'll take a stab at a layman's definition:

    8-bit: 00000000, capable of representing numbers up to 255 (0-255)

    16-bit: 00000000_00000000, capable of representing numbers up to 65535 (0-65535)

    32-bit: 00000000_00000000_00000000_00000000, capable of representing numbers up to 4294967295 (0-4294967295)

    The bit level is how many bits the controller can handle at once.

    The BS2 is a 16 bit microcontroller.

    The usual limitation for the hobbyist is in calculations: if your numbers don't fall within these ranges, then you have a problem. I didn't go into detail about signed (+/-) numbers because I don't want to have to type that much; if you use them, you just split the range in half, half negative and half positive.
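
    A quick sketch in standard C (C99 fixed-width types; nothing Stamp-specific, purely illustrative) of those ranges and the roll-over you get when a result doesn't fit:

    /* Unsigned ranges per width, and what happens one past the top:
       the value "rolls over" back to 0 because there is no extra bit
       to carry into. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t  b = 255;          /* 8-bit max  */
        uint16_t w = 65535;        /* 16-bit max */
        uint32_t l = 4294967295u;  /* 32-bit max */

        b += 1;                    /* wraps to 0 */
        w += 1;                    /* wraps to 0 */
        l += 1;                    /* wraps to 0 */

        printf("%u %u %u\n", (unsigned)b, (unsigned)w, (unsigned)l);
        return 0;                  /* prints: 0 0 0 */
    }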

    Let me know if this doesn't make sense, I will explain further.

    James L

  3. #3


    In addition to the size of the basic arithmetic operations, the "width" of the processor affects the speed and sophistication of the instructions as well. The IBM System/360 series of computers was a good example: the models shared an identical instruction set, but had 8-bit, 16-bit, 32-bit, or 64-bit internal data paths. The wider the data paths (including memory width), the faster the machines ran, and the more expensive and physically larger they got.

    As a practical example with Parallax, the Stamps are actually 8-bit processors (SX or PIC) internally. They run a program (the PBasic interpreter) that "pretends to be" a 16-bit processor: the PBasic machine, which the PBasic compiler translates your Stamp programs into. The Propeller is a true 32-bit processor (actually 8 separate 32-bit processors on one chip) that sometimes runs a program (the Spin interpreter) that "pretends to be" a 32-bit processor for the Spin machine, which the Propeller Tool translates your Spin programs into. The Propeller can also run programs, or pieces of a program, written in its native instruction set (assembly language) for speed or precise timing.
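
    To make that "pretends to be" concrete, here's a toy sketch in C (the opcodes and bytecode layout are invented for illustration; this is not the real PBasic interpreter): a fetch-decode-execute loop on a byte-at-a-time host that presents a 16-bit virtual machine to the program it runs:

    /* Toy interpreter: the host handles one byte at a time, but the
       virtual machine it presents has a 16-bit accumulator.
       Opcodes and program are invented for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT, OP_LOAD16, OP_ADD16, OP_PRINT };

    int main(void)
    {
        uint8_t program[] = {          /* LOAD16 1000; ADD16 2345; PRINT; HALT */
            OP_LOAD16, 0xE8, 0x03,     /* 1000, low byte first */
            OP_ADD16,  0x29, 0x09,     /* 2345 */
            OP_PRINT,
            OP_HALT
        };
        uint16_t acc = 0;              /* the emulated 16-bit register */
        uint8_t  pc  = 0;              /* program counter */

        for (;;) {
            uint8_t op = program[pc++];                 /* fetch */
            switch (op) {                               /* decode + execute */
            case OP_LOAD16:
                acc  = program[pc++];
                acc |= (uint16_t)program[pc++] << 8;
                break;
            case OP_ADD16:
                acc += program[pc++];
                acc += (uint16_t)(program[pc++] << 8);
                break;
            case OP_PRINT:
                printf("%u\n", (unsigned)acc);          /* prints 3345 */
                break;
            default:
                return 0;                               /* OP_HALT */
            }
        }
    }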

  4. #4


    Mike,

    You never stop amazing me with your knowledge. I didn't know the Stamp was an 8-bit micro that "pretends to be" a 16-bit.

    Interesting.

    Thanks for the one new thing for today.

    James Long

  5. #5


    My 2-bits:

    An 8-bit processor processes 8 bits at a time, a 16-bit processor 16, a 32-bit processor 32, ad infinitum... I suspect that carries little meaning by itself; it's not always clear what a bit is worth until you need one.
    First, understand that for this explanation a byte is the number of bits that the processor can process in one instruction (strictly speaking that unit is a "word", but bear with me). An 8-bit processor's byte is 8 bits, like 10110011; a 16-bit processor's byte is like 1100011101110011.

    Let's start simpler... Suppose you had a one-bit processor (a byte would be like: 1). Now, one bit can carry some information: a 1 or a 0 (because bits are tiny little switches that are either on or off). Instructions are also carried to the processor this way, so you could have two instructions, one represented by a 0 and one represented by a 1. A microprocessor works by loading a data byte and an instruction; the instruction tells the processor what to do with the data, perhaps to add it to the next byte it receives, or to subtract it, hold it, or store it. What can we do with 2 instructions (1 or 0)? One must be used to tell the processor to go and get the next byte, so that leaves us only one instruction; let's say we use it to tell the processor to add the current byte to the next byte received. OK, good, we've got a computer that adds, but we can represent only two totals: 0 or 1. What happens when we add 0+0? We get zero and our computer works as we expected; the same with 0+1 and 1+0. We're golden; everything hums along perfectly. What happens when we add 1+1? The answer is not one, it's zero! The processor "rolls over" and starts over because we're out of room. We're stuck: no way to add an instruction telling the computer to put the extra bit someplace, and no bit to hold it.

    Let's build a two-bit processor (a byte would be like: 01). Using the same example as above, we can now add 1+1 and handle the answer: 2. We represent this in bits as 10, one group of 2 and no groups of 1. We can add 2+1; our byte to represent this is 11, one group of 2 and one group of 1. When we add 2+2, we run out of space again, but we now have extra instruction bits as well! We can add an instruction that tells the computer to put the extra bit someplace so we can display it as part of the answer later. We're saved! We can just keep putting our extra bits someplace and let them stack up, because we just tell the computer to go and print them out as part of the answer when we need them... almost. Our problem is that our computer won't count any higher than three, so how do we tell the processor where all our "extra bits" are? Maybe we could add extra bytes someplace to tell our computer where the extra bytes for our answer are. Maybe, but all this going and getting of extra bytes takes time and instructions; we run out of room and everything takes longer, so we build a bigger processor.

    Eventually we get to an eight-bit processor (like the Stamp). With eight bits we can represent 256 different values: 00000000, 00000001, 00000010... 11111111. As we did with our 2-bit processor, we count them from no groups of anything all the way up to one group of 128 and one group of 64 and one group of 32 and one group of 16 and one group of 8... (there's a lot of information on binary counting; use Google). 256 instructions, and adding in chunks of 256, are sufficient for many hobbyist needs. If one of our instructions is "the next number is a color", then we can draw pretty good pictures with 256 colors (like old video games). If one of our instructions is "the next number is a letter", then we can do all the letters of the alphabet, UPPERCASE and lowercase, and have periods and question marks and just about everything. With 256 instructions we can add, subtract, multiply, and divide... that's only 4; we have 252 left! When we need to put bits someplace, we can tell the computer about 256 places where they are... that's pretty good, but we may need more... so we build an even bigger computer with 16 bits in a byte! Now we can draw really good pictures and give the computer even more instructions!

    Maybe you think nobody will ever need this many bits in a byte, but if you think of bits as colors and imagine that children can be happy with only a few colors, then imagine an artist... You see why most hobbyists (like me) are pretty happy with 8 bits and why some people need 16 bits, or more!

    This explanation is WAY oversimplified and, frankly, not strictly true in every aspect, but I sacrificed facts only to clarify the explanation. More bits per byte means more capability, more options, and more possibilities. Why am I happy with 8-bit processors? Because I can buy some brands for about a dollar, and I'm not an artist.
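
    If you want to play with the roll-over from the 1-bit and 2-bit stories above, here's a small C sketch (illustrative only): masking a sum down to n bits reproduces exactly the "starts over" behavior described.

    /* Simulate an n-bit adder by masking the result to n bits. */
    #include <stdio.h>

    static unsigned add_nbit(unsigned a, unsigned b, unsigned n)
    {
        unsigned mask = (1u << n) - 1;  /* n ones: n=1 -> 0x1, n=8 -> 0xFF */
        return (a + b) & mask;
    }

    int main(void)
    {
        printf("1-bit: 1+1   = %u\n", add_nbit(1, 1, 1));    /* 0: rolled over */
        printf("2-bit: 2+1   = %u\n", add_nbit(2, 1, 2));    /* 3 */
        printf("2-bit: 2+2   = %u\n", add_nbit(2, 2, 2));    /* 0: rolled over */
        printf("8-bit: 255+1 = %u\n", add_nbit(255, 1, 8));  /* 0: rolled over */
        return 0;
    }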

  6. #6


    Thank you everyone for your responses. By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller! I can imagine the other way around, but I don't get how a "lower" microcontroller can pretend to be a "higher" controller. I mean, at its core it's still computing at 8 bits, isn't it?

    What about the Basic Stamps that have PICmicros in them? Are those PIC16 or PIC18, and what's the difference between them?

    Thanks again.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    E=mc^2

  7. #7


    The definition I learned is that an "8-bit micro" has an 8-bit wide data bus and also works with 8-bit instructions.
    The 6502, the Z80, 8080, and 8085 are all classic examples of 8-bit micros, though...
    the Z80 can do some 16-bit load, logic, and arithmetic operations.

    Also, the Intel 8088 (used in the first PCs), while it had an 8-bit data bus, worked with 16-bit "wide" instructions internally. It just had to load everything byte by byte.
    (It was an 8086 in disguise. IBM picked the "castrated" version because a narrower data bus meant a simpler and cheaper design...)
    A Zilog Z8000 can do some 64-bit arithmetic, but it is still only a 16-bit processor.

    Where it gets really weird is in some microcontrollers, where the instruction width may be 12 or 14 bits (or whatever suits the designer) so that the entire instruction set fits in one word...
    (The Z80, on the other hand, had such a large set of instructions that they had to use "prefix" bytes to expand the available "command space". All in all, they had 138 different instructions, and with variations hit about 760 or so.)
    Anyway, on microcontrollers where the instruction width differs from the data width, the two also occupy separate memory banks, and we usually count the bits of the DATA side.

    That just leaves the bit-slice processors...
    (But that way lies madness... )

    Speaking of madness...
    the 6502 had something called "zero page addressing", in which the address in the instruction was a single byte. It made for pretty fast code, IF you could squeeze your data into the first 256 bytes of the address space.
    Yes, an addressing mode which only worked in the first 256 bytes of the 65536 bytes of addressable RAM/ROM...

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Don't visit my new website...

  8. #8


    Aaqil Khan,
    When I said "pretend to be", I was referring to the use of an interpreter or emulator. You may know the statement about dogs having fleas and the fleas having fleas "ad infinitum". The same kind of thing holds true in computing. Somewhere at the bottom of it all is basic hardware: there may be an adder and some logic for Boolean arithmetic (and, or, exclusive or, etc.) and shifting. There are usually some registers, including something that indicates where the next instruction is coming from, as well as some logic to interpret parts of the instruction. On the other hand, what the user sees, and what's in the published manual and sales brochure, may have little to do with the functioning of the hardware.

    The current Stamps, for example, at least the BS2p series, use a chip from the SX microcontroller series from Ubicom, which Parallax sells; you can download the manuals from Parallax's website. This is an 8-bit processor (with all 8-bit data paths, an 8-bit arithmetic unit, and 8-bit data memory), although the instruction memory (an electrically erasable, programmable, mostly read-only memory: EEPROM) is 12 bits wide, and some registers, like the program counter and the call/return stack, are 12 bits wide as well. Parallax sells some of these preprogrammed with an emulator for a 16-bit processor (the Stamp interpreter) with many specialized instructions for I/O like bit-serial, frequency generation, and lots more. This emulated instruction set has no resemblance to the native instruction set of the hardware, yet it is what the user sees (and buys) when a Stamp is bought.

    Similarly, the IBM System/360 series of computers from the 1960s and 1970s was an instruction set and a series of compatible models with a wide variety of hardware, from an 8-bit processor to one with 64-bit data paths, all of which actually emulated the System/360 instruction set. The actual native instruction sets (what the hardware really did) were all different and documented only in internal manuals. What the user saw and bought was something that executed the System/360 instructions at a speed that corresponded roughly to how much was paid for the hardware.

    By the way, the PIC family names don't map directly onto instruction width, but the families do differ in it: the baseline PIC parts (like the SX) use 12-bit instruction words, the midrange PIC16 parts use 14-bit instruction words, and the PIC18 family uses 16-bit instruction words. A wider instruction word usually allows for larger program memories and more data registers, and sometimes additional instructions.

  9. #9

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Hi all,
    Is it possible to interface a 16-bit SPI device with an 8-bit microcontroller? (16-bit SPI device in the sense that its registers are 16 bits wide.)

  10. #10

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by Aaqil Khan View Post
    Thank you everyone for your responses. By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller!
    I have a calculator program on the PC that can give 100-digit answers.
    Here, the PC "pretends to be" a roughly 300-bit processor, in that it is processing 300-bit numbers and giving 300-bit results.
    Of course, it does this by splitting them into smaller numbers while it works, and it will not be as fast as a native 32-bit x 32-bit operation.
    Even inside a microcontroller, some math operations can take quite a number of cycles, as the device does the same splitting into smaller numbers, which is why you need to check both the CPU speed in MHz and the cycles per opcode.
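
    A minimal C sketch of that "splitting into smaller numbers" (schoolbook style, purely illustrative; not the actual calculator program): big numbers held as arrays of 8-bit pieces, added piece by piece with a carry, which is exactly the trick an 8-bit CPU uses for numbers wider than its registers.

    /* Add two little-endian big numbers stored as 8-bit "limbs",
       propagating the carry from each limb into the next. */
    #include <stdint.h>
    #include <stdio.h>

    static void big_add(uint8_t *sum, const uint8_t *a, const uint8_t *b, int limbs)
    {
        unsigned carry = 0;
        for (int i = 0; i < limbs; i++) {
            unsigned t = (unsigned)a[i] + b[i] + carry;
            sum[i] = (uint8_t)t;       /* keep the low 8 bits */
            carry  = t >> 8;           /* 0 or 1, into the next limb */
        }
    }

    int main(void)
    {
        uint8_t a[4] = { 0xFF, 0xFF, 0x00, 0x00 };   /* 65535 */
        uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 };   /* 1     */
        uint8_t s[4];
        big_add(s, a, b, 4);
        printf("%02X %02X %02X %02X\n",              /* 00 01 00 00 = 65536 */
               (unsigned)s[3], (unsigned)s[2], (unsigned)s[1], (unsigned)s[0]);
        return 0;
    }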

  11. #11

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by Aaqil Khan View Post
    By the way, I didn't know that an 8-bit microcontroller could "pretend to be" a 16-bit controller! I can imagine the other way around, but I don't get how a "lower" microcontroller can pretend to be a "higher" controller.
    As long as they have enough memory, any proper computer can "pretend to be" any other type of computer. (Such "proper" computers are described as "Turing complete", because the first computer ever described that they might pretend to be was the theoretical Universal Turing Machine.) What you lose in the translation is performance: when an 8-bit micro emulates a 16-bit multiply, which is a single instruction on the 16-bit micro, the 8-bitter has to run a little subroutine that may execute 30 or 40 instructions before producing a result.

    Check this out: http://dmitry.gr/index.php?p=./04.Th...ux%20on%208bit

    That's a hilariously extreme example of someone building a system that boots Linux, something you supposedly "need" fairly hefty hardware for, but he gets it going on a chip with about the same power as the one at the heart of a Basic Stamp. It manages to boot to the bash command line in only 2 hours, and you can log into Ubuntu in 6 hours. It also runs X Windows, but getting that started takes *cough* a lot longer.

    All in all, bit width at the hardware level is mostly about performance. A wider bus to memory transfers more data per cycle; a wider ALU does more math per cycle. But assuming you're willing to do a little code translation or emulator writing, there's no reason an 8-bit CPU can't run code meant for a 64-bit one, or vice versa.
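
    As a concrete sketch of that emulation cost (illustrative; not localroger's actual routine): a 16 x 16 multiply built from four byte-sized partial products, roughly what an 8-bit micro's math library has to grind through for that one 16-bit instruction.

    /* Multiply two 16-bit values using only byte-sized pieces:
       four partial products, shifted and summed. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t mul16_from8(uint16_t a, uint16_t b)
    {
        uint8_t al = a & 0xFF, ah = a >> 8;
        uint8_t bl = b & 0xFF, bh = b >> 8;

        uint32_t ll = (uint32_t)al * bl;   /* low  x low  */
        uint32_t lh = (uint32_t)al * bh;   /* low  x high */
        uint32_t hl = (uint32_t)ah * bl;   /* high x low  */
        uint32_t hh = (uint32_t)ah * bh;   /* high x high */

        return ll + ((lh + hl) << 8) + (hh << 16);
    }

    int main(void)
    {
        printf("%lu\n", (unsigned long)mul16_from8(1234, 5678));  /* 7006652 */
        return 0;
    }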

  12. #12

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    It can be a reference to the address bus width as well. This is the meaning the PC industry has used since the advent of the AMD64 instruction set: even though the AMD64 architecture also brought with it 64-bit-wide data registers, it's the >4GB (2GB) address range that most people care about. After all, the front-side data bus of the x86 processors has been 64 bits wide since the Pentium 1.

    Interestingly, another variation, one that I thought was a bit rude: the 68000, with its 32-bit architecture from the late 1970s, seemed to get classed as a 16-bit processor (in the 1990s at least) just because its main data bus was 16 bits wide. I think that might just have been sour grapes from the PC industry, though.

    If one uses the data registers alone as the definition of processor size, then the 80386 and 80486 were for the most part only ever a 16 bit processor. This is just because they were mostly used in PCs running MS-DOS and Windows 3.x, neither of which used the 32-bit mode.

  13. #13

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by well wisher View Post
    Hi all,
    Is it possible to interface a 16-bit SPI device with an 8-bit microcontroller? (16-bit SPI device in the sense that its registers are 16 bits wide.)
    Lol, I just noticed the dates. Everyone had replied to the OP rather than you.

    Clocking the bits in is no biggie if you have bit-banging access to the hardware. Otherwise, expect problems.

    After that you just have to manage the values as 16-bit words rather than 8-bit words. That might mean keeping the upper and lower bytes as two separate variables or, depending on your language, you may have a 16-bit data type already, or even be able to construct one.
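
    A sketch of that bit-banging in C (the pin functions here are hypothetical stand-ins; on real hardware they would poke your micro's port registers, and in this sketch they just simulate a slave so it runs): clock in 16 bits one at a time and assemble them into a word.

    /* Bit-banged read of one 16-bit SPI word, MSB first.
       set_sck()/read_miso() are HYPOTHETICAL pin accessors; here
       they simulate a slave shifting out the value 0xBEEF. */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t slave_shift_reg = 0xBEEF;   /* pretend slave register */

    static void set_sck(int level) { (void)level; /* would drive the clock pin */ }

    static int read_miso(void)                  /* would sample the data-in pin */
    {
        int bit = (slave_shift_reg >> 15) & 1;
        slave_shift_reg <<= 1;
        return bit;
    }

    static uint16_t spi_read16(void)
    {
        uint16_t value = 0;
        for (int i = 0; i < 16; i++) {
            set_sck(1);              /* clock high: slave presents a bit */
            value = (uint16_t)((value << 1) | (unsigned)read_miso());
            set_sck(0);              /* clock low */
        }
        return value;
    }

    int main(void)
    {
        uint16_t v = spi_read16();
        uint8_t hi = v >> 8, lo = v & 0xFF;      /* split for 8-bit storage */
        printf("0x%04X -> hi 0x%02X, lo 0x%02X\n",
               (unsigned)v, (unsigned)hi, (unsigned)lo);
        return 0;
    }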

  14. #14

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by localroger View Post
    That's fabulous!!

    A couple hours to boot Linux gave me an odd sense of déjà vu.

  15. #15

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    AFAIK the de facto standard is now: 8 bits - byte, 16 bits - word, 32 bits - long, 64 bits - dlong (double long). This is what I see most often in the literature.
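
    In C these names line up with the <stdint.h> fixed-width types; a tiny sketch (the byte/word/long/dlong aliases are just illustrative, matching the naming above, not any standard header):

    /* Illustrative aliases for the de facto size names above,
       built on the standard C99 fixed-width types. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t  byte_t;    /*  8 bits: "byte"  */
    typedef uint16_t word_t;    /* 16 bits: "word"  */
    typedef uint32_t long_t;    /* 32 bits: "long"  */
    typedef uint64_t dlong_t;   /* 64 bits: "dlong" */

    int main(void)
    {
        printf("byte=%zu word=%zu long=%zu dlong=%zu (bytes)\n",
               sizeof(byte_t), sizeof(word_t), sizeof(long_t), sizeof(dlong_t));
        return 0;   /* prints: byte=1 word=2 long=4 dlong=8 (bytes) */
    }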
    In science there is no authority. There is only experiment.

  16. #16

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    evanh,

    If one uses the data registers alone as the definition of processor size, then the 80386 and 80486 were for the most part only ever a 16 bit processor. This is just because they were mostly used in PCs running MS-DOS and Windows 3.x, neither of which used the 32-bit mode.
    No. The 386 and 486 were very much 32-bit machines. At the time I was writing 32-bit programs in C using the wonderful Watcom compiler (which is still available as an open-source project: http://www.openwatcom.org/index.php/Main_Page), and those programs ran under DOS just fine.

    Also, even when using a 16-bit assembler for 16-bit-mode programs, one could just put a prefix byte in front of instructions to extend them to 32-bit operations. That made my Mandelbrot set program really fly.

    The fact that it took a decade for Microsoft to catch up to 32 bits is shameful. The 386 was introduced in 1985.

  17. #17

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    If you need 32 bits, you will understand the difference; the usual applications are extreme numeric precision, color video, or large storage devices.

    There is a large world out there that can easily get by on 8 bits and 20 MHz.

    I have a 64-bit quad-core desktop, and only once has it really been needed: to cross-compile a binary image from source for my Cubieboard, and this was only because the source was written for a 64-bit machine. I normally run a 32-bit OS on it, as there is more software available with fewer bugs.

    The 32-bit Propeller makes writing code a bit easier and faster, as it has 32 I/O lines; having to manage I/O banks requires more planning and consideration.
    Hwang Xian Sheng
    Kaohsiung/Gaoxiung
    Taiwan/Formosa
    R.O.C/Province of China, P.R.C.

    "My comments are independent... and at times just plain wrong. At other times, they just might be helpful. So consider the source."

  18. #18

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by Heater. View Post
    The 386 and 486 were very much 32 bit machines.
    You'll note I said "... for the most part only ever a 16 bit processor."

    Of course they were capable of operating as 32-bit processors, but in typical use they only ever ran as 16-bit chips. All the rest of the babble about being 32-bit processors wasn't much more than marketing hype for the average joe.

    Whereas all versions of the 68k were 32-bit in nature, even in the Sinclair QL, with its 8-bit data bus.

  19. #19

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Quote Originally Posted by Loopy Byteloose View Post
    If you need 32bit, you will understand the difference - usually applications are for extreme precision of numbers, color video or large storage devices.
    There was a time I would have declared 4GB of RAM to be excessively huge and thought nobody would ever want more. Hell, even a 4GB file seemed insanely huge; one would never want the whole thing in main memory all at once. But things change: RAM speeds, with 128/256-bit-wide buses, are now such that massaging such huge volumes of data is a walk in the park. Addressing range beyond the 32-bit limit is suddenly useful.

  20. #20

    Re: 8-Bit, 16-Bit, 32-Bit: What's the difference?

    Heh...

    My workhorse machine has 12GB RAM, and I'm going to double that this year. I run virtual machines that typically will eat up 2GB or so, but the big data hog is just large data sets. CAD models are impressive these days and there is always a call for more RAM. 64 bit computing was adopted early and easily by people doing this work. Simulation is another data hog. Those guys take all the RAM they can get. 64GB machines are common. GM specs a very high end nVidia Quadro graphics engine, 64 bit OS (Linux / Windows) and 24GB just to start.
    Do not taunt Happy Fun Ball! @opengeekorg ---> Be Excellent To One Another

    Parallax colors simplified: http://forums.parallax.com/showthrea...hics_Demo.spin
    PropGCC Mac OS 10.6.8 + https://www.dropbox.com/sh/pf1uulr4b...Xx0wYC?v=1mcis



