localroger and kwinn, the Digital Equipment Corporation PDP-10/TOPS-10 machines had a 36 bit word length and didn't have the concept of a byte as we're used to it. Instead they had bit field operations which were used to access any size byte you cared to define. Actually, come to think of it, I'm pretty sure the System 360 was only 32 bit word addressable as well. You used masks and bit shifts to get down to the byte, which again was a matter of convention.
No, System 360 was byte addressable: 8-bit "octets", 16-bit "half-words", 32-bit "words", 64-bit "double-words". Bits were perversely numbered from 0 at the most-significant end.
So many computers and so many architectures under the bridge. That's probably the machine I was thinking of for the 6 bit bytes and they were used mainly for text.
Now that you have jogged my memory, it was most likely the one that had 9 bit bytes as well. All done by bit field manipulation, I presume. At the time most of the software I worked on was in FORTRAN or one of the more specialized macro implementations like OPL (Our Programming Language), MML (My Macro Language) and others along the same line I have long since forgotten.
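For anyone curious, here is a minimal C sketch of that mask-and-shift style of field access; the 36-bit word is just held in a uint64_t, and the octal constant and the six 6-bit fields are made-up examples, not any particular PDP-10 convention:

    #include <stdint.h>
    #include <stdio.h>

    /* Extract a "byte" of arbitrary width from a 36-bit word held in a uint64_t.
       pos is the bit offset of the field's least-significant bit, width its size. */
    static uint64_t get_field(uint64_t word, unsigned pos, unsigned width)
    {
        uint64_t mask = (width >= 64) ? ~0ULL : ((1ULL << width) - 1);
        return (word >> pos) & mask;
    }

    int main(void)
    {
        uint64_t w36 = 0765432101234ULL & ((1ULL << 36) - 1);  /* a 36-bit value */

        /* Treat it as six 6-bit characters (or four 9-bit bytes, if you prefer). */
        for (int i = 5; i >= 0; i--)
            printf("%02llo ", (unsigned long long)get_field(w36, (unsigned)i * 6, 6));
        printf("\n");
        return 0;
    }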
I feel a bit foolish as I just found out that Blender also supports 2D animation and storyboards - so no need for Gimp.
With 32 bits per pixel and generally 4 GBytes of DRAM or less, it is hard to see the need for 64-bit processing. I am sure there are 'Yes... but's'.
More than 32 bits is required for memories larger than 4 gig, virtual memory implementations, and addressing large databases. The next logical and practical step in size is 64 bits. That should be enough to last for a while. As for a 64 bit Propeller, the biggest advantages I can see are the ability to address more than 512 cog locations, increased math precision, 64 bits of I/O in 1 instruction, and more bits for additional instructions and conditionals.
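Just to put a number on that 4 gig limit, a rough C sketch (nothing Propeller-specific, just the arithmetic):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bytes32 = 1ULL << 32;   /* a flat 32-bit address reaches 2^32 bytes */

        printf("32-bit address space: %llu bytes (%llu GiB)\n",
               (unsigned long long)bytes32, (unsigned long long)(bytes32 >> 30));

        /* a 64-bit address space is 2^32 times larger again */
        printf("64 bits buys another factor of %llu\n",
               (unsigned long long)(1ULL << 32));
        return 0;
    }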
But I have my hands full with 32 bits in a Propeller, and why do we need color or graphic resolution far beyond our ability to see?
Yes, you can get more DRAM - a lot more. But I am certainly not there yet.
For displaying images to eyeballs, studies have shown that the eye cannot distinguish more than 32 shades of grey. For color images, the eye cannot see any difference between two identical images where one is rendered using 32 bits and the other with 16 bits. Not much point in most cases for going beyond what the human eye can perceive.
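For what it's worth, here is a small sketch of what rendering a pixel "with 16 bits" usually means in practice: packing 8-bit-per-channel RGB into the common RGB565 layout (the exact layout varies by hardware; this is just the usual convention, and the values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack 8-bit-per-channel RGB into 16-bit RGB565: 5 bits red, 6 green, 5 blue.
       The low 3 bits of red/blue and the low 2 bits of green are simply discarded. */
    static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    int main(void)
    {
        printf("packed: 0x%04X\n", pack_rgb565(200, 100, 50));
        return 0;
    }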
Since "Long" seems to be in somewhat common use for 32 bits we may as well make it standard usage and save the prefixes for sizes beyond 32 bits. They also roll off the tongue fairly easily.
Hmmm, this talk of nibbles and nybbles has me trying to recall a computer from the late '70s that I seem to recall being named "the nibbler", possibly by National Semiconductor or National Instruments. I recall seeing it in the Digi-Key catalogs back then.
Ring a bell for anyone?
Might provide some fodder for another prop based emulator...
C.W.
There were some 4-bit CPU chips produced around that time, and I have a vague recollection of a 1-bitter as well. IIRC the one-bitter performed its functions serially.
The Intel 4004 was a 4-bit microprocessor - considered to be the first microprocessor.
My beloved Sperry 1100 mainframes had 36 bit words and natively used the Fieldata character set (6 bit). The word was addressable as full (36 bit), half (18 bit), quarter (9 bit - a later addition to support ASCII), and sixths (6 bit)... and we commonly talked in octal, no letters in our numbers!!!
@mindrobots, the 4004 is the only one I can remember the part number for, but there were several others used in some of the equipment I worked on at that time. They were low cost chips intended for calculators, not for controlling instrument mechanical systems and running programs stored in memory chips, so the interface between the chip and the rest of the system ranged from "interesting" through "ingenious" and all the way to "you must be joking". In one case a 40 pin chip was surrounded by 5 or 6 boards with around 40 chips per board.
Since "Long" seems to be in somewhat common use for 32 bits we may as well make it standard usage and save the prefixes for sizes beyond 32 bits. They also roll off the tongue fairly easily.
LOL, good one. Probably better than any "official" designation. Only official one I could think of was HDlong (HexaDecimal long). Of course the next one is pretty obvious.
The HP Saturn CPU, used in many of their 80's-era calculators, was 4-bit.
It would be interesting to have a list of all these chips along with the date they came to market. As I recall it, the 4004 was designed to be the central component of a calculator, but made somewhat general purpose so it could be programmed for use in calculators aimed at different markets. This was how it became the first microprocessor.
Since C-related languages otherwise seem to be so ambiguous about the number of bits in a word, I have forced myself to start using their way out with <stdint.h> (there's a small sketch after the list below).
That way with a proper header file, it's unambiguous as to what size I mean and what value I expect as maximum, with no surprises. Jaded at this point, I HATE surprises.
Needless to say I prefer Ada-like languages where I type exactly what I mean, and perform as little of implicit type conversion as possible (and warn or error on everything else).
uint8_t = generally accepted as a byte (and to me bytes don't have signs unless something down the line sign extends it out into an integer).
int16_t = word sized value that holds an integer that's neither short nor long.
uint16_t = word for my "incorrect" way of thinking.
int32_t = signed integer for my "incorrect" way of thinking. Sometimes a "double integer" or "DINT" if I have to talk to others and distinguish it.
uint32_t = unsigned long for my "incorrect" way of thinking.
INT32_MAX = the most positive value of int32_t
INT32_MIN = the most negative value of int32_t (and wonderful for signaling "invalid")
uint64_t = I don't have a name for it other than "64 bit integer".
uint128_t = I don't care for a name.
uint256_t = "excessive"
uint512_t = probably have grown up kids by this point.
uint1024_t = retired and don't care at this point.
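Since the list above is basically what <stdint.h> provides, here is a minimal sketch of using it together with <inttypes.h> for printing; the variable names are mine, just for illustration:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t  b  = 0xFF;         /* the "byte"                     */
        uint16_t w  = 0xFFFF;       /* the 16-bit "word"              */
        int32_t  l  = INT32_MIN;    /* handy as an "invalid" sentinel */
        uint64_t dl = UINT64_MAX;   /* the nameless 64-bit one        */

        printf("b=%" PRIu8 " w=%" PRIu16 " l=%" PRId32 " dl=%" PRIu64 "\n",
               b, w, l, dl);
        printf("int32_t range: %" PRId32 " .. %" PRId32 "\n", INT32_MIN, INT32_MAX);
        return 0;
    }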
A thought: is a 3-bit unit a tribble? 5 bits a quibble?
I don't know, but a single switching element capable of 3 states is called a "trit" and there have been working computers built with both signed (-1, 0, +1) and unsigned (0, 1, 2) base 3 architectures.
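Out of curiosity, a tiny sketch of how the signed (balanced) flavour works, with each trit being -1, 0 or +1 (printed here as T/0/1); this is just the arithmetic, not tied to any of those machines:

    #include <stdio.h>

    /* Print a non-negative integer in balanced ternary.
       When the remainder is 2 we use digit -1 and carry 1 into the next trit. */
    static void print_balanced_ternary(int n)
    {
        char buf[32];
        int len = 0;
        if (n == 0) { printf("0\n"); return; }
        while (n != 0) {
            int r = n % 3;
            if (r == 2) { buf[len++] = 'T'; n = n / 3 + 1; }  /* digit -1 */
            else        { buf[len++] = (char)('0' + r); n /= 3; }
        }
        while (len--) putchar(buf[len]);
        putchar('\n');
    }

    int main(void)
    {
        print_balanced_ternary(5);   /* prints 1TT: 9 - 3 - 1 = 5 */
        return 0;
    }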
I won't bother reading all the posts to see if one has already answered the question, but it seems many were talking about bit sizes.
I am maybe the least expert here, but here is how I see it:
When we talk about 32- or 64-bit systems, we are not only talking about the physical architecture of the machine, i.e. how wide a chunk of bits can be moved from RAM to the processor, and so on.
This affects the way we do calculations: for big numbers, do we write code that works on four separate byte-wide values or on a single long (4 bytes)?
Then comes the instruction set of the processor, which determines whether these things can be done in one instruction rather than many.
Then comes the compiler your program was compiled with: does it emit fewer instructions on wider chunks of data, or many more elementary instructions?
It's like another language! Example: at a low level, can your program store 4 letters in one memory cell, or does it need to store them in 4? The job is not the same. Depending on the technology, the software works differently and is written differently (see the small sketch below).
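To make the "four letters in one memory cell" example concrete, a minimal C sketch (the byte order within the word is chosen arbitrarily here):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Four letters packed into one 32-bit "cell", first letter in the low byte. */
        uint32_t cell = (uint32_t)'P' | ((uint32_t)'r' << 8)
                      | ((uint32_t)'o' << 16) | ((uint32_t)'p' << 24);

        /* Unpacking them again is one shift and one mask per letter. */
        for (int i = 0; i < 4; i++)
            putchar((int)((cell >> (8 * i)) & 0xFF));
        putchar('\n');
        return 0;
    }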
I suppose some of us have an innate need to be a walking technological dictionary, but I just figure 64 bits is 64 bits and live with it.
In the real world, 2 bits is a quarter, 4 bits is a half dollar, 8 bits is a dollar, and that would make 64 bits the same as $8 USD. That works for me.
The more I study linguistics and the longer I teach English as a second language, the more I just work with communicating and run away from the ego mania of authority or standardization. You know what I mean?
4 bits - Nibble (or Nybble)
8 bits - Byte
16 bits - Word
32 bits - Long
64 bits - Dlong (Double long)
128 bits - Qlong (Quad long)
256 bits - Olong (Octo long)
Since "Long" seems to be in somewhat common use for 32 bits we may as well make it standard usage and save the prefixes for sizes beyond 32 bits. They also roll off the tongue fairly easily.
I propose the following:
If "1" is a bit,
and "4" is a nybble,
and "8" is a byte,
then "2" must be a snak.
Matt
Boy...wish I had thought of that one. OK, modified list:
1 bit - bit
2 bits - snak
4 bits - Nibble (or Nybble)
8 bits - Byte
16 bits - Word
32 bits - Long
64 bits - Dlong (Double long)
128 bits - Qlong (Quad long)
256 bits - Olong (Octo long)
80 bits - Twolong (twin Oolongs)
160 bits - Twotwolong
320 bits - Waytoolong
-Phil
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain&p=1025425&viewfull=1#post1025425
bit/nibble/byte/halfword/word/dword/qword
-Tor
So is 512 bits a Slong? 'Super Long'?
I think 'hexadecaBYTE' is the correct one.
One more for the list Phil...
640 bits - RidicuLong (Also nicknamed "BigMeal" - 80 Bytes)
Ternary also reminds me of negabinary, an old computer arithmetic scheme using base -2.
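Since it comes up so rarely, here is a minimal sketch of how base -2 conversion works (nothing machine-specific, just the arithmetic):

    #include <stdio.h>

    /* Print an integer in negabinary (base -2). The digits are only 0 and 1,
       and no sign is needed because the odd-numbered places are negative. */
    static void print_negabinary(int n)
    {
        char buf[40];
        int len = 0;
        if (n == 0) { printf("0\n"); return; }
        while (n != 0) {
            int r = n % -2;
            n /= -2;
            if (r < 0) { r += 2; n += 1; }   /* force the digit into {0, 1} */
            buf[len++] = (char)('0' + r);
        }
        while (len--) putchar(buf[len]);
        putchar('\n');
    }

    int main(void)
    {
        print_negabinary(6);    /* prints 11010 = 16 - 8 - 2 */
        print_negabinary(-3);   /* prints 1101  = -8 + 4 + 1 */
        return 0;
    }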
What a Tribble does after too much to drink?