why 64K, not 65K?
TC
Posts: 1,019
Hello all,
I'm betting some of you are probably getting tired of me asking questions; I'm sorry. There is so much I want to know, and Google does not have the answers in a way I can understand.
I was looking at Atmel's AT28C64B, just trying to get a better understanding of logic, and I noticed something. There are more than 64,000 bits of space, so why does the datasheet say 64K (8K x 8)?
For the chip there are 13 address lines (A0-A12).
So if I selected every address from $0 to $1FFF, that would be 8,192 address locations; at 8 bits each, that would be 65,536 bits. Why are chips marketed low? Why not say it is a 65K EEPROM?
This question is not just for this EEPROM, I have seen it on a lot of other things.
Thanks
TC
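For reference, here is the arithmetic from the question written out as a small C sketch (purely illustrative; the 13 address lines and 8 data bits are just the figures quoted for the AT28C64B above):

```c
#include <stdio.h>

int main(void)
{
    /* 13 address lines (A0-A12) give 2^13 distinct addresses */
    unsigned long addresses = 1UL << 13;      /* 8192 */
    unsigned long bits      = addresses * 8;  /* 8 data bits per address */

    printf("%lu addresses x 8 bits = %lu bits\n", addresses, bits);
    /* prints: 8192 addresses x 8 bits = 65536 bits */
    return 0;
}
```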
Comments
I look forward to reading the answers to your (appropriate) question.
Erlend
Just don't try to figure out what 1 hByte (hecto-) or 1 dByte (deci-) would be.
Erlend
SI tried to resolve the mess by introducing 'KiB' for 1024 bytes, 'MiB' for 1024 KiB, and 'GiB' for 1024 MiB. This works reasonably well. But for RAM and ROM and similar tech there is really only one way to count bytes.. by 2^n. So 64KB (not kB which is just plain wrong any which way, for RAM) still unambiguously means 65536 bytes. Or 64Kb, which means 65536 bits.
-Tor
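A two-line C check of the distinction Tor describes (illustrative only; the point is just that the decimal and binary readings of "64K" differ by 1,536 bytes):

```c
#include <stdio.h>

int main(void)
{
    /* The two possible readings of "64K bytes" */
    printf("64 kB  (decimal, 1000) = %d bytes\n", 64 * 1000);  /* 64000 */
    printf("64 KiB (binary,  1024) = %d bytes\n", 64 * 1024);  /* 65536 */
    return 0;
}
```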
Total confusion..
I'm lost already, so I'm not going to be figuring them out. Not just yet.
So from what I am understanding, SI ran out of ways to designate the value.
Should I do what mindrobots is having me consider, where 1K is equal to 1,024? So if I had 128K, times 1,024, there would be 131,072 bytes?
I never knew that. I always thought they were the same thing.
Lots of people use Kb or kb when they should have used KB, or even KiB (same with GB, Gb, gb). Sometimes there's no ambiguity, as when talking about disk capacity - it's always about bytes. But for bandwidth, or any kind of serial transmission, or anywhere bits are potentially involved, e.g. RAM or EEPROM chips, it's a very good idea to use the correct terminology.

Say someone reports that the measured transfer rate is 50Mb/s.. sounds good, but is it megabytes or megabits? On a gigabit network 50 megabytes per second is a typical transfer rate you get in practice. But the person could have been talking about the internet connection.. which could well be 50 Mbit/s, and it's not unreasonable to actually get 50 Mbit/s on a corporate internet connection. That would be approximately 6MB/s.

So it is important to always be clear: 50MB/s means 50 megabytes per second, 50Mb/s means 50 megabits per second.
-Tor
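The factor of eight Tor mentions, as a throwaway C sketch (the 50 is just the example figure from his post; real links will lose a little more to protocol overhead):

```c
#include <stdio.h>

int main(void)
{
    double megabits_per_s  = 50.0;   /* the example figure from the post */
    double megabytes_per_s = 50.0;

    printf("50 Mb/s = %.2f MB/s\n", megabits_per_s  / 8.0);  /* 6.25 MB/s */
    printf("50 MB/s = %.0f Mb/s\n", megabytes_per_s * 8.0);  /* 400 Mb/s  */
    return 0;
}
```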
10x0 = 0
10x1 = 10
10x10 = 100
16x0 = 0
16x1 = 0x10 (16)
16x16 = 0x100 (256)
16x16x4 = 0x400 (1024)
16x16x16 = 0x1000 (4096)
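If it helps to see the machine confirm that table, here's a tiny C sketch printing the same values in decimal and hex:

```c
#include <stdio.h>

int main(void)
{
    /* The same values as the table above, in decimal and hex */
    unsigned int v[] = { 16, 16 * 16, 16 * 16 * 4, 16 * 16 * 16 };

    for (int i = 0; i < 4; i++)
        printf("%4u = 0x%X\n", v[i], v[i]);
    /* 16 = 0x10, 256 = 0x100, 1024 = 0x400, 4096 = 0x1000 */
    return 0;
}
```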
If I remember correctly 1024 was chosen as K because 1024 is close to 1000. A marketing thing...
I'm going to have to make it second nature.
Ah, now I get it.
It gets confusing when such usage leaks out into the world of disk drives and such that are purchased by people who have no clue about binary, and the marketeers can have their fun taking advantage of that confusion.
-Tor
Begin to use them like you do powers of 10. These numbers are core to computing. You will encounter them everywhere.
0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
Etc.
Also learn about the Hexadecimal, Octal, and Decimal numbering systems. (Count past 9 and see what happens. See how that relates to the 1's and 0's.)
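One quick way to do that counting exercise is to let printf show the same number in each base (just a sketch; binary is left out because standard C's printf has no binary conversion before C23):

```c
#include <stdio.h>

int main(void)
{
    /* Count past 9 and watch what each base does */
    for (unsigned int n = 0; n <= 17; n++)
        printf("dec %2u   hex %2X   oct %2o\n", n, n, n);
    return 0;
}
```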
Reading about the history of computers, or a book on computing history, can make these things much more understandable!
When we connect RAM to other devices, we use address lines. Each added line doubles the number of addresses, so n lines reach 2^n locations. A power of two is optimal in that every combination of the address lines actually addresses some RAM. A kilobyte (1024) is simply the power of two nearest to 1000. Using 1000 leaves addresses on the table, gives odd page sizes, and is in general difficult to work with given how easy hex is.
And since we would still need 16 address lines to get at 64,000 bytes of RAM, why not just use all the addresses up through 65535? And there you go. Powers of 2 are used because address lines are used and we want to completely use them. No point in doing it otherwise. Just adds time and complexity.
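A small sketch of that point, assuming nothing beyond standard C: every extra address line doubles what you can reach, so the natural sizes are always powers of two.

```c
#include <stdio.h>

int main(void)
{
    /* n address lines reach exactly 2^n locations - none wasted */
    for (unsigned int lines = 10; lines <= 16; lines++)
        printf("%2u address lines -> %6lu locations\n",
               lines, 1UL << lines);
    /* 13 lines -> 8192 (the AT28C64B), 16 lines -> 65536 (64K) */
    return 0;
}
```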
64K of RAM runs from $0000 to $FFFF. In base 16, 64K is a nice, round, easy number. That's the powers of two thing I mentioned earlier.
65536 is messy because it's written in base 10, and we don't do RAM in base 10; we do it in powers of 2. So a page of RAM, for example, is 256 bytes ($00-$FF). Simple in base 16, messy in base 10. One byte = two hex digits! $00-$FF, and one byte indexes a page perfectly too.
64K of ram breaks down nicely in base 16. You pick your page size, and then the number system tells you all you want to know. Say we pick a 1Kbyte page size, which is 1024 base 10. But it's a really nice $400 in hex, base 16. Or, as programmers think, $000-$3FF, because we count from zero, which is the very first address.
$10000 / $400 = $40, or 64 in base 10, which again is 64K of memory, where a "K" is 1024 bytes, or a nice power of 2 page size that works in round numbers with everything else.
And those round numbers are round in base 16. Base 10 has nothing to do with any of it really.
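The same page arithmetic as a quick C sketch, using the sizes from the explanation above:

```c
#include <stdio.h>

int main(void)
{
    unsigned long total = 0x10000;  /* 64K address space, $0000-$FFFF */
    unsigned long page  = 0x400;    /* 1 KByte page,      $000-$3FF   */

    printf("$%lX / $%lX = $%lX pages (%lu in base 10)\n",
           total, page, total / page, total / page);
    /* prints: $10000 / $400 = $40 pages (64 in base 10) */
    return 0;
}
```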
There they used octal representation in the source... supposedly because 'octal was closer to base 10 than hexadecimal'...
That was true, too... With the numbers 0 to 7...
Octal fits nicely into 3 bits, but not so nicely into the 8 bits of a byte...
and it is even worse when you use 16-bit words.
Hex and the 1024 'kilo' are just the most practical choices given the chosen (8/16/32/64-bit) computer architectures.
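A couple of printf lines make that awkward fit visible (just a sketch): the top octal digit of a byte only carries two bits, and a 16-bit word needs six octal digits versus four hex digits.

```c
#include <stdio.h>

int main(void)
{
    unsigned char  b = 0xFF;    /* one 8-bit byte  */
    unsigned short w = 0xFFFF;  /* one 16-bit word */

    printf("byte 0x%02X = octal %03o\n", b, b);   /* 0xFF   = 377    */
    printf("word 0x%04X = octal %06o\n", w, w);   /* 0xFFFF = 177777 */
    /* 3-bit octal digits straddle the byte boundary; 4-bit hex digits do not */
    return 0;
}
```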
On the Sperry UNIVAC 1100 series mainframes, we used octal all the time since we had a 36-bit word, which divided up by three much better than it did into 4-bit nibbles or bytes. Maybe each octal digit should be referred to as a tribble? We also used 6-bit Fieldata for character representation, which let you pack 6 characters per word. When the hardware did get ASCII support, it was 9-bit ASCII, since then you could break the 36-bit word into quarters.
In the end, bits are bits and it was all binary and K was 1024 still.
They also tried out different representations; some used decimal, like Babbage's designs or the IBM 7010.
Then there is the weird Russian Setun, which did not use binary at all; it used tri-level logic, which brings us to "trits" and "trytes".
So fascinating, in fact, that there is a whole long Wikipedia page just on that topic, listing word lengths for just about every machine ever. http://en.wikipedia.org/wiki/Word_(computer_architecture)
Octal still lives on to plague us. Writing "010" in C will get you eight and not ten. The leading zero indicates octal representation!
This error inducing bizarreness is carried forward into much newer languages like JavaScript.
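For anyone who hasn't been bitten by this yet, a tiny demonstration (standard C, nothing exotic):

```c
#include <stdio.h>

int main(void)
{
    int a = 10;    /* decimal ten   */
    int b = 010;   /* octal: eight! */
    int c = 0x10;  /* hex: sixteen  */

    printf("10 = %d, 010 = %d, 0x10 = %d\n", a, b, c);
    /* prints: 10 = 10, 010 = 8, 0x10 = 16 */
    return 0;
}
```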
When octal was used for 8- and 16-bit computers where hex is a better match (try it - you'll see) I suspect that was simply because, compared to hex, it's quicker to learn to translate from octal to binary in your mind. You only need to map numbers up to 7, and 3 bits, unlike hex where you go all the way to F, and 4 bits. Octal disappeared from common use with the keyed front panels, basically.. except where it lived on simply because the same operating system continued to be used, although I'm familiar with only one such: The Norsk Data SINTRAN III OS.
-Tor
It has been said a lot that editing Wikipedia has become quite difficult; I don't believe it can be that bad, though.
Until a few years back computing always used decimalised base-two for measuring memory capacity, including hard drive storage. The SI letters were just borrowed as a convenience for human readability; all the engineering standards bodies involved (ISO didn't have a definition then) had them as base-two for computing. The only examples of the non-standard base-ten usage were in marketing labels on HDDs and the like. That didn't matter, as everyone knew the labelling scheme.
Since the more recent involvement of ISO there has been a concerted effort to convince the computing world that base-ten measuring is useful. However, it's a bit like trying to decimalise the calendar: it isn't likely to be very effective, because computers are base-two machines.
Well, that would screw me over; I use a leading "0" for a lot of things just to keep columns lined up, which makes it easier to debug. E.g.:
Base 10 numbers
001
005
010
050
100
Compared to,
1
5
10
50
100
Just easier for me to read, and to see problems.
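In C, at least, the trap is only with literals written in the source code, not with how the output is formatted; a small sketch of the difference:

```c
#include <stdio.h>

int main(void)
{
    int values[] = { 1, 5, 10, 50, 100 };

    /* Zero-padding the printed output is harmless... */
    for (int i = 0; i < 5; i++)
        printf("%03d\n", values[i]);   /* 001 005 010 050 100 */

    /* ...but a leading zero on a literal in the source changes its value */
    int oops = 050;                    /* octal fifty = decimal 40 */
    printf("050 as a C literal is %d\n", oops);
    return 0;
}
```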
Used casually they are close enough to the same as to not be worth quibbling about.
When you are buying that memory upgrade for your PC telling the guy in the shop you want 64K is just fine. You even get a little bonus of 1536 bytes!
Of course in an engineering sense you want to be precise but then you know what you are doing anyway.
And so it goes with mega, giga, tera. By which time you are getting 99,511,627,776 bonus bytes for each terabyte, almost 100 free gigabytes!
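Those "bonus" figures are easy to reproduce; here's a throwaway C sketch comparing each binary size with its decimal namesake:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long k64  = 1ULL << 16;   /* "64K" = 65,536 bytes              */
    unsigned long long tera = 1ULL << 40;   /* binary "T" = 1,099,511,627,776    */

    printf("64K bonus: %llu bytes\n", k64  - 64000ULL);             /* 1,536          */
    printf("1T  bonus: %llu bytes\n", tera - 1000000000000ULL);     /* 99,511,627,776 */
    return 0;
}
```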
The ISO/IEC standards are just horrible:
Decimal (SI):

  1000^1   k    kilo
  1000^2   M    mega
  1000^3   G    giga
  1000^4   T    tera
  1000^5   P    peta
  1000^6   E    exa
  1000^7   Z    zetta
  1000^8   Y    yotta

Binary (customary / IEC):

  1024^1   K    kilo     Ki   kibi
  1024^2   M    mega     Mi   mebi
  1024^3   G    giga     Gi   gibi
  1024^4   -    -        Ti   tebi
  1024^5   -    -        Pi   pebi
  1024^6   -    -        Ei   exbi
  1024^7   -    -        Zi   zebi
  1024^8   -    -        Yi   yobi
I mean, really, "kibi", "mebi", "gibi", who are they trying to kid? Bletch.
Yep, I like to do that sort of thing as well. Don't do it. Hence the "...plague... error inducing bizarreness...".
I really wish the C/C++ standards could kill off gibberish like that and that people like the JavaScript guys would not follow along with it.
I am wondering about something: could the ISO/IEC standards be changed? I understand that it would be nearly impossible to do, and I don't think anyone would actually consider changing it. I'm just asking in theory: could it be changed, and if so, what would be changed?
I'm so glad I don't have to worry about it yet, since I only know PBASIC and Spin. But that will probably change when I decide to go back to school.
Of course much of modern documentation just ignores the new labelling push and sticks with the original base-two scaling and labelling, i.e. k = 1024.