Why use Hex Values?
SN96
I have searched endlessly and I can't find the answer I'm looking for.

There are several ways to express a number:

BIN
HEX
DEC
ASCII

In most cases it's best to use the notation that makes the most sense in your code: ASCII to represent the letter you want rather than a number standing in for that letter, or binary to show which pins you want set as inputs or outputs. But what about HEX values?

What is the benefit of using a hex number vs. its decimal value? Why use $7D when you can just use 125?

Any help is appreciated.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Mike
Comments
The main reason to use hexadecimal numbers is that they are shorthand for binary. A 2-digit hex number is a direct representation of each nibble of an 8-bit binary number, whereas a decimal number would require a translation that is not so direct. Any number system that you can reliably use is fine, but for most people hex numbers are generally easier to relate to when using bit-oriented interfaces such as those on the BASIC Stamp.
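To make that nibble correspondence concrete, here is a minimal PBASIC sketch (assuming a BS2, where the DIRL register holds the direction bits for pins P0-P7):

  DIRL = %11110000   ' P7-P4 outputs, P3-P0 inputs -- four bits per nibble
  DIRL = $F0         ' the very same value: F = %1111, 0 = %0000

Each hex digit stands for exactly one nibble, so you can translate digit by digit with no arithmetic.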
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Mike
·
For instance, if you have a "bit pattern" like 1000 1001, it's much easier to confirm that's the pattern you want in hex ($89) than it is in decimal (137? Which binary bits are set to get you 137? In hex, or in binary, it's obvious).
Now, 8-bit numbers are easy to see in binary. But 16-bit numbers get a little more difficult, and for 32-bit numbers you really need hex.
NONE of this affects "program efficiency"; the computer doesn't care what format the numbers are in. It does affect how easy it is for the programmer to troubleshoot and verify correctness.
Whether you write down a number using decimal, binary, octal, hexadecimal, EBCDIC, or ASCII, it is still the same bit pattern in storage. The machine doesn't have to convert anything, because there's nothing to convert.
The above is true at execution time. At compile time, the compiler (which for most of us runs on a PC) has to do some conversions, of course -- but at execution time the compiled program (which runs in the Stamp) has no conversions to do. %00110001 and $31 and 49 and "1" are identical in storage. The choice of which way to express it depends only on which one is easiest for the programmer (you).
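Here's a small sketch of that point (assuming a BS2 and its DEBUG window):

  myByte VAR Byte

  myByte = %00110001                ' binary...
  myByte = $31                      ' ...hex...
  myByte = 49                       ' ...decimal...
  myByte = "1"                      ' ...ASCII: all four are the same byte
  DEBUG DEC myByte                  ' prints 49 no matter which line assigned it

The compiler turns each of those literals into the identical byte before the program ever reaches the Stamp.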
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
-- Carl, nn5i@arrl.net
HEX (Base 16) notation is just a method of VIEWING 8-bit binary data in a more convenient form. It IS binary data, just represented in a more human-readable form. For one byte, HEX goes from 00 to FF. In Stamp-speak, it's written thusly: $00 to $FF. That's all there is to it!
Just by way of comparison, OCTAL (Base 8) uses only the digits 0 through 7, so one byte goes from 000 to 377 in octal. (PBASIC has no prefix for octal, so you won't see it on the Stamp.)
In NO case is the data changed in any way, nor are the characteristics of HEX any different from those of the same data VIEWED as binary, since it's just a VIEWING METHOD.
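As a quick sketch of the "viewing method" idea, using the $7D/125 value from the original question (again assuming a BS2 and the DEBUG formatters):

  myByte VAR Byte

  myByte = $7D                      ' exactly the same as writing 125
  DEBUG DEC myByte, CR              ' shows 125
  DEBUG HEX2 myByte, CR             ' shows 7D
  DEBUG BIN8 myByte, CR             ' shows 01111101

One byte in storage, three ways of looking at it.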
Regards,
Bruce Bates
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
When all else fails, try inserting a new battery.
For me, DEC is much better as a unit of measure than hex or binary. For example, for how fast or how hot something is, DEC is much clearer to me.
I have learned a lot but still have a long way to go. I still don't know when to use HEX vs. binary or DEC. For me, binary is great when thinking in terms of switches (on or off), and DEC for raw values (such as a unit of measure), but HEX seems like a binary "label" or a DEC "label". I guess I don't understand its application.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Mike
·
And yes, for "Quantities" like how fast or how hot something is, humans read Decimal much more easily than Hex. That's why Hex is only used for bit-patterns.
0000 $0
0001 $1
0010 $2
0011 $3
0100 $4
0101 $5
0110 $6
0111 $7
1000 $8
1001 $9
1010 $A
1011 $B
1100 $C (See? $C is easy -> %1100 )
1101 $D
1110 $E
1111 $F
So, if you have a 16-bit number, $FFCF is MUCH easier (and shorter) to read than
%1111 1111 1100 1111 -- especially if you're not allowed those "spaces", so it becomes
%1111111111001111 -- Which is REALLY hard to verify by just glancing at it.
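A sketch of that same 16-bit comparison (again on a BS2):

  myWord VAR Word

  myWord = $FFCF
  DEBUG BIN16 myWord, CR            ' 1111111111001111 -- try checking that by eye
  DEBUG HEX4 myWord, CR             ' FFCF -- four digits, one per nibble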
This was very helpful.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Mike