We'll start with some general number system theory to put things in context. Without that context, using common computer number systems tends to become a mechanical process that students don't really understand. A little depth of understanding will make you better at your job and prevent costly problems down the road.

Any information can be represented using patterns of two or
more symbols. The English alphabet uses 26. Our number system
uses 10, the digits 0 through 9, because most of us have 10 fingers.
True story: My former barber is an exception with 9, because he was
a bit clumsy with sharp instruments.
Our 10-digit number system is called decimal,
and our 26-letter alphabet would be a
*hexavigesimal* information system, which
has nothing to do with witchcraft. A system with 8 symbols is
called *octal*,
and one with 16 is called *hexadecimal*.
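To make these bases concrete, here is a short Python sketch (the function name `to_base` is my own, not a standard library routine) that renders the same number using 2, 8, 10, or 16 symbols:

```python
def to_base(n, base, digits="0123456789abcdef"):
    """Render a non-negative integer n as a string of symbols in the given base."""
    if n == 0:
        return digits[0]
    out = []
    while n > 0:
        out.append(digits[n % base])  # peel off the least significant symbol
        n //= base
    return "".join(reversed(out))

print(to_base(65, 2))   # binary:      1000001
print(to_base(65, 8))   # octal:       101
print(to_base(65, 16))  # hexadecimal: 41
```

The same quantity, 65, looks different in each system only because a different number of symbols is available.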

Computers use just two symbols, each represented by a different
voltage, because it is easier to design
circuits with just two states. Any information system using
only two symbols is called *binary*.
In modern integrated circuits, these states are commonly
0 volts and 3.3 volts. On paper, we represent these as 0 and 1,
because *sometimes* the patterns represent
numbers.
0V can represent 0 while 3.3V represents 1 (positive logic)
or the other way around (negative logic). It makes no difference
to functionality, though it could affect power consumption if
there tend to be more 0s or more 1s in the device.

In ASCII (American Standard Code for Information Interchange, pronounced "askee") and the ISO (International Organization for Standardization) extensions of ASCII, the letter 'A' is represented as 01000001. THIS IS NOT A NUMBER. However, we can treat it like a binary number and convert it to decimal 65 for convenience. Conversions like this one are introduced in the section called “Binary Fixed Point”.
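As a quick illustration, the following Python sketch shows that the bit pattern 01000001 and the letter 'A' are the same pattern viewed in two different ways:

```python
pattern = 0b01000001             # the ASCII bit pattern for 'A'
print(pattern)                   # interpreted as a binary number: 65
print(chr(pattern))              # interpreted as a character code: A
print(format(ord('A'), '08b'))   # back to the 8-bit pattern: 01000001
```

Whether those eight bits mean "the number 65" or "the letter A" depends entirely on how the program chooses to interpret them.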

Whether or not the 0s and 1s in a pattern represent a number, we
often refer to them as *binary digits*,
or *bits* for short.

What is the minimum number of symbols needed to represent information?

Why does computer hardware use binary rather than decimal?

How are 0s and 1s of binary represented in digital circuits?

Do all values in a computer represent numbers? If not, what is an example of non-numeric data?