Numbers are an abstract idea, a concept that can only be approximated in reality. To illustrate this a little better, take out some paper and write down the answer to 1/3.

Are you done? How could you be? Whatever you wrote down, I could make it better by adding another 3 at the end, or another trillion 3s. How about the biggest number you can think of? Any number you can write down, I can make bigger by adding one more digit.

So you can see that we have to put some limits somewhere. The limits that we put on numbers in a computer correspond to the amount of memory it takes to represent those numbers.

To understand this, we need to know a little bit about computer memory. Computers store information in **bits**. A **bit** is a binary number (1 or 0) that represents the on or off state of the memory. Since a single bit is often too little to use effectively, computer memory is organized into blocks of 8 bits called **bytes**.

An 8-bit byte can represent the numbers 00000000 through 11111111 in binary, which is 0 to 255. Your computer's memory is organized into a big row of bytes.

Each address in that row refers to one byte. A **char** is one byte. However, what do we do when we need a number bigger than 255? We have to use more than one **byte**. A 2-byte number can represent 0 to 65,535 and is called a **short int**.

A 4-byte number (32 bit) can represent 0 to 4,294,967,295 and is called an **int** or **integer**.

An 8-byte number (64 bit) can represent 0 to 18,446,744,073,709,551,615 and is called a **long long int**.

Another way to look at the basic numeric types is as names for sizes of memory.

## What happens when we want negative numbers?

Let’s go back to the **char**, the single byte. A char can represent exactly 256 values, so to get negative numbers we need to give up some of the positive ones. In fact we’ll give up exactly half of them and represent the numbers -128 through 127.

When we want to represent negative numbers we use a **signed** type. **Unsigned** types represent only positive numbers and 0. Any of char, short int, int, or long long int can be signed or unsigned, and each follows the same split-in-half rule.

## What about the decimal point?

Numbers with a decimal point in them are called **floating point** numbers. An entire book could probably be written about floating point numbers, as they can get quite complicated. A future WITH article will cover floating point numbers in more detail.

* The size of each char, int, etc. can vary depending upon the hardware. This article uses common sizes for a 32-bit architecture.

* Teaching Tip: Write down numbers on a whiteboard/paper and measure them with a ruler. Imagine each inch is a block of memory. How many different numbers can you store in one block? How about two or four? What would you name those blocks so you could talk about them with other people? Bonus: how many blocks does it take to write a character?