Bits are great because they're a very simple representation that can be used for many different types of data. But to represent anything more complex, you need to combine bits together. On a modern computer, you almost never deal with individual bits; it's far too inefficient to access data at the level of single bits. Memory is always addressed in larger chunks.

One very commonly used chunk of memory is the byte. A byte is eight bits, and it's the standard measure of memory: we measure files in terms of kilobytes, megabytes, gigabytes, and terabytes. In fact, even a single byte is too small to access directly on a modern computer. On a typical modern machine, the smallest element you can access is 64 bits, or eight bytes. That means if we want a variable to represent just true or false, we still have to use 64 bits to store it. That sounds very inefficient, but accessing data in 64-bit chunks actually speeds up the whole computer, so it does turn out to be faster. Having said that, a byte is still the standard measure, and it's easier to get our heads around, so let's think about bytes.

If you represent each bit as a one or a zero, a byte looks like a row of eight digits, such as 01000111. A byte can represent 256 different patterns of ones and zeros, called bit patterns, and each pattern can represent something different. For example, the pattern 01000001 represents the letter A, and 01000111 represents the letter G. But any pattern can represent many different types of data: that same pattern, 01000111, can also represent the number 71, or a dark gray.

Let's start by looking at numbers. Numbers are really useful because you can use them to represent many other things; in fact, I'll talk about representing everything else in terms of numbers. Some things, like letters, are represented in a way that you could think about either in terms of bit patterns or in terms of numbers, but I just find numbers easier to read than bit patterns. This is an example of using one abstraction, bit patterns, to build a higher-level one, numbers, that's easy to work with (for humans, at least) and can be used to build other abstractions such as letters, images, and sounds.

So how do we represent numbers? Let's start by looking at how we represent numbers in the decimal system, using the number 3,168 as an example. The digit on the far right, eight, is multiplied by one. The next digit, six, is multiplied by 10. The one is multiplied by 10 times 10, or 100. The digit on the left, three, is multiplied by 10 times 10 times 10, or 1,000. So each place in a number represents a value ten times larger than the place to its right.

Binary numbers work the same way, except that instead of multiplying by ten, we multiply by two. The rightmost digit is multiplied by one, the next by two, then by two times two (four), then by two times two times two (eight). So the binary number 1101 is, starting from the right, one times one, plus zero times two, plus one times four, plus one times eight. That makes one plus zero plus four plus eight, which is the decimal number 13. So you can use binary notation to represent decimal numbers as a pattern of bits.

I won't go into a lot of detail about binary, since you'll learn about it in other courses. The important thing is that you can represent numbers as bits. A single byte can represent numbers from 0 up to 255, but most computers represent numbers as either 32 or 64 bits, which can count up to over four billion or over 18 quintillion, respectively. What I've just described only covers whole positive numbers, but binary can also be used to represent negative or fractional numbers.
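To make that place-value arithmetic concrete, here's a minimal sketch in Python (my own illustration, not from the lecture; the function name and the digit-string input are just choices for this example):

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a string of binary digits, e.g. "1101", to its decimal value."""
    value = 0
    place = 1                 # the rightmost place is worth one
    for digit in reversed(bits):
        value += int(digit) * place
        place *= 2            # each place is worth twice the one to its right
    return value

print(binary_to_decimal("1101"))  # 1 + 0 + 4 + 8 = 13
```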
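The size limits mentioned above follow directly from counting patterns: n bits give 2 to the power n patterns, so the largest unsigned whole number is 2 to the power n, minus one. A quick check, again just a sketch:

```python
# Largest unsigned whole number for each common width.
for n in (8, 32, 64):
    print(n, "bits can count up to", 2**n - 1)
# 8  bits can count up to 255
# 32 bits can count up to 4294967295           (over four billion)
# 64 bits can count up to 18446744073709551615 (over 18 quintillion)
```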
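And to see how one bit pattern can stand for several kinds of data, here's the byte from earlier read three different ways (a sketch; treating the gray as a grayscale intensity on a 0-to-255 scale is my assumption):

```python
pattern = 0b01000111   # the bit pattern 01000111

print(pattern)         # read as a number: 71
print(chr(pattern))    # read as an ASCII character: G
print((pattern, pattern, pattern))  # read as an RGB color: (71, 71, 71), a dark gray
```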
You can read more about negative and fractional representations in the textbook if you're interested. Numbers are the fundamental building block of everything on a computer, so they'll also be your building block for understanding other data representations.