Electronic memory is categorized into two major types:
RAM, or Random Access Memory, is both readable and writable. The acronym RAM reflects the fact that any location in RAM can be accessed in about the same small amount of time, as opposed to a tape device, which can only be accessed sequentially: to read something from the middle of a tape, we must read through everything before it in order to find it. RAM is a poor acronym, however, because it is generally used to refer to read-write memory as opposed to read-only memory, which is also randomly accessible. RWM would be a better initialism than RAM.
RAM is volatile, which means it requires power in order to retain its contents. The term volatile is borrowed from chemistry, where it refers to something that evaporates.
Computer programs and data are normally stored in disk files when they are not in use. When a program is run (executed), the program and some of the data it manipulates are loaded from disk into RAM, which is about 1,000,000 times faster than disk. This greatly improves speed when the same data is accessed repeatedly.
ROM, or Read-Only Memory, is not generally writable. The original ROMs were set to a specific content at the factory and could never be changed again.
Today, we use EEPROM, or electrically erasable programmable ROM, such as Flash memory. EEPROMs are writable, but not as easily as RAM: there are special procedures for altering the contents of an EEPROM. The important feature of ROM and EEPROM is that they are non-volatile, so they retain their contents even when the power is cut.
Non-volatile memory such as EEPROM is used to store firmware, which is essentially software that stays in memory when the power is off. Firmware makes it possible for computers to start when the power is turned on (a cold boot), and allows small and embedded devices, which are often powered down, to function. The boot sequence cannot be started from a program on a disk, since reading a program from disk requires a program! Hence, there must be a minimal amount of program code already in memory when the power comes on to start the boot sequence. In a personal computer, the core firmware, sometimes called the BIOS (Basic Input/Output System or Built-In Operating System), initializes the hardware and loads the first part of the operating system from disk into RAM. From there, the operating system takes over.
Memory is a one-dimensional array of storage cells. In virtually all computers today, each cell holds one byte. Memory cells are selected using an integer index called the memory address. Memory addresses begin at zero.
Table 12.1. Example Memory Map
|Address||Contents|
|0||01001010 (decimal 74, ASCII character 'J', or anything else!)|
|Memory size - 1||10100110|
Each 8-bit cell can hold a character such as 'a' or '?', an integer between 0 and 255, or part of a larger piece of information such as a 64-bit floating point number, which would occupy eight consecutive cells.
Memory is actually multilayered in what we call the memory hierarchy, depicted in Table 12.2, “The Memory Hierarchy”.
Table 12.2. The Memory Hierarchy
|Name||Technology||Typical size||Access time|
|Registers||Static RAM||256 bytes (32 words)||1 clock cycle|
|Level 1 cache||Static RAM||4 MiB||1 to a few clock cycles|
|Level 2 cache||Static RAM||16 MiB||A few clock cycles|
|Level 3 cache||Static RAM||256 MiB||Several clock cycles|
|Main memory||Dynamic RAM||16 GiB||Dozens of clock cycles|
|Solid State Drive||Flash memory||1 terabyte||Tens of microseconds|
|Magnetic disk||Platters and moving heads||4 terabytes||A few milliseconds|
|Magnetic tape||Reel to reel tape||Many terabytes||Seconds to hours|
Cache, from the French word for "hidden", is a small, fast memory where the hardware places copies of data read from larger, slower memory levels. Since programs tend to access the same small sections of memory repeatedly for a while (due to program loops), the CPU can usually get the data it needs from the cache. The cache is "hidden" in the sense that programs are unaware of it: they ask for data at a DRAM address, and much of the time the hardware delivers the cached copy far more quickly.
Virtual memory uses disk to extend the amount of RAM that is apparently available to programs. When a computer using virtual memory runs out of RAM, it swaps blocks of data from RAM out to a special area of disk known as swap space.
Since disk is typically 1,000 to 1,000,000 times slower than RAM, swapping is very expensive. A single swap to or from disk usually takes only a few milliseconds, but this is far longer than RAM access, which is on the order of nanoseconds, so if a program causes many swaps, it can become unbearably slow. Swap space is therefore useful mainly for inactive programs such as word processors, which spend most of their time waiting for user input. Parts of the program or data that are not actively in use can be swapped out to disk to make room for more active programs. If it takes a fraction of a second to swap something back into RAM when the user presses a key or clicks an icon, the user won't generally notice.
For very active programs, swap is of little use. Programs that actively use more memory than the system has available as RAM may spend most of their time waiting for swap operations. When this happens, the system is said to be thrashing, like someone struggling to stay afloat, but not moving forward through the water. Therefore, it is important for computationally intensive programs to use as little memory as possible.
So, memory, from a program's perspective, consists of everything from level 1 cache to swap space. The hardware stores as much as it can in the level 1 cache. When it is full, it must use the level 2 cache. And so on all the way down to swap space. Hence, the less memory a program uses, the faster memory access will be. If you can reduce memory use to the point where the machine code and data all fit into the level 1 cache, the program will run significantly faster than the same program frequently accessing DRAM.