Created Wednesday 06 May 2020
Memory (Chapter 6)
Today we'll be covering memory.
There are different levels of memory. It's not all just one giant monolith.
- Cache Memory
- Virtual Memory
- Memory Segmentation
- Paging
- Address Translation
Memory lies at the heart of the stored-program computer.
We'll be focusing on memory organization.
There are two kinds of main memory:
- RAM (random access memory)
- ROM (read only memory)
There are two types of RAM:
- dynamic RAM (DRAM)
- static RAM (SRAM)
DRAM consists of capacitors that slowly leak their charge over time. Thus, they must be refreshed every few milliseconds to prevent data loss.
DRAM is 'cheap' memory owing to its simple design.
SRAM consists of circuits similar to the D flip-flop that we studied in Chapter 3.
SRAM is very fast memory, and it doesn't need to be refreshed like DRAM does. It is used to build cache memory, which we will discuss in detail later.
ROM does not need to be refreshed, either. In fact, it needs very little charge to retain its memory.
ROM is used to store permanent or semi-permanent data that persists even while the system is turned off.
Generally speaking, faster memory is more expensive than slower memory.
To provide the best performance at the lowest cost, memory is organized in a hierarchical fashion.
Small, fast storage elements are kept in the CPU; larger, slower main memory is accessed through the data bus.
Large, (almost) permanent storage in the form of disk and tape drives is still further from the CPU.
We are most interested in the memory hierarchy that involves registers, cache, main memory, and virtual memory.
Registers are storage locations available on the processor itself.
Virtual memory is typically implemented using a hard drive; it extends the address space from RAM to the hard drive.
Virtual memory provides more space; cache memory provides speed.
To access a particular piece of data, the CPU first sends a request to its nearest memory, usually cache.
If the data is not in cache, then main memory is queried. If the data is not in main memory, then the request goes to disk.
Once the data is located, it and a number of its nearby data elements are fetched into cache memory.
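As a sketch of that lookup order (Python, purely illustrative; the names read, cache, main_memory, and disk are made up for this example, and each level is modeled as a plain dict):

    # Toy model of the lookup cascade described above.
    # In real hardware a whole block (not a single element) moves between levels.
    def read(address, cache, main_memory, disk):
        if address in cache:                  # found in cache
            return cache[address]
        if address in main_memory:            # cache miss; found in main memory
            value = main_memory[address]
        else:                                 # miss again; go to disk
            value = disk[address]
            main_memory[address] = value      # bring the data into main memory
        cache[address] = value                # copy it into cache on the way back
        return value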
This leads us to some definitions:
- hit: when data is found at a given memory level.
- miss: when it is not found.
- hit rate: the percentage of time data is found at a given memory level.
- miss rate: the percentage of time it is not.
  miss rate = 1 - hit rate
- hit time: the time required to access data at a given memory level.
- miss penalty: the time required to process a miss, including the time it takes to replace a block of memory plus the time it takes to deliver the data to the processor.
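These quantities combine into an effective access time (EAT); the notes don't state it, but a standard formulation is:
EAT = hit rate × hit time + miss rate × miss penalty
For example, with a 95% hit rate, a 10 ns cache hit time, and a 200 ns miss penalty: EAT = 0.95 × 10 ns + 0.05 × 200 ns = 19.5 ns.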
An entire block of data is copied after a hit because the principle of locality tells us that once a byte is accessed, it's likely that a nearby data element will be needed soon.
There are three forms of locality (see the sketch after this list):
- Temporal locality: Recently-accessed data elements tend to be accessed again.
- Spatial locality: Accesses tend to cluster; nearby addresses tend to be used together.
- Sequential locality: Instructions tend to be accessed sequentially.
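A small illustration (Python; the loop, not the numbers, is the point): an ordinary summation loop exhibits all three forms at once.

    data = list(range(1000))
    total = 0                    # 'total' is touched every iteration: temporal locality
    for i in range(len(data)):   # the body runs instruction after instruction: sequential locality
        total += data[i]         # consecutive elements of 'data': spatial locality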
Cache Memory
The purpose of cache memory is to speed up accesses by storing recently used data closer to the CPU, instead of storing it in main memory.
Although cache is much smaller than main memory, its access time is a fraction of that of main memory.
Unlike main memory, which is accessed by address, cache is typically accessed by content; hence, it is often called content addressable memory.
Because of this, a single large cache memory isn't always desirable — it takes longer to search.
The simplest cache mapping scheme is direct mapped cache.
In a direct mapped cache consisting of N blocks of cache, block X of main memory maps to cache block Y = X mod N.
Thus, if we have 10 blocks of cache, block 7 of cache may hold blocks 7, 17, 27, 37, ... of main memory.
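In code, the mapping is just a modulus (a minimal sketch; the function name is made up):

    def cache_block_for(memory_block, num_cache_blocks):
        # Direct mapping: block X of main memory goes to cache block X mod N.
        return memory_block % num_cache_blocks

    # With 10 cache blocks, memory blocks 7, 17, 27, 37 all land in cache block 7:
    print([b for b in range(40) if cache_block_for(b, 10) == 7])   # [7, 17, 27, 37]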
To perform direct mapping, the binary main memory address is partitioned into the fields shown below:
- The offset field uniquely identifies an address within a specific block.
- The block field selects a unique block of cache.
- The tag field is whatever is left over.
| Tag | Block | Offset |
← Bits in main memory address →
The sizes of these fields are determined by characteristics of both memory and cache.
Consider a byte-addressable main memory consisting of 4 blocks, and a cache with 2 blocks, where each block is 4 bytes.
This means Blocks 0 and 2 of main memory map to Block 0 of cache, and Blocks 1 and 3 of main memory map to Block 1 of cache.
Using the tag, block, and offset fields, we can see how main memory maps to cache as follows:
First, we need to determine the address format for mapping.
Each block is 4 bytes, so the offset field must contain 2 bits; there are 2 blocks in cache, so the block field must contain 1 bit; this leaves 1 bit for the tag (a main memory address has 4 bits because there are a total of 2^4 = 16 bytes).
| 1 | 1 | 2 |
| Tag | Block | Offset |
← Main memory format →
Suppose we need to access main memory address 3 (0011 in binary). If we partition 0011 using the address format above, we get a tag of 0, a block of 0, and an offset of 11.
Thus, main memory address 0011 maps to cache block 0.
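To make the partitioning concrete, here is a sketch that extracts the three fields from the 4-bit address with shifts and masks (field widths taken from the format above):

    address = 0b0011                                 # main memory address 3
    offset_bits, block_bits = 2, 1

    offset = address & 0b11                          # low 2 bits: offset = 11
    block = (address >> offset_bits) & 0b1           # next bit: block = 0
    tag = address >> (offset_bits + block_bits)      # leftover bit: tag = 0

    print(tag, block, offset)                        # 0 0 3 (offset 3 is binary 11)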
(The figure on slide 19 shows this mapping, along with the tag that is also stored with the data.)
Direct mapped cache maps main memory blocks in a modular fashion to cache blocks (sketched in code after this list). The mapping depends on:
- The number of bits in the main memory address (how many addresses exist in main memory).
- The number of blocks in cache (which determines the size of the block field).
- How many addresses (either bytes or words) are in a block (which determines the size of the offset field).
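A sketch of how those three quantities fix the field widths (assuming byte-addressable memory and power-of-two sizes; the function name is made up):

    from math import log2

    def field_widths(memory_bytes, cache_blocks, block_bytes):
        offset_bits = int(log2(block_bytes))     # pick a byte within a block
        block_bits = int(log2(cache_blocks))     # pick a cache block
        address_bits = int(log2(memory_bytes))   # total bits in a memory address
        tag_bits = address_bits - block_bits - offset_bits   # whatever is left over
        return tag_bits, block_bits, offset_bits

    print(field_widths(16, 2, 4))   # the example above: (1, 1, 2)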
Suppose instead of placing memory blocks in specific cache locations based on memory address, we could allow a block to go anywhere in cache.
In this way, cache would have to fill up before any blocks are evicted.
This is how fully associative cache works.
A memory address is partitioned into only two fields: the tag and the offset.
Suppose we have 14-bit memory addresses and a cache with 16 blocks, where each block holds 8 words. The field format of a memory reference is:
| 11 bits | 3 bits |
| Tag | Offset |
← 14 bits →
When the cache is searched, all tags are searched in parallel to retrieve the data quickly.
This requires special, costly hardware.
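A software sketch of the lookup (the hardware compares every tag at once; this sequential Python version only shows the logic, and all names are made up):

    def associative_lookup(address, tags, blocks, offset_bits=3):
        # Split a 14-bit address into an 11-bit tag and a 3-bit offset.
        tag = address >> offset_bits
        offset = address & ((1 << offset_bits) - 1)
        # Hardware searches all 16 tags in parallel; we scan them one by one.
        for i, stored_tag in enumerate(tags):
            if stored_tag == tag:
                return blocks[i][offset]   # hit: return the word within the block
        return None                        # miss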
Direct mapped cache evicts a block whenever another memory reference maps to that same cache block.
With fully associative cache, we have no such mapping, thus we must devise an algorithm to determine which block to evict from the cache.
The block that is evicted is the victim block.
There are a number of ways to pick a victim.
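Common choices (standard policies, though they aren't listed above) include least recently used (LRU), first-in first-out (FIFO), and random replacement. A minimal sketch of LRU victim selection in Python:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()   # key order doubles as recency order

        def access(self, block, data):
            if block in self.blocks:
                self.blocks.move_to_end(block)   # a hit refreshes recency
            else:
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)   # evict the least recently used victim
                self.blocks[block] = data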
Most of today's small systems employ multilevel cache hierarchies.
The levels of cache form their own small memory hierarchy.
Level 1 cache (8KB to 64KB) is situated on the processor itself.
If the cache system uses inclusive caches, the same data may be present at multiple levels of cache.
Strictly inclusive caches guarantee that all data in a smaller cache also exists at the next higher level.
Exclusive caches permit only one copy of the data.
The tradeoffs in choosing one over the other involve weighing the variables of access time, memory size, and circuit complexity.
Final
From 6:00 to 8:30 on the 20th. We do have one last class next week. Questions are pretty much just the homework. Replace Chapter 4 with the MIPS material we did.