Digital Design and Computer Architecture - Lecture 21b: Memory Hierarchy and Caches

This post is a derivative of Digital Design and Computer Architecture Lecture by Prof. Onur Mutlu, used under CC BY-NC-SA 4.0.

You can watch this lecture on YouTube and see the PDF.

I write this summary for personal learning purposes.

Readings for Today

  • Memory Hierarchy and Caches
  • Required
    • H&H Chapters 8.1-8.3
    • Refresh: P&P Chapter 3.5
  • Recommended
    • An early cache paper by Maurice Wilkes
      • Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE Trans. On Electronic Computers, 1965.

Recall: SRAM vs DRAM

One difference between SRAM and DRAM is that you do not supply all N + M address bits to the DRAM at the same time. Because the DRAM is a separate chip, the address is supplied in pieces (the row address bits and then the column address bits) to minimize the cost of the chip interface.
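
To make this concrete, here is a tiny sketch of my own (not from the lecture): assuming the DRAM has 2^N rows and 2^M columns, the N row-address bits and the M column-address bits are sent to the chip in two separate steps. The numbers and names below are made up for illustration.

```python
# Illustrative only: split a full (N + M)-bit DRAM address into the row part
# (sent first, with the row-access command) and the column part (sent later).
N, M = 14, 10                        # assumed sizes: 2^14 rows x 2^10 columns
addr = 0x2ABCDE                      # an example (N + M)-bit address

row = (addr >> M) & ((1 << N) - 1)   # upper N bits: row address
col = addr & ((1 << M) - 1)          # lower M bits: column address

print(f"row address: {row:#x}, column address: {col:#x}")
```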

The Memory Hierarchy

Memory in a Modern System

You have a bunch of cores plus caches that are shared among them. Inside each core you have an L1 cache. The L1 cache is usually considered part of the core; the more outer-level caches are not.

Ideal Memory

  • Zero access time (latency)
  • Infinite capacity
  • Zero cost
  • Infinite bandwidth
    • To support multiple accesses in parallel (e.g., by banking or multi-porting)

The Problem

  • Ideal memory’s requirements oppose each other
  • Bigger is slower
    • Bigger -> Takes longer to determine the location
  • Faster is more expensive
    • Memory technology: SRAM vs. DRAM vs. Disk vs. Tape
  • Higher bandwidth is more expensive
    • Need more banks, more ports, higher frequency, or faster technology
  • Other technologies have their place as well
    • Flash memory (mature), PC-RAM, MRAM, RRAM (not mature yet)

Why Memory Hierarchy?

  • We want both fast and large
  • But we cannot achieve both with a single level of memory
  • Idea: Have multiple levels of storage (progressively bigger and slower as the levels are farther from the processor) and ensure most of the data the processor needs is kept in the fast(er) level(s)

Locality

  • One’s recent past is a very good predictor of his/her near future.
  • Temporal Locality: If you just did something, it is very likely that you will do the same thing again soon
    • since you are here today, there is a good chance you will be here again and again regularly
  • Spatial Locality: If you did something, it is very likely you will do something similar/related (in space)
    • every time I find you in this room, you are probably sitting close to the same people

Memory Locality

  • A “typical” program has a lot of locality in memory references
    • typical programs are composed of “loops”. And loops operate on similar data.
  • Temporal: A program tends to reference the same memory location many times, all within a small window of time. If you access some memory location, you’re going to access it again. If you’re accessing instructions in a looping manner, you’re going to fetch the same instructions again and again.
  • Spatial: A program tends to reference a cluster of memory locations at a time.
    • most notable examples:
      1. instruction memory references. If you are iterating over a loop, the instruction fetches are sequential accesses, which means the instructions are spatially close in the address space.
      2. array/data structure references. Because you’re accessing consecutive addresses.

Temporal: You touch some word over and over and over again but don’t touch anything around it.

Caching Basics

Caching Basics: Exploit Temporal Locality

  • Idea: Store recently accessed data in automatically managed fast memory (called cache)
  • Anticipation: the data will be accessed again soon
  • Temporal locality principle
    • Recently accessed data will be again accessed in the near future
    • This is what Maurice Wilkes had in mind:
      • Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE Trans. On Electronic Computers, 1965.
      • “The use is discussed of a fast core memory of, say 32000 words as a slave to a slower core memory of, say, one million words in such a way that in practical cases the effective access time is nearer that of the fast memory than that of the slow memory.”

Caching Basics: Exploit Spatial Locality

  • Idea: Store addresses adjacent to the recently accessed one in automatically managed fast memory
    • Logically divide memory into equal size blocks
    • Fetch to cache the accessed block in its entirety
  • Anticipation: nearby data will be accessed soon
  • Spatial locality principle
    • Nearby data in memory will be accessed in the near future
      • E.g., sequential instruction access, array traversal
    • This is what IBM 360/85 implemented
      • 16 Kbyte cache with 64 byte blocks
      • Liptay, “Structural aspects of the System/360 Model 85 II: the cache,” IBM Systems Journal, 1968.

The Bookshelf Analogy

  • Book in your hand
  • Desk
  • Bookshelf
  • Boxes at home
  • Boxes in storage
  • Recently-used books tend to stay on desk
    • Comp Arch books, books for classes you are currently taking
    • Until the desk gets full
  • Adjacent books on the shelf are needed around the same time, if I have organized/categorized my books well on the shelf

Caching in a Pipelined Design

Basically, the level 1 cache is very special because the L1 cache needs to be tightly integrated into the pipeline.

  • The cache needs to be tightly integrated into the pipeline
    • Ideally, access in 1-cycle so that load-dependent operations do not stall
  • High frequency pipeline -> Cannot make the cache large, because a large cache would require multiple (or many) cycles to access.
    • But, we want a large cache AND a pipelined design
  • Idea: Cache hierarchy

A Note on Manual vs. Automatic Management

  • Manual: Programmer manages data movement across levels
    • – too painful for programmers on substantial programs
      • “core” vs “drum” memory in the 50’s
      • still done in some embedded processors (on-chip scratch pad SRAM in lieu of a cache) and GPUs (called “shared memory”)
  • Automatic: Hardware manages data movement across levels, transparently to the programmer
    • ++ programmer’s life is easier
      • the average programmer doesn’t need to know about it
      • You don’t need to know how big the cache is and how it works to write a “correct” program!
      • What if you want a “fast” program? You’d better know about your caches. If you know them, you can change your access patterns and your working set.

Automatic Management in Memory Hierarchy

  • Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE Trans. On Electronic Computers, 1965.
  • “By a slave memory (cache) I mean one which automatically accumulates to itself words that come from a slower main memory, and keeps them available for subsequent use without it being necessary for the penalty of main memory access to be incurred again.”

Historical Aside: Other Cache Papers

  • Fotheringham, “Dynamic Storage Allocation in the Atlas Computer, Including an Automatic Use of a Backing Store,” CACM 1961.
  • Bloom, Cohen, Porter, “Considerations in the Design of a Computer with High Logic-to-Memory Speed Ratio,” AIEE Gigacycle Computing Systems Winter Meeting, Jan. 1962.

A Modern Memory Hierarchy

Hierarchical Latency Analysis

How do you analyze the latency?

  • For a given memory hierarchy level i, it has a technology-intrinsic access time of ti. The perceived access time Ti is longer than ti.
    • ti is the access latency of this cache itself, whereas Ti is the perceived access latency to that level.
  • Except for the outer-most hierarchy, when looking for a given address there is
    • a chance (hit-rate hi) you “hit” and access time is ti
    • a chance (miss-rate mi) you “miss” and access time is ti + T(i+1)
    • hi + mi = 1
  • Thus
    • Ti = hi*ti + mi*(ti + T(i+1))
    • Ti = ti + mi*T(i+1)

hi and mi are defined to be the hit-rate and miss-rate of just the references that missed at Li-1.
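
As a small sketch of my own (made-up numbers, not from the lecture), the recursion Ti = ti + mi*T(i+1) can be evaluated from the outermost level inward:

```python
# Perceived access time per level: T(i) = t(i) + m(i) * T(i+1).
# The t and m values below are illustrative only.
t = [1, 10, 100]      # intrinsic access times: L1, L2, main memory (cycles)
m = [0.1, 0.2, 0.0]   # miss rates; the outermost level never misses here

T = [0.0] * len(t)
T[-1] = t[-1]                        # outermost level: T = t
for i in range(len(t) - 2, -1, -1):  # work inward toward the processor
    T[i] = t[i] + m[i] * T[i + 1]

print(T[0])  # 4.0 = 1 + 0.1 * (10 + 0.2 * 100): what the processor perceives for L1
```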

Hierarchy Design Considerations

  • Recursive latency equation
    • Ti = ti + mi*T(i+1)
  • The goal: achieve desired T1 within allowed cost
  • Ti = ti is desirable
  • Keep mi low
    • increasing capacity Ci lowers mi, but beware of increasing ti
    • lower mi by smarter cache management (replacement: anticipate what you don’t need; prefetching: anticipate what you will need)
  • Keep T(i+1) low
    • faster lower levels of the hierarchy, but beware of increasing cost
    • introduce intermediate hierarchies as a compromise

Cache Basics and Operation

Cache

  • Generically, any structure that “memoizes” frequently used results to avoid repeating the long-latency operations required to reproduce the results from scratch, e.g. a web cache
  • Most commonly in the processor design context: an automatically-managed memory structure based on SRAM
    • memoize in SRAM the most frequently accessed DRAM memory locations to avoid repeatedly paying for the DRAM access latency

Caching Basics

  • Block (line): Unit of storage in the cache
    • Memory is logically divided into cache blocks that map to locations in the cache
    • It could be the size of a word or much bigger. For example, in today’s caches 64-byte blocks are very common. If you want to capture more spatial locality, you want your block size to be larger as well.
  • On a reference:
    • HIT: If in cache, use cached data instead of accessing memory
    • MISS: If not in cache, bring block into cache
      • Maybe have to kick something else out to do it
  • Some important cache design decisions
    • Placement: where and how to place/find a block in cache?
      • how do you index into the cache? how do you search the cache?
    • Replacement: what data to remove to make room in cache?
    • Granularity of management: large or small blocks? Subblocks?
    • Write policy: what do we do about writes?
    • Instructions/data: do we treat them separately?

Cache Abstraction and Metrics

  • Address -> Tag Store -> Hit/miss?
  • Address -> Data Store -> Data

  • Cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses)
  • Average memory access time (AMAT) = ( hit-rate * hit-latency ) + ( miss-rate * miss-latency )
  • Aside: Is reducing AMAT always beneficial for performance?
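
A quick numerical sketch of the AMAT formula (my own made-up latencies and rates, not from the lecture):

```python
# AMAT = hit-rate * hit-latency + miss-rate * miss-latency
hit_rate, miss_rate = 0.95, 0.05
hit_latency, miss_latency = 2, 102   # assume a miss pays the 2-cycle lookup plus 100 cycles of memory access

amat = hit_rate * hit_latency + miss_rate * miss_latency
print(amat)  # about 7 cycles on average: 0.95*2 + 0.05*102
```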

Hardware Cache Design

A Basic Hardware Cache Design

We will start with a basic hardware cache design. Then, we will examine a multitude of ideas to make it better.

Blocks and Addressing the Cache

  • Memory is logically divided into fixed-size blocks

  • Each block maps to a location in the cache, determined by the index bits in the address.

    • E.g., 8-bit address

      Tag (2 bits) | Index (3 bits) | Byte in block (3 bits)
    • used to index into the tag and data stores

  • Cache access:

    1. index into the tag and data stores with index bits in address
    2. check valid bit in tag store
    3. compare tag bits in address with the stored tag in tag store
  • If a block is in the cache (cache hit), the stored tag should be valid and match the tag of the block
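
As a minimal sketch of the breakdown above (2 tag bits, 3 index bits, 3 byte-in-block bits), this is how the fields of an 8-bit address can be extracted. The helper name and the example address are mine, for illustration only.

```python
# Decode an 8-bit address into tag / index / byte-in-block fields
# for the example above: 2 tag bits, 3 index bits, 3 offset bits.
TAG_BITS, INDEX_BITS, OFFSET_BITS = 2, 3, 3

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)                 # byte in block
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # selects the cache set
    tag = addr >> (OFFSET_BITS + INDEX_BITS)                 # compared against the stored tag
    return tag, index, offset

print(split_address(0b10_110_011))  # (0b10, 0b110, 0b011) = (2, 6, 3)
```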

Direct-Mapped Cache: Placement and Access

  • Assume byte-addressable memory: 256 bytes, 8-byte blocks -> 32 blocks in total
  • Assume cache: 64 bytes, 8 blocks
    • Direct-mapped: A block can go to only one location
    • Addresses with same index contend for the same location
      • This causes conflict misses: two blocks that you’re accessing conflict at the same index. The reason this happens is that there is only one place in the cache where a block with a given index can be stored.
  • Direct-mapped cache: Two blocks in memory that map to the same index in the cache cannot be present in the cache at the same time
    • One index can store only one entry
  • Can lead to 0% hit rate if more than one block accessed in an interleaved manner map to the same index
    • Assume addresses A and B have the same index bits but different tag bits
    • A, B, A, B, A, B, A, B, … -> conflict in the cache index
    • All accesses are conflict misses
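
Here is a tiny direct-mapped cache model of my own (8 sets of 8-byte blocks, matching the example above) that shows the A, B, A, B, … pattern missing every time:

```python
# Direct-mapped cache: 8 sets, 8-byte blocks, one tag stored per set.
# A and B below share an index but have different tags, so they evict each other.
NUM_SETS, BLOCK_SIZE = 8, 8
cache = [None] * NUM_SETS            # tag stored in each set (None = invalid)

def access(addr):
    index = (addr // BLOCK_SIZE) % NUM_SETS
    tag = addr // (BLOCK_SIZE * NUM_SETS)
    hit = cache[index] == tag
    cache[index] = tag               # on a miss, the new block replaces the old one
    return hit

A, B = 0, 64                         # same index (0), different tags
print([access(x) for x in [A, B, A, B, A, B, A, B]])  # all False: 0% hit rate
```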

Associativity

The idea that solves conflict misses is called set associativity.

Set Associativity

  • Problem: Addresses 0 and 8 always conflict in direct mapped cache
  • Instead of having one column of 8, have 2 columns of 4 blocks
  • Key idea: Associative memory within the set
    • + Accommodates conflicts better (fewer conflict misses)
    • – More complex, slower access, larger tag store
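
Continuing my sketch from the direct-mapped example: with 2 ways per set, both conflicting blocks can stay resident at the same time. The configuration below follows the slide (2 columns of 4 blocks); the replacement choice within the set is simplified.

```python
# 2-way set-associative cache with 4 sets ("2 columns of 4 blocks").
# Block addresses 0 and 8 map to the same set but occupy different ways.
NUM_SETS, WAYS = 4, 2
sets = [[] for _ in range(NUM_SETS)]    # each set holds up to WAYS tags

def access(block_addr):
    index, tag = block_addr % NUM_SETS, block_addr // NUM_SETS
    ways = sets[index]
    if tag in ways:
        return True                     # hit: the block is already in the set
    if len(ways) == WAYS:
        ways.pop(0)                     # set is full: evict one block (FIFO here, for simplicity)
    ways.append(tag)
    return False                        # miss

print([access(b) for b in [0, 8, 0, 8, 0, 8]])  # misses only on the first two accesses
```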

Higher Associativity

  • + Likelihood of conflict misses even lower
  • – More tag comparators and wider data mux; larger tags

Full Associativity

You don’t index into the cache anymore.

  • Fully associative cache
    • A block can be placed in any cache location

Associativity (and Tradeoffs)

  • Degree of associativity: How many blocks can map to the same index (or set)?
  • Higher associativity
    • ++ Higher hit rate
    • – Slower cache access time (hit latency and data access latency)
    • – More expensive hardware (more comparators)
  • Diminishing returns from higher associativity

Issues in Set-Associative Caches

  • Think of each block in a set having a “priority”
    • Indicating how important it is to keep the block in the cache
    • i.e., how likely this block is to be the one referenced soonest in the future
  • Key issue: How do you determine/adjust block priorities?
  • There are three key decisions in a set:
    • Insertion: What happens to priorities on a cache fill?
      • Where to insert the incoming block, whether or not to insert the block
    • Promotion: What happens to priorities on a cache hit?
      • Whether and how to change block priority
    • Eviction/replacement: What happens to priorities on a cache miss?
      • Which block to evict and how to adjust priorities

Eviction/Replacement Policy

  • Which block in the set to replace on a cache miss?
    • Any invalid block first
    • If all are valid, consult the replacement policy
      • Random
      • FIFO
      • Least recently used (how to implement?)
      • Not most recently used
      • Least frequently used?
      • Least costly to re-fetch?
        • Why would memory accesses have different cost?
      • Hybrid replacement policies
      • Optimal replacement policy?

LRU

Implementing LRU

  • Idea: Evict the least recently accessed block
  • Problem: Need to keep track of access ordering of blocks
  • Question: 2-way set associative cache:
    • How many bits do you need to store in the tag store to tell you which way is the least recently used block? 1 bit
  • Question: 4-way set associative cache:
    • How many bits do you need to store in the tag store to tell you which way is the least recently used block? You need to be able to distinguish all possible orderings of the blocks. There are 4! = 24 possible different orderings.
    • How many different orderings possible for the 4 blocks in the set?
    • How many bits needed to encode the LRU order of a block?
    • What is the logic needed to determine the LRU victim?
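
The counting for these questions is quick to check (my own arithmetic sketch, not lecture code):

```python
from math import factorial, log2, ceil

ways = 4
orderings = factorial(ways)           # 4! = 24 possible LRU orderings of the set
bits_per_set = ceil(log2(orderings))  # 5 bits are enough to encode the full order
bits_per_block = ceil(log2(ways))     # or 2 bits per block (8 bits/set) to store each block's rank

print(orderings, bits_per_set, bits_per_block)  # 24 5 2
```

For the 2-way case the same counting gives ceil(log2(2!)) = 1 bit, matching the answer above.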

Approximations of LRU

  • Most modern processors do not implement “true LRU” (also called “perfect LRU”) in highly-associative caches
  • Why?
    • True LRU is complex
    • LRU is an approximation to predict locality anyway (i.e., not the best possible cache management policy)
  • Examples:
    • Not MRU (not most recently used)
    • Hierarchical LRU: divide the N-way set into M “groups”, track the MRU group and the MRU way in each group
    • Victim-NextVictim Replacement: Only keep track of the victim and the next victim
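
As an illustration of how simple an approximation can be, here is a sketch of my own of the “Not MRU” idea: track only the most recently used way in each set and evict any other way. The class and method names are mine.

```python
import random

WAYS = 4

class NMRUSet:
    """One cache set under Not-MRU replacement (illustrative sketch)."""
    def __init__(self):
        self.mru = 0                    # only log2(WAYS) bits of state per set

    def on_access(self, way):
        self.mru = way                  # the accessed way becomes most recently used

    def pick_victim(self):
        # Evict any way except the MRU one (chosen randomly here).
        return random.choice([w for w in range(WAYS) if w != self.mru])

s = NMRUSet()
s.on_access(2)
print(s.pick_victim())                  # one of 0, 1, 3 - never way 2
```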

Cache Replacement Policy

LRU or Random

  • LRU vs. Random: Which one is better?
    • Example: 4-way cache, cyclic references to A, B, C, D, E
  • 0% hit rate with LRU policy
  • Set thrashing: When the “program working set” in a set is larger than set associativity
    • Random replacement policy is better when thrashing occurs
  • In practice:
    • Depends on workload
    • Average hit rate of LRU and Random are similar
  • Best of both Worlds: Hybrid of LRU and Random
    • How to choose between the two? Set sampling
      • See Qureshi et al., “A Case for MLP-Aware Cache Replacement,“ ISCA 2006.
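
A small simulation of my own of one 4-way set under true LRU with the cyclic pattern A, B, C, D, E shows the thrashing behavior:

```python
# One 4-way set under true LRU, accessed cyclically with 5 distinct blocks.
# The working set (5 blocks) exceeds the associativity (4), so LRU never hits.
WAYS = 4
lru = []                                 # front = least recently used, back = MRU

hits = 0
refs = list("ABCDE") * 4                 # A, B, C, D, E, A, B, C, D, E, ...
for block in refs:
    if block in lru:
        hits += 1
        lru.remove(block)                # promotion: move the block to the MRU position
    elif len(lru) == WAYS:
        lru.pop(0)                       # eviction: kick out the LRU block
    lru.append(block)

print(f"LRU hits: {hits}/{len(refs)}")   # 0/20; random replacement would get some hits here
```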

What Is the Optimal Replacement Policy?

  • Belady’s OPT
    • Replace the block that is going to be referenced furthest in the future by the program (If you know this, of course)
    • Belady, “A study of replacement algorithms for a virtual-storage computer,” IBM Systems Journal, 1966.
    • How do we implement this? Simulate?
  • Is this optimal for minimizing miss rate? - Yes
  • Is this optimal for minimizing execution time?
    • No. Cache miss latency/cost varies from block to block! Miss rate is a good indicator of execution time, but it is not the only thing that affects execution time.
    • Two reasons: Remote vs. local caches and miss overlapping
    • Qureshi et al. “A Case for MLP-Aware Cache Replacement,“ ISCA 2006.
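
Here is a rough sketch of my own of Belady’s OPT for a single set; it needs the future reference sequence, which is why it can only be computed offline or in simulation. The function name is mine.

```python
# Belady's OPT for one set: on a miss with a full set, evict the resident block
# whose next reference is furthest in the future (or that is never used again).
def opt_hits(refs, ways):
    resident, hits = [], 0
    for i, block in enumerate(refs):
        if block in resident:
            hits += 1
            continue
        if len(resident) == ways:
            future = refs[i + 1:]
            victim = max(resident,
                         key=lambda b: future.index(b) if b in future else len(future))
            resident.remove(victim)
        resident.append(block)
    return hits

refs = list("ABCDE") * 4
print(opt_hits(refs, 4), "/", len(refs))  # 12 / 20, versus 0 / 20 for LRU on the same pattern
```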

Reading

  • Key observation: Some misses are more costly than others, as their latency is exposed as stall time. Reducing miss rate is not always good for performance. Cache replacement should take into account the MLP of misses.
  • Moinuddin K. Qureshi, Daniel N. Lynch, Onur Mutlu, and Yale N. Patt, “A Case for MLP-Aware Cache Replacement” Proceedings of the 33rd International Symposium on Computer Architecture (ISCA) , pages 167-177, Boston, MA, June 2006. Slides (ppt)
