Lecture Slides on Memory Technology - Computer Architecture | CS 35101, Study notes of Computer Architecture and Organization

Material Type: Notes; Professor: Steinfadt; Class: COMPUTER ARCHITECTURE; Subject: Computer Science; University: Kent State University; Term: Spring 2008;

Week 15
Chapter 7
Large and Fast: Exploiting Memory Hierarchy
Taken from Kevin Schaffer’s slides
CS 35101
Computer Architecture
Spring 2008


Memory Technology

- Dynamic RAM (DRAM)
  - Slower, less expensive
  - Access time: 50–70 ns
- Static RAM (SRAM)
  - Faster, more expensive
  - Access time: 0.5–5 ns

Memory Hierarchy

Hit or Miss

Blocks

- Data is transferred between levels in the hierarchy in blocks (or lines)
- A block can be as small as a single word or as large as several thousand words
- Block sizes typically differ between levels of the hierarchy, getting larger as you move further from the processor
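As a concrete illustration of the block idea, an address can be split into a block number and an offset within the block. The block size of 4 words below is an assumed example value, not one given on the slides:

```python
# Split a word address into a block number and an offset within the block.
# BLOCK_SIZE_WORDS = 4 is an assumed example; the slides note that real
# block sizes range from a single word to several thousand words.
BLOCK_SIZE_WORDS = 4

def block_number(addr):
    # Which block of the larger memory this word belongs to.
    return addr // BLOCK_SIZE_WORDS

def block_offset(addr):
    # Position of the word within its block.
    return addr % BLOCK_SIZE_WORDS

# Word address 13 falls in block 3, at offset 1 within that block.
print(block_number(13), block_offset(13))  # 3 1
```

With power-of-two block sizes this split is just a matter of taking the low-order and high-order address bits.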

Cache

- A cache is a small memory that stores a subset of blocks from a larger memory
- It contains a number of block frames, each of which holds a single block (or is empty)
- Cache design issues:
  - How to determine if a block is in the cache
  - If so, where in the cache is it?

Cache

Direct-Mapped Cache

Tags

- Each block frame has a tag that identifies which of the blocks that can be stored there actually is there
- For a direct-mapped cache, the tag contains the upper address bits
- We must also be able to detect when no block is stored in a block frame, so each frame also has a valid bit
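The index/tag/valid-bit machinery can be sketched in a few lines. A 4-frame cache over a 16-word memory (2 index bits, 2 tag bits) is assumed here for illustration:

```python
# Direct-mapped cache lookup: the low address bits index a frame, the
# upper bits are compared against the frame's tag, and the valid bit
# prevents a match against an empty frame.
NUM_FRAMES = 4  # assumed example size: 4 frames, 16-word memory

# Each frame holds (valid, tag); all frames start empty.
frames = [(False, None)] * NUM_FRAMES

def lookup(addr):
    index = addr % NUM_FRAMES    # low-order bits select the frame
    tag = addr // NUM_FRAMES     # upper bits identify the block
    valid, stored_tag = frames[index]
    return valid and stored_tag == tag  # hit only if valid AND tags match

def fill(addr):
    # On a miss, the fetched block's tag is recorded and the frame
    # is marked valid.
    index = addr % NUM_FRAMES
    frames[index] = (True, addr // NUM_FRAMES)

fill(0)
print(lookup(0))  # True: valid frame with a matching tag
print(lookup(4))  # False: same index as 0, but a different tag
print(lookup(1))  # False: that frame is still invalid
```

The lookup for address 4 shows why the tag is needed: addresses 0, 4, 8, and 12 all map to frame 00, and only the tag distinguishes which one is resident.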

Example

Initial state (all frames invalid):

Index  V  Tag  Data
11     N
10     N
01     N
00     N

After Mem[0] is loaded (index 00, tag 00):

Index  V  Tag  Data
11     N
10     N
01     N
00     Y  00   Mem[0]

After Mem[2] is loaded (index 10, tag 00):

Index  V  Tag  Data
11     N
10     Y  00   Mem[2]
01     N
00     Y  00   Mem[0]

After Mem[3] is loaded (index 11, tag 00):

Index  V  Tag  Data
11     Y  00   Mem[3]
10     Y  00   Mem[2]
01     N
00     Y  00   Mem[0]

Example

Unchanged state (a cache hit leaves the table as is):

Index  V  Tag  Data
11     Y  00   Mem[3]
10     Y  00   Mem[2]
01     N
00     Y  00   Mem[0]

After Mem[4] replaces Mem[0] (index 00, tag 01):

Index  V  Tag  Data
11     Y  00   Mem[3]
10     Y  00   Mem[2]
01     N
00     Y  01   Mem[4]

Unchanged state (another hit):

Index  V  Tag  Data
11     Y  00   Mem[3]
10     Y  00   Mem[2]
01     N
00     Y  01   Mem[4]

After Mem[15] replaces Mem[3] (index 11, tag 11):

Index  V  Tag  Data
11     Y  11   Mem[15]
10     Y  00   Mem[2]
01     N
00     Y  01   Mem[4]
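The example states can be reproduced with a short simulation. The access sequence 0, 2, 3, 0, 4, 4, 15 below is an assumption consistent with the snapshots (any hit leaves the table unchanged), not a sequence stated on the slides:

```python
# Simulate the 4-frame direct-mapped cache from the example.
# cache[index] = (valid, tag, data); 4-bit addresses, so the low 2 bits
# are the index and the high 2 bits are the tag.
cache = {i: (False, None, None) for i in range(4)}

def access(addr):
    index = addr % 4   # low 2 bits select the frame
    tag = addr // 4    # high 2 bits are the tag
    valid, stored_tag, _ = cache[index]
    if valid and stored_tag == tag:
        return "hit"
    cache[index] = (True, tag, f"Mem[{addr}]")  # miss: fetch the block
    return "miss"

# Assumed access sequence consistent with the snapshots above.
results = [access(a) for a in (0, 2, 3, 0, 4, 4, 15)]
print(results)   # ['miss', 'miss', 'miss', 'hit', 'miss', 'hit', 'miss']
print(cache[0])  # (True, 1, 'Mem[4]')   -- tag 01, holding Mem[4]
print(cache[3])  # (True, 3, 'Mem[15]')  -- tag 11, holding Mem[15]
```

Note how address 4 evicts Mem[0] and address 15 evicts Mem[3]: in a direct-mapped cache, a conflicting block has exactly one frame it can occupy.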

Spatial Locality

- The previous cache configuration exploits temporal locality but not spatial locality
- To take advantage of spatial locality we must use more words per block
- On a cache miss, fetch the requested word plus others nearby
- Use the low-order bits of the address to select the word within the block

Example: Multiword

- Direct-mapped cache with 2 blocks of 2 words each
- 16-word memory
- Least significant address bit selects the word within the block

Address Binary
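For the multiword configuration just described (16-word memory, 2 frames of 2 words), the address breakdown can be sketched as follows; the `split` helper is illustrative, not part of the slides:

```python
# Address breakdown for the multiword example: 4-bit addresses, 2 frames
# of 2 words each. Bit 0 selects the word within the block, bit 1 selects
# the frame, and the top two bits form the tag.
def split(addr):
    word = addr & 0b1           # least significant bit: word within block
    index = (addr >> 1) & 0b1   # next bit: which of the 2 frames
    tag = addr >> 2             # remaining upper bits: the tag
    return tag, index, word

# Addresses 6 (0110) and 7 (0111) share a block: same tag and index.
# A miss on either one brings in the other -- spatial locality at work.
print(split(6))  # (1, 1, 0)
print(split(7))  # (1, 1, 1)
```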