Lesson 2: Memory Tech Refresher
- discussion thread
^ Tiered-Latency DRAM: A Low Latency and Low Cost DRAM Architecture (Sections 1 & 2)
Donghyuk Lee, Yoongu Kim, Vivek Seshadri, Jamie Liu, Lavanya Subramanian, and Onur Mutlu. HPCA '13
^ A Case for Exploiting Subarray-Level Parallelism (SALP) in DRAM (Sections 1 & 2)
Yoongu Kim, Vivek Seshadri, Donghyuk Lee, Jamie Liu, and Onur Mutlu. ISCA '12
^ RAIDR: Retention-Aware Intelligent DRAM Refresh (Sections 1 & 2)
Jamie Liu, Ben Jaiyen, Richard Veras, and Onur Mutlu. ISCA '12
Tasks due January 10.
On readings: Recommended background readings are marked with (^) above. Optional historical or fun readings are marked with (*). If you already feel comfortable with the topic, you may skip these readings.
Notes
You can find slides on Canvas.
- Memory makes up a large fraction of a modern chip's area, and it strongly influences overall system performance.
- Memory bottlenecks hinder computational efficiency, especially for data-intensive applications such as deep learning and analytics.
- Effective memory organization (e.g., banking and interleaving) can mitigate latency and allow parallel accesses; see the address-mapping sketch after this list.
- Different memory technologies (DRAM vs. SRAM vs. PCM) offer trade-offs between speed, cost, and energy consumption, influencing design choices.
- The energy cost of memory access is significantly higher than that of computation, necessitating strategies to minimize data movement.
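To make the banking/interleaving bullet concrete, here is a minimal sketch of one possible physical-address-to-DRAM mapping. It is not taken from the assigned readings, and the field widths (10 column bits, 3 bank bits, 15 row bits) are illustrative assumptions rather than any real device's parameters; real memory controllers typically use more elaborate (often hashed) mappings.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative address mapping: the low-order bits select the column,
 * the next bits select the bank, and the bits above those select the row.
 * Spreading nearby blocks across banks lets independent banks serve
 * requests concurrently (bank-level parallelism). */
#define COL_BITS  10   /* assumed: 1024 columns per row   */
#define BANK_BITS  3   /* assumed: 8 banks per rank       */
#define ROW_BITS  15   /* assumed: 32768 rows per bank    */

typedef struct {
    uint32_t row;
    uint32_t bank;
    uint32_t col;
} dram_addr_t;

static dram_addr_t decode(uint64_t paddr) {
    dram_addr_t a;
    a.col  =  paddr                            & ((1u << COL_BITS)  - 1);
    a.bank = (paddr >> COL_BITS)               & ((1u << BANK_BITS) - 1);
    a.row  = (paddr >> (COL_BITS + BANK_BITS)) & ((1u << ROW_BITS)  - 1);
    return a;
}

int main(void) {
    /* Addresses 1024 apart fall into different banks under this mapping,
     * so a controller could overlap their row activations. */
    uint64_t addrs[] = { 0x0000, 0x0400, 0x0800 };
    for (int i = 0; i < 3; i++) {
        dram_addr_t a = decode(addrs[i]);
        printf("paddr 0x%05llx -> row %u, bank %u, col %u\n",
               (unsigned long long)addrs[i], a.row, a.bank, a.col);
    }
    return 0;
}
```

Under this (assumed) mapping, consecutive 1024-address blocks land in different banks, which is exactly the kind of parallelism that banking and interleaving expose.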
Tasks
Mostly the same as last time.
- Do the background reading and read the paper for next week.
- Decide on teams for projects. See the syllabus.
- Ask any questions about the course structure or content in this lesson’s discussion topic.
- Pick a paper from the schedule whose discussion you want to lead. Claim it by opening a pull request that modifies content.toml to fill in your name on one of the leader = "TK" lines.
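For concreteness, a claim might look like the hypothetical excerpt below; the surrounding structure of content.toml is not shown in this lesson, so only the leader line itself is taken from the instructions above.

```toml
# Hypothetical excerpt from content.toml; surrounding keys may differ.
# Before (unclaimed):
leader = "TK"
# After (claimed in your pull request):
leader = "Your Name"
```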