
Multi-core cache hierarchies 1 edition



Multi-core cache hierarchies
Rajeev Balasubramonian, Norman Jouppi, Naveen Muralimanohar

Published 2011 by Morgan & Claypool in San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901, USA).
Written in English.

About the Book

A key determinant of overall system performance and power dissipation is the cache hierarchy, since access to off-chip memory consumes many more cycles and much more energy than an on-chip access. In addition, multi-core processors are expected to place ever-higher bandwidth demands on the memory system. These pressures make it important to avoid off-chip memory accesses by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, several important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints.

The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers.
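One of the problems the description names, allocating shared cache resources across cores, can be illustrated with a toy model. The sketch below (not from the book; all class names, parameters, and sizes are hypothetical) partitions the ways of a set-associative last-level cache among cores, so one core's misses cannot evict another core's blocks:

```python
# Toy model of way-partitioning in a shared last-level cache (LLC).
# Each core is granted a fixed number of ways in every set; eviction
# (LRU) happens only within a core's own partition. All parameters
# (sets, ways, 64 B lines) are illustrative only.

LINE = 64   # bytes per cache block
SETS = 4    # number of sets (tiny, for illustration)
WAYS = 8    # associativity of each set

class PartitionedLLC:
    def __init__(self, way_alloc):
        # way_alloc: {core_id: number_of_ways}; allocations must sum to WAYS
        assert sum(way_alloc.values()) == WAYS
        self.way_alloc = way_alloc
        # per (set, core): resident tags, least-recently-used first
        self.lines = {(s, c): [] for s in range(SETS) for c in way_alloc}

    def access(self, core, addr):
        """Return True on a hit; on a miss, fill the block, evicting the
        core's own LRU block if its partition in this set is full."""
        s = (addr // LINE) % SETS
        tag = addr // (LINE * SETS)
        part = self.lines[(s, core)]
        if tag in part:
            part.remove(tag)     # refresh LRU position
            part.append(tag)
            return True
        if len(part) >= self.way_alloc[core]:
            part.pop(0)          # evict this core's LRU block only
        part.append(tag)
        return False

llc = PartitionedLLC({0: 6, 1: 2})   # core 0 gets 6 ways, core 1 gets 2
assert llc.access(0, 0x0000) is False   # cold miss
assert llc.access(0, 0x0000) is True    # hit
# Core 1 streaming through many blocks cannot evict core 0's data:
for a in range(0, LINE * SETS * 16, LINE):
    llc.access(1, a)
assert llc.access(0, 0x0000) is True    # still resident in core 0's ways
```

The final assertion is the point of partitioning: without it, core 1's streaming workload would sweep core 0's working set out of the shared cache.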

Table of Contents

1. Basic elements of large cache design
     Shared vs. private caches
     Shared LLC
     Private LLC
     Workload analysis
     Centralized vs. distributed shared caches
     Non-uniform cache access
2. Organizing data in CMP last-level caches
     Data management for a large shared NUCA cache
     Placement/migration/search policies for D-NUCA
     Replication policies in shared caches
     OS-based page placement
     Data management for a collection of private caches
3. Policies impacting cache hit rates
     Cache partitioning for throughput and quality-of-service
     QoS policies
     Selecting a highly useful population for a large shared cache
     Replacement/insertion policies
     Novel organizations for associativity
     Block-level optimizations
4. Interconnection networks within large caches
     Basic large cache design
     Cache array design
     Cache interconnects
     Packet-switched routed networks
     The impact of interconnect design on NUCA and UCA caches
     NUCA caches
     UCA caches
     Innovative network architectures for large caches
5. Technology
     Static-RAM limitations
     Parameter variation
     Modeling methodology
     Mitigating the effects of process variation
     Tolerating hard and soft errors
     Leveraging 3D stacking to resolve SRAM problems
     Emerging technologies
     Embedded DRAM
     Non-volatile memories
6. Concluding remarks
Authors' biographies

Edition Notes

Part of: Synthesis digital library of engineering and computer science.

Series from website.

Includes bibliographical references (p. 119-136).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Also available in print.

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Series: Synthesis lectures on computer architecture, #17

Other Titles
Synthesis digital library of engineering and computer science.


Library of Congress
TK7895.M4 B255 2011

The Physical Object

[electronic resource]

ID Numbers

ISBN 13: 9781598297546, 9781598297539


Record created July 30, 2014 by ImportBot (import new book).