Table of Contents
1. Computer Abstractions and Technology
1.1 Introduction
1.2 Seven great ideas in computer architecture
1.3 Below your program
1.4 Under the covers
1.5 Technologies for building processors and memory
1.6 Performance
1.7 The power wall
1.8 The sea change: The switch from uniprocessors to multiprocessors
1.9 Real stuff: Benchmarking the Intel Core i7
1.10 Going faster: Matrix multiply in Python
1.11 Fallacies and pitfalls
1.12 Concluding remarks
1.13 Historical perspective and further reading
1.14 Self-study
1.15 Exercises
2. Instructions
2.1 Introduction
2.2 Operations of the computer hardware
2.3 Operands of the computer hardware
2.4 Signed and unsigned numbers
2.5 Representing instructions in the computer
2.6 Logical operations
2.7 Instructions for making decisions
2.8 Supporting procedures in computer hardware
2.9 Communicating with people
2.10 MIPS addressing for 32-bit immediates and addresses
2.11 Parallelism and instructions: Synchronization
2.12 Translating and starting a program
2.13 A C sort example to put it all together
2.14 Arrays versus pointers
2.15 Advanced material: Compiling C and interpreting Java
2.16 Real stuff: ARMv7 (32-bit) instructions
2.17 Real stuff: ARMv8 (64-bit) instructions
2.18 Real stuff: RISC-V instructions
2.19 Real stuff: x86 instructions
2.20 Going faster: Matrix multiply in C
2.21 Fallacies and pitfalls
2.22 Concluding remarks
2.23 Historical perspective and further reading
2.24 Self-study
2.25 Exercises
2.26 MIPS Simulator
3. Arithmetic for Computers
3.1 Introduction
3.2 Addition and subtraction
3.3 Multiplication
3.4 Division
3.5 Floating point
3.6 Parallelism and computer arithmetic: Subword parallelism
3.7 Real stuff: Streaming SIMD extensions and advanced vector extensions in x86
3.8 Going faster: Subword parallelism and matrix multiply
3.9 Fallacies and pitfalls
3.10 Concluding remarks
3.11 Historical perspective and further reading
3.12 Self-study
3.13 Exercises
4. The Processor
4.1 Introduction
4.2 Logic design conventions
4.3 Building a datapath
4.4 A simple implementation scheme
4.5 A multicycle implementation
4.6 An overview of pipelining
4.7 Pipelined datapath and control
4.8 Data hazards: Forwarding versus stalling
4.9 Control hazards
4.10 Exceptions
4.11 Parallelism via instructions
4.12 Putting it all together: The Intel Core i7 6700 and ARM Cortex-A53 (on the website: Real stuff: The ARM Cortex-A8 and Intel Core i7 pipelines)
4.13 Going faster: Instruction-level parallelism and matrix multiply
4.14 Advanced topic: An introduction to digital design using a hardware design language to describe and model a pipeline, and more pipelining illustrations (on the website)
4.15 Fallacies and pitfalls
4.16 Concluding remarks
4.17 Historical perspective and further reading
4.18 Self-study
4.19 Exercises
5. Memory Hierarchy
5.1 Introduction
5.2 Memory technologies
5.3 The basics of caches
5.4 Measuring and improving cache performance
5.5 Dependable memory hierarchy
5.6 Virtual machines
5.7 Virtual memory
5.8 A common framework for memory hierarchy
5.9 Using a finite-state machine to control a simple cache
5.10 Parallelism and memory hierarchies: Cache coherence
5.11 Parallelism and memory hierarchy: Redundant arrays of inexpensive disks
5.12 Advanced material: Implementing cache controllers
5.13 Real stuff: The ARM Cortex-A8 and Intel Core i7 memory hierarchies
5.14 Going faster: Cache blocking and matrix multiply
5.15 Fallacies and pitfalls
5.16 Concluding remarks
5.17 Historical perspective and further reading
5.18 Self-study
5.19 Exercises
6. Parallel Processors
6.1 Introduction
6.2 The difficulty of creating parallel processing programs
6.3 SISD, MIMD, SIMD, SPMD, and vector
6.4 Hardware multithreading
6.5 Multicore and other shared memory multiprocessors
6.6 Introduction to graphics processing units
6.7 Domain-specific architectures
6.8 Clusters, warehouse scale computers, and other message-passing multiprocessors
6.9 Introduction to multiprocessor network topologies
6.10 Communicating to the outside world: Cluster networking
6.11 Multiprocessor benchmarks and performance models
6.12 Real stuff: Google TPUv3 and NVIDIA Volta
6.13 Going faster: Multiple processors and matrix multiply
6.14 Fallacies and pitfalls
6.15 Concluding remarks
6.16 Historical perspective and further reading
6.17 Self-study
7. Appendix A
7.1 Introduction
7.2 Assemblers
7.3 Linkers
7.4 Loading
7.5 Memory usage
7.6 Procedure call convention
7.7 Exceptions and interrupts
7.8 Input and output
7.9 SPIM
7.10 MIPS R2000 assembly language
7.11 Concluding remarks
7.12 Exercises
8. Appendix B
8.1 Introduction
8.2 Gates, truth tables, and logic equations
8.3 Combinational logic
8.4 Using a hardware description language
8.5 Constructing a basic arithmetic logic unit
8.6 Faster addition: Carry lookahead
8.7 Clocks
8.8 Memory elements: Flip-flops, latches, and registers
8.9 Memory elements: SRAMs and DRAMs
8.10 Finite-state machines
8.11 Timing methodologies
8.12 Field-programmable devices
8.13 Concluding remarks
8.14 Exercises
9. Appendix C
9.1 Introduction
9.2 GPU system architectures
9.3 Programming GPUs
9.4 Multithreaded multiprocessor architecture
9.5 Parallel memory system
9.6 Floating point arithmetic
9.7 Real stuff: The NVIDIA GeForce 8800
9.8 Real stuff: Mapping applications to GPUs
9.9 Fallacies and pitfalls
9.10 Concluding remarks
9.11 Historical perspective and further reading
10. Appendix D
10.1 Introduction
10.2 Implementing combinational control units
10.3 Implementing finite-state machine control
10.4 Implementing the next-state function with a sequencer
10.5 Translating a microprogram to hardware
10.6 Concluding remarks
10.7 Exercises
11. Appendix E
11.1 Introduction
11.2 A Survey of RISC Architectures for Desktop, Server, and Embedded Computers
11.3 The Intel 80x86
11.4 The VAX Architecture
11.5 The IBM 360/370 Architecture for Mainframe Computers
11.6 Historical Perspective and References
The zyBooks version of the most comprehensive introduction to this core topic provides a powerful interactive learning environment.
The Computer Organization and Design (MIPS) zyVersion contains the complete 6th edition text of Patterson and Hennessy’s classic book, enhanced with new interactive animations and questions to help students learn faster and more effectively.
- Presents fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, parallelism, optimization techniques, and memory hierarchy
- Features the MIPS processor core
- Built-in simulator lets students write code, set up memory, and add values right in the zyBook (a short MIPS example follows this list)
- Built-in labs can be used as-is, or instructors can customize or create their own
- Over 300 auto-graded learning questions with immediate feedback
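For instance, here is a minimal sketch of the kind of MIPS assembly a student might write and run in the built-in simulator: it loads two values from memory, adds them, and stores the sum back. The labels and values are illustrative, not taken from a specific zyBook lab.

```
        .data
x:      .word 5              # first operand
y:      .word 7              # second operand
sum:    .word 0              # room for the result

        .text
        .globl main
main:   lw   $t0, x          # $t0 = contents of x (5)
        lw   $t1, y          # $t1 = contents of y (7)
        add  $t2, $t0, $t1   # $t2 = $t0 + $t1 = 12
        sw   $t2, sum        # write the sum back to memory
```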
Animated learning activity from this zyBook
What is a zyVersion?
zyVersions are leading print titles converted and adapted to zyBooks’ interactive learning platform, allowing for a quick and easy transition to an engaging digital experience for instructors and students.
zyBooks’ web-native content helps students visualize concepts to learn faster and more effectively than with a traditional textbook. (Check out our research.)
This zyVersion of Computer Organization and Design (MIPS) benefits both students and instructors:
- Instructor Benefits
  - Customize your course by reorganizing existing content or adding your own
  - Continuous publication model updates your course with the latest content and technologies
  - Robust reporting gives you insight into students' progress, reading, and participation
  - Animations and other interactive content can be shown in presentation mode and used during a lecture
- Student Benefits
  - Learning questions and other content serve as an interactive form of reading
  - Instant feedback on labs and homework
  - Concepts come to life through extensive animations embedded in the interactive content
  - Save chapters as PDFs to reference the material at any time
Authors
David A. Patterson
University of California, Berkeley
John L. Hennessy
Stanford University