Computer Organization and Design – ARM Edition
David A. Patterson | John L. Hennessy
zyBooks 2017

Table of Contents

1. Computer Abstractions and Technology
1.1 Introduction
1.2 Eight great ideas in computer architecture
1.3 Below your program
1.4 Under the covers
1.5 Technologies for building processors and memory
1.6 Performance
1.7 The power wall
1.8 The sea change: The switch from uniprocessors to multiprocessors
1.9 Real stuff: Benchmarking the Intel Core i7
1.10 Fallacies and pitfalls
1.11 Concluding remarks
1.12 Historical perspective and further reading
1.13 Exercises

2. Instructions
2.1 Introduction
2.2 Operations of the computer hardware
2.3 Operands of the computer hardware
2.4 Signed and unsigned numbers
2.5 Representing instructions in the computer
2.6 Logical operations
2.7 Instructions for making decisions
2.8 Supporting procedures in computer hardware
2.9 Communicating with people
2.10 LEGv8 addressing for wide immediates and addresses
2.11 Parallelism and instructions: Synchronization
2.12 Translating and starting a program
2.13 A C sort example to put it all together
2.14 Arrays versus pointers
2.15 Advanced material: Compiling C and interpreting Java
2.16 Real stuff: MIPS instructions
2.17 Real stuff: ARMv7 (32-bit) instructions
2.18 Real stuff: x86 instructions
2.19 Real stuff: The rest of the ARMv8 instruction set
2.20 Fallacies and pitfalls
2.21 Concluding remarks
2.22 Historical perspective and further reading
2.23 Exercises

3. Arithmetic for Computers
3.1 Introduction
3.2 Addition and subtraction
3.3 Multiplication
3.4 Division
3.5 Floating point
3.6 Parallelism and computer arithmetic: Subword parallelism
3.7 Real stuff: Streaming SIMD extensions and advanced vector extensions in x86
3.8 Real stuff: The rest of the ARMv8 arithmetic instructions
3.9 Going faster: Subword parallelism and matrix multiply
3.10 Fallacies and pitfalls
3.11 Concluding remarks
3.12 Historical perspective and further reading
3.13 Exercises

4. The Processor
4.1 Introduction
4.2 Logic design conventions
4.3 Building a datapath
4.4 A simple implementation scheme
4.5 An overview of pipelining
4.6 Pipelined datapath and control
4.7 Data hazards: Forwarding versus stalling
4.8 Control hazards
4.9 Exceptions
4.10 Parallelism via instructions
4.11 Real stuff: The ARM Cortex-A53 and Intel Core i7 pipelines
4.12 Going faster: Instruction-level parallelism and matrix multiply
4.13 Advanced topic: An introduction to digital design using a hardware design language to describe and model a pipeline and more pipelining illustrations
4.14 Fallacies and pitfalls
4.15 Concluding remarks
4.16 Historical perspective and further reading
4.17 Exercises

5. Memory Hierarchy
5.1 Introduction
5.2 Memory technologies
5.3 The basics of caches
5.4 Measuring and improving cache performance
5.5 Dependable memory hierarchy
5.6 Virtual machines
5.7 Virtual memory
5.8 A common framework for memory hierarchy
5.9 Using a finite-state machine to control a simple cache
5.10 Parallelism and memory hierarchies: Cache coherence
5.11 Parallelism and memory hierarchy: Redundant arrays of inexpensive disks
5.12 Advanced material: Implementing cache controllers
5.13 Real stuff: The ARM Cortex-A53 and Intel Core i7 memory hierarchies
5.14 Real stuff: The rest of the ARMv8 special instructions
5.15 Going faster: Cache blocking and matrix multiply
5.16 Fallacies and pitfalls
5.17 Concluding remarks
5.18 Historical perspective and further reading
5.19 Exercises

6. Parallel Processors
6.1 Introduction
6.2 The difficulty of creating parallel processing programs
6.3 SISD, MIMD, SIMD, SPMD, and vector
6.4 Hardware multithreading
6.5 Multicore and other shared memory multiprocessors
6.6 Introduction to graphics processing units
6.7 Clusters, warehouse scale computers, and other message-passing multiprocessors
6.8 Introduction to multiprocessor network topologies
6.9 Communicating to the outside world: Cluster networking
6.10 Multiprocessor benchmarks and performance models
6.11 Real stuff: Benchmarking Intel Core i7 960 versus NVIDIA Tesla GPU
6.12 Going faster: Multiple processors and matrix multiply
6.13 Fallacies and pitfalls
6.14 Concluding remarks
6.15 Historical perspective and further reading
6.16 Exercises

7. Appendix A: The Basics of Logic Design
7.1 Introduction
7.2 Gates, truth tables, and logic equations
7.3 Combinational logic
7.4 Using a hardware description language
7.5 Constructing a basic arithmetic logic unit
7.6 Faster addition: Carry lookahead
7.7 Clocks
7.8 Memory elements: Flip-flops, latches, and registers
7.9 Memory elements: SRAMs and DRAMs
7.10 Finite-state machines
7.11 Timing methodologies
7.12 Field programmable devices
7.13 Concluding remarks
7.14 Exercises

8. Appendix B: Graphics and Computing GPUs
8.1 Introduction
8.2 GPU system architectures
8.3 Programming GPUs
8.4 Multithreaded multiprocessor architecture
8.5 Parallel memory system
8.6 Floating point arithmetic
8.7 Real stuff: The NVIDIA GeForce 8800
8.8 Real stuff: Mapping applications to GPUs
8.9 Fallacies and pitfalls
8.10 Concluding remarks
8.11 Historical perspective and further reading

9. Appendix C: Mapping Control to Hardware
9.1 Introduction
9.2 Implementing combinational control units
9.3 Implementing finite-state machine control
9.4 Implementing the next-state function with a sequencer
9.5 Translating a microprogram to hardware
9.6 Concluding remarks
9.7 Exercises

10. Appendix D: A Survey of RISC Architectures for Desktop, Server, and Embedded Computers
10.1 Introduction
10.2 Addressing modes and instruction formats
10.3 Instructions: The MIPS core subset
10.4 Instructions: Multimedia extensions of the desktop/server RISCs
10.5 Instructions: Digital signal-processing extensions of the embedded RISCs
10.6 Instructions: Common extensions to MIPS core
10.7 Instructions unique to MIPS-64
10.8 Instructions unique to Alpha
10.9 Instructions unique to SPARC v9
10.10 Instructions unique to PowerPC
10.11 Instructions unique to PA-RISC 2.0
10.12 Instructions unique to ARM
10.13 Instructions unique to Thumb
10.14 Instructions unique to SuperH
10.15 Instructions unique to M32R
10.16 Instructions unique to MIPS-16
10.17 Concluding remarks