Fundamentals of Computer Architecture
Offered By: EIT Digital via Coursera
Course Description
Overview
This course introduces learners to the fundamentals of computer architecture. After completing this course, students will have a basic knowledge of:
• Computer Performance and Benchmarks
• Summarizing Performance
• Amdahl’s law
• Introduction to Embedded Systems
Learning Outcome:
• After completing this course, learners will have the tools to evaluate different computer architectures as well as the software executing on them.
• Learners will gain knowledge of modern microprocessors and the design techniques used to increase their performance.
Skills Gained:
• Basic skills to evaluate the performance of computer systems
Syllabus
Introduction
This week we first present a definition of computer architecture and the overall objectives of this specialization. Then we will learn how to measure and summarize performance, and about Amdahl's famous law. Finally we will give an introduction to embedded systems.
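As a concrete illustration of Amdahl's law (the exact treatment is covered in the lectures; the numbers below are made-up placeholders, not course data), this short C sketch computes the overall speedup when only a fraction of the execution time is improved.

#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction f of the execution time
   is sped up by a factor s and the remaining (1 - f) is unchanged:
   speedup = 1 / ((1 - f) + f / s)                                      */
static double amdahl_speedup(double f, double s)
{
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void)
{
    /* Hypothetical example: 80% of the runtime is enhanced 10x. */
    double f = 0.80, s = 10.0;
    printf("Overall speedup: %.2fx\n", amdahl_speedup(f, s)); /* ~3.57x */
    return 0;
}

Note how the unimproved 20% caps the achievable speedup well below 10x, which is the central point of the law.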
ISA Design and MIPS64
The set of instructions supported by a processor is called its Instruction Set Architecture (ISA). This week we will learn the MIPS64 ISA, which will be used for code examples throughout this specialization. We will also learn some basic code optimizations that reduce the number of instructions.
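As a rough illustration of the kind of instruction-count reduction mentioned above (a hedged C sketch, not the course's own MIPS64 examples), the fragment below hoists a loop-invariant computation out of a loop so it executes once instead of on every iteration.

/* Before: the product limit * scale is recomputed on every iteration,
   adding instructions to the loop body.                               */
void scale_before(int *a, int n, int limit, int scale)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] + limit * scale;   /* multiply repeated n times */
}

/* After: loop-invariant code motion moves the multiply out of the loop,
   reducing the dynamic instruction count.                              */
void scale_after(int *a, int n, int limit, int scale)
{
    int bias = limit * scale;          /* computed once */
    for (int i = 0; i < n; i++)
        a[i] = a[i] + bias;
}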
Review of Pipelining
This week we will learn about pipelining, which is a technique that overlaps the execution of several instructions. Pipelining is a key implementation technique to make CPUs fast. Using the canonical 5-stage pipeline for illustration, we will learn about pipelining hurdles called hazards and how they can be solved.
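For intuition (a minimal made-up sketch, not taken from the course material), the two dependent statements below compile to back-to-back instructions in which the second reads the result the first produces. In a classic 5-stage pipeline this read-after-write dependence is a data hazard, typically resolved by forwarding or, failing that, a stall.

#include <stdio.h>

int main(void)
{
    int x = 3, y = 4;

    /* 'doubled' consumes 'sum' immediately after it is produced:
       a read-after-write (RAW) dependence, i.e. a data hazard in
       a simple 5-stage pipeline.                                  */
    int sum     = x + y;
    int doubled = sum * 2;

    printf("%d\n", doubled);
    return 0;
}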
Multicycle Operations and Pipeline Scheduling
This week we extend the canonical 5-stage pipeline with multicycle operations, that is, operations that require multiple cycles to execute. We then learn how instructions can be scheduled to reduce the number of pipeline stalls.
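The hypothetical C sketch below (not from the course) shows the idea behind scheduling at the source level: moving an independent operation between a load and its first use gives the pipeline useful work instead of a stall cycle. In practice the compiler or hardware performs this reordering on the instruction stream.

/* Unscheduled: the loaded value is used on the very next line, which at
   the instruction level is a load immediately followed by its consumer,
   i.e. a load-use stall in a simple pipeline.                           */
int unscheduled(const int *p, int a, int b)
{
    int v = *p;        /* load                                 */
    int r = v + 1;     /* uses v right away: potential stall   */
    int s = a * b;     /* independent work, done too late      */
    return r + s;
}

/* Scheduled: the independent multiply is moved between the load and its
   use, filling the slot that would otherwise be a stall cycle.          */
int scheduled(const int *p, int a, int b)
{
    int v = *p;        /* load                                  */
    int s = a * b;     /* independent work fills the load delay */
    int r = v + 1;     /* v is available (or forwarded) by now  */
    return r + s;
}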
Cache Basics
To bridge the gap between processor speed and memory speed, modern processors employ caches. Caches are high-speed memories that contain recently used code and data. This week we will learn the basics of caches (how they are organized, how data is found in the cache, etc.). In addition, we will learn the average memory access time (AMAT) equation as well as 5 basic cache optimizations that aim at reducing the AMAT.
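To make the AMAT equation concrete (illustrative numbers only, not course data), the following C sketch evaluates AMAT = hit time + miss rate * miss penalty for a single cache level.

#include <stdio.h>

/* Average memory access time for one cache level, all times in cycles:
   AMAT = hit_time + miss_rate * miss_penalty                           */
static double amat(double hit_time, double miss_rate, double miss_penalty)
{
    return hit_time + miss_rate * miss_penalty;
}

int main(void)
{
    /* Hypothetical cache: 1-cycle hit, 5% miss rate, 100-cycle penalty. */
    printf("AMAT = %.2f cycles\n", amat(1.0, 0.05, 100.0)); /* 6.00 */
    return 0;
}

The basic cache optimizations discussed this week each attack one of the three terms: hit time, miss rate, or miss penalty.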
Taught by
Juha Plosila
Related Courses
• ABC du langage C (Institut Mines-Télécom via France Université Numerique)
• Abstraction, Problem Decomposition, and Functions (University of Colorado System via Coursera)
• Advanced Data Structures in Java (University of California, San Diego via Coursera)
• Advanced React (Meta via Coursera)
• Testing with Agile (University of Virginia via Coursera)