Computer Architecture (Architecture des ordinateurs)
This course introduces the students to basic von Neumann machine organization, instruction set architecture, assembly language programming, processor microarchitecture, memory hierarchies and virtual memory. The students will learn to implement a multi-cycle MIPS-like processor on FPGAs using a hardware description language and run assembly language programs on the processor.
This course serves as an introduction to concurrent programming. It introduces the students to basic concepts of concurrency, thread-level parallelism, basic multi-core architecture and memory hierarchies, and data-level parallelism. The students will learn concurrent programming in Java using locks and monitors. They will also learn CUDA programming.
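To illustrate the kind of monitor-based synchronization the course covers, here is a minimal sketch of a thread-safe counter in Java. The class name and iteration counts are illustrative, not taken from any course material.

```java
// A minimal monitor-style thread-safe counter. In Java, every object
// can serve as a monitor: a synchronized method acquires the object's
// intrinsic lock, so at most one thread runs inside it at a time.
public class SafeCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;  // read-modify-write, now atomic with respect to other threads
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // With the lock, no updates are lost: always prints 200000.
        System.out.println(c.get());
    }
}
```

Without `synchronized`, the two threads' `count++` operations could interleave and lose updates, which is exactly the kind of race condition the course's lock-and-monitor material addresses.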
Introduction to Multiprocessor Architecture
This course builds upon its prerequisites (computer architecture, system-on-chip and concurrency) to provide the students with the foundations of multiprocessor architecture, the building block of all modern digital platforms, from embedded systems to supercomputers.
Advanced Multiprocessor Architecture
We have witnessed four decades of Moore's law (scaling chips in density) accompanied by Dennard scaling -- i.e., a reduction of voltages that would allow for chips with twice the capabilities but minimal to no increase in total power. Unfortunately, Dennard scaling has slowed down (starting in the mid-2000s), forcing designers to divide chips up into multiple processing cores. Therefore, from now on multiprocessors will be the de facto building blocks for all computers. This course will build upon the basic concepts offered in Computer Architecture I & II to cover modern design techniques as well as projections of designs into the next decade. We will cover topics as diverse as modern memory hierarchies and interconnects, GPUs, and energy efficiency in datacenters and cloud computing platforms.
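The power argument behind Dennard scaling can be sketched with the standard dynamic-power formula (a textbook derivation, not specific to this course):

```latex
% Dynamic power of a switching transistor:
P_{\text{dyn}} = \alpha \, C \, V^2 \, f

% Under ideal (Dennard) scaling by a factor s:
% dimensions and capacitance shrink (C \to C/s), voltage drops (V \to V/s),
% and frequency rises (f \to s f), so per-transistor power becomes
P_{\text{dyn}}' = \alpha \, \frac{C}{s} \left(\frac{V}{s}\right)^{2} (s f)
               = \frac{\alpha \, C \, V^2 \, f}{s^{2}}

% Since area also shrinks by s^2, a chip with s^2 times as many
% transistors dissipates the same total power.
```

Once voltage stops scaling (as it did in the mid-2000s), the $1/s^2$ term disappears and power density rises with each generation, which is what forced the shift to multiple cores.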
Advanced Topics in Memory Systems
Device scaling in processor fabrication technologies along with microarchitectural innovation have led to a tremendous gap between processor and memory performance. While architects have primarily relied on deeper cache hierarchies to reduce this performance gap, the limited capacity in higher cache levels and simple data placement/eviction policies have resulted in diminishing returns for commercial workloads with large memory footprints and adverse access patterns. Device scaling has also led to both unprecedented power consumption levels (with no silver bullets in hand to mitigate it) and low circuit reliability. In this course, we will read the latest technical contributions to bridge the processor/memory performance gap, improve power efficiency in the memory system and increase memory reliability in future technologies.
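The processor/memory gap can be quantified with the classic average memory access time (AMAT) model. The latencies and miss rates below are illustrative assumptions, not measurements from any particular machine:

```java
// Back-of-the-envelope AMAT for a two-level cache hierarchy.
// AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * DRAM latency)
public class Amat {
    static double amat(double l1Hit, double l1MissRate,
                       double l2Hit, double l2MissRate, double memLat) {
        return l1Hit + l1MissRate * (l2Hit + l2MissRate * memLat);
    }

    public static void main(String[] args) {
        // Assumed latencies in cycles: L1 hit 4, L2 hit 12, DRAM 200.
        double good = amat(4, 0.05, 12, 0.20, 200); // well-behaved working set
        double poor = amat(4, 0.20, 12, 0.60, 200); // large footprint, poor locality
        System.out.printf("AMAT, good locality: %.1f cycles%n", good); // 6.6
        System.out.printf("AMAT, poor locality: %.1f cycles%n", poor); // 30.4
    }
}
```

The roughly 5x difference between the two scenarios shows why workloads with large memory footprints and adverse access patterns see diminishing returns from deeper cache hierarchies.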
Topics on Datacenter Design
Contemporary datacenters are warehouse-scale computers with tens of thousands of servers and multi-megawatt power budgets, running today's most demanding online services (e.g., Google search, Facebook, YouTube). In this course, students will survey a broad and comprehensive spectrum of design topics in modern datacenters, including infrastructure, server hardware, networking and storage technologies. Students will also understand the implications of warehouse-scale computing on software, including programming models, databases, file systems, and resource management. Finally, the course will cover a number of cross-cutting issues, such as quality-of-service, energy efficiency, and TCO (total cost of ownership).
Topics in Approximate Computing Systems
Diminishing improvements in transistor performance and energy efficiency mandate new ways to ameliorate overall system performance and energy efficiency. Fortunately, many emerging resource-hungry applications, such as image and video processing, computer vision, machine learning, and big data analytics, can tolerate imperfect results, which can be achieved through approximate computing. In this course, students will survey recent literature on exploiting approximate computing for better system efficiency, covering hardware, programming languages, compilers, runtime systems, and applications, and will come to understand the state of the art in the emerging area of approximate computing.
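As a concrete flavor of the software side of this area, here is a sketch of loop perforation, one well-known approximation technique: skip a fraction of loop iterations and accept a slightly inexact answer in exchange for less work. The class and method names are illustrative.

```java
// Loop perforation: compute the mean of an array either exactly,
// or approximately by sampling only every 'stride'-th element.
public class Perforation {
    // Exact mean: visits every element.
    static double exactMean(double[] a) {
        double sum = 0;
        for (double v : a) sum += v;
        return sum / a.length;
    }

    // Perforated mean: visits ~1/stride of the elements, so it does
    // proportionally less work at the cost of some accuracy.
    static double approxMean(double[] a, int stride) {
        double sum = 0;
        int sampled = 0;
        for (int i = 0; i < a.length; i += stride) {
            sum += a[i];
            sampled++;
        }
        return sum / sampled;
    }

    public static void main(String[] args) {
        double[] a = new double[1000];
        for (int i = 0; i < a.length; i++) a[i] = Math.sin(0.01 * i);
        double exact = exactMean(a);
        double approx = approxMean(a, 4); // roughly 4x fewer iterations
        System.out.printf("exact=%.4f approx=%.4f%n", exact, approx);
    }
}
```

For error-tolerant kernels like the image, vision, and analytics workloads mentioned above, small deviations of this kind are often imperceptible in the final output, which is what makes the accuracy-for-efficiency trade worthwhile.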