CMU Online
Publicly Accessible CMU Courses
 Machine Learning
 Computer Systems
 Computer Science Theory
 Programming Language Theory
 Computer Graphics
 Robotics
 Mathematics and Statistics
Education and knowledge should be accessible to anyone who is willing to learn. Many great course offerings from the top-ranked Computer Science program at Carnegie Mellon University have lectures that are publicly available online. The purpose of this page is to curate these courses so that people can self-study and learn from some of the best professors in the world without the hefty college price tag. I hope that this resource will be helpful for people currently in industry aiming to improve their knowledge, current students making course decisions or doing self-learning, and even prospective students who want a taste of what a CMU education looks like. In addition to courses, I have also included links to weekly seminar series held by various research groups, for people interested in getting closer to state-of-the-art developments.
Courses which are accessible to undergraduates will have a tag. This does not mean that the difficulty is unsuitable for graduate students; on the contrary, almost all such classes are cross-listed as graduate courses and are commonly taken by graduate students at CMU. There are also some graduate classes (based on the course number) which are suitable for advanced undergraduates; I have marked these as undergraduate as well. Classes that require significant background and are less accessible will have a tag.
Some courses are only publicly available on Panopto instead of YouTube, in which case I have provided a link to the Panopto playlist. Don't feel too sad if this is the case: Panopto can be more convenient than YouTube thanks to the ability to customize the speaker and presentation views.
Within each category, the courses are not ordered by difficulty or by the order in which they should be taken. Consider the ordering arbitrary.
Please let me know if there are any dead links, or if you spot an error on the page. In addition, if there are courses that I missed which might be useful for others, you can either modify the relevant data file here and make a pull request, or let me know via email. I would very much appreciate it!
The list of publicly available courses is heavily skewed by which departments or faculty members make their lecture recordings public, and therefore omits many classes that Computer Science majors at CMU tend to take. In particular, courses from the following departments in the School of Computer Science are not present: Computational Biology, Human-Computer Interaction, and the Institute for Software Research.
The content on this page is personally curated; it does not reflect the views of CMU or SCS and is not endorsed by either party.
Machine Learning
The machine learning courses in this section generally assume working knowledge of probability, statistics, calculus, and linear algebra. 10701 Introduction to Machine Learning or 10715 Advanced Introduction to Machine Learning is the recommended prerequisite for most of these courses if you have not taken any prior machine learning classes. Both of these introductory courses are also available in the table below.
Course Name  Lecture Videos 

10414/714 Deep Learning Systems, Algorithms and Implementation (Fall 2022)
Instructors: Zico Kolter, Tianqi Chen. Deep learning methods have revolutionized a number of fields in artificial intelligence and machine learning in recent years. The widespread adoption of deep learning methods has in no small part been driven by the widespread availability of easy-to-use deep learning systems, such as PyTorch and TensorFlow. But despite their widespread availability and use, it is much less common for students to get involved with the internals of these libraries and to understand how they function at a fundamental level. Understanding these libraries deeply will help you make better use of their functionality, and enable you to develop or extend them when needed to fit your own custom use cases in deep learning. The goal of this course is to provide students with an understanding and overview of the "full stack" of deep learning systems, ranging from the high-level modeling design of modern deep learning systems, to the basic implementation of automatic differentiation tools, to the underlying device-level implementation of efficient algorithms. Throughout the course, students will design and build from scratch a complete deep learning library, capable of efficient GPU-based operations, automatic differentiation of all implemented functions, and the necessary modules to support parameterized layers, loss functions, data loaders, and optimizers. Using these tools, students will then build several state-of-the-art modeling methods, including convolutional networks for image classification and segmentation, recurrent networks and self-attention models for sequential tasks such as language modeling, and generative models for image generation.
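To make the automatic differentiation tools mentioned above concrete, here is a minimal reverse-mode autodiff sketch in Python. This is not course material; the `Value` class and its methods are illustrative names, and real libraries like PyTorch are vastly more general:

```python
class Value:
    """A scalar node in a computation graph, tracking its value and gradient."""
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # upstream Value nodes
        self._local_grads = local_grads  # d(this node)/d(parent), per parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            for p, g in zip(v._parents, v._local_grads):
                p.grad += g * v.grad

# For z = x*y + x:  dz/dx = y + 1 = 4,  dz/dy = x = 2
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
```

The same gradient-accumulation pattern, generalized to tensors and many more operators, is essentially what the course has students build.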

11711 Advanced NLP (Fall 2022)
Instructors: Graham Neubig, Robert E. Frederking. Natural language processing technology attempts to model human language with computers, tackling a wide variety of problems from automatic translation to question answering. CS11711 Advanced Natural Language Processing (at Carnegie Mellon University's Language Technologies Institute) is an introductory graduate-level course on natural language processing aimed at students who are interested in doing cutting-edge research in the field. It describes fundamental tasks in natural language processing, such as syntactic, semantic, and discourse analysis, as well as methods to solve these tasks. The course focuses on modern methods using neural networks and covers the basic modeling and learning algorithms they require. The class culminates in a project in which students attempt to reimplement and improve upon a research paper on a topic of their choosing.

10708 Probabilistic Graphical Models (Spring 2020)
Instructor: Eric P. Xing. Many problems in artificial intelligence, statistics, computer systems, computer vision, natural language processing, and computational biology, among many other fields, can be viewed as the search for a coherent global conclusion from local information. The probabilistic graphical models framework provides a unified view of this wide range of problems, enabling efficient inference, decision-making, and learning in problems with a very large number of attributes and huge datasets. This graduate-level course will provide you with a strong foundation both for applying graphical models to complex problems and for addressing core research topics in graphical models. The class will cover classical families of undirected and directed graphical models (i.e., Markov random fields and Bayesian networks), modern deep generative models, as well as topics in graph neural networks and causal inference. It will also cover the necessary algorithmic toolkit, including variational inference and Markov chain Monte Carlo methods.

10725 Convex Optimization (Fall 2022)
Instructor: Yuanzhi Li. Convex optimization has been an extremely useful tool in machine learning, capable of efficiently solving many real-world problems such as linear regression, logistic regression, linear programming, and semidefinite programming. However, in modern machine learning, as non-convex neural networks have grown to be the dominant models in the field, principles developed in traditional convex optimization are becoming insufficient. In these lectures, you will see a convex optimization course that hasn't been taught (or close to being taught) anywhere else: this newly designed course spans classic topics in convex optimization such as gradient descent, stochastic gradient descent, mirror descent, accelerated gradient descent, and interior point methods, and makes connections all the way to optimization topics in deep learning, such as the neural tangent kernel, scheduled learning rates, momentum, batch normalization, and linear reinforcement learning. By the end of the course, you will have a sense of the spirit of convex optimization, and of how it can still be applied to other real-world problems that are not convex at all.
Public Panopto Link 
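As a tiny taste of the classic material covered in courses like this (not taken from the lectures), plain gradient descent on a one-dimensional convex function; the objective f(w) = (w - 3)^2 and step size are arbitrary choices for illustration:

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient.
# grad(w) = f'(w); for this f, f'(w) = 2 * (w - 3), with minimizer w = 3.
def grad_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)   # step in the direction of steepest descent
    return w

w_star = grad_descent(lambda w: 2 * (w - 3), w0=0.0)
```

For a smooth, strongly convex objective like this one, the iterates contract toward the minimizer geometrically, which is exactly the kind of convergence guarantee such courses prove.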
10701 Introduction to Machine Learning (Fall 2020)
Instructors: Ziv Bar-Joseph, Eric P. Xing. Machine learning is concerned with computer programs that automatically improve their performance through experience (e.g., programs that learn to recognize human faces, recommend music and movies, and drive autonomous robots). This course covers the theory and practical algorithms of machine learning from a variety of perspectives. We cover topics such as linear regression, SVMs, neural networks, graphical models, and clustering. Programming assignments include hands-on experiments with various learning algorithms. This course is designed to give a PhD-level student a thorough grounding in the methodologies, technologies, mathematics, and algorithms currently needed by people who do research in machine learning.

10715 Advanced Introduction to Machine Learning (Fall 2015)
Instructors: Barnabas Poczos, Alex Smola. The rapid improvement of sensory techniques and processor speed, and the availability of inexpensive massive digital storage, have led to a growing demand for systems that can automatically comprehend and mine massive and complex data from diverse sources. Machine learning is becoming the primary mechanism by which information is extracted from big data, and a primary pillar that artificial intelligence is built upon. This course is designed for PhD students whose primary field of study is machine learning, or who intend to make machine learning methodological research a main focus of their thesis. It will give students a thorough grounding in the algorithms, mathematics, theories, and insights needed to do in-depth research and applications in machine learning. The topics of this course will in part parallel those covered in the general graduate machine learning course (10701), but with a greater emphasis on depth in theory and algorithms. The course will also include additional advanced topics such as RKHS and representer theory, Bayesian nonparametrics, additional material on graphical models, manifolds and spectral graph theory, reinforcement learning and online learning, etc.

11485/785 Introduction to Deep Learning (Fall 2022)
Instructors: Bhiksha Raj, Rita Singh. "Deep learning" systems, typified by deep neural networks, are increasingly taking over AI tasks, ranging from language understanding, speech and image recognition, to machine translation, planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric desirable into a mandatory prerequisite in many advanced academic settings, and a large advantage in the industrial job market. In this course we will learn about the basics of deep neural networks and their applications to various AI tasks. By the end of the course, students are expected to have significant familiarity with the subject and to be able to apply deep learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and to extend their knowledge through further study.

10703 Deep Reinforcement Learning & Control (Fall 2022)
Instructor: Katerina Fragkiadaki. This course will cover the latest advances in reinforcement learning and imitation learning. This is a fast-developing research field, and an official textbook is available for only about one fourth of the course material; the rest will be taught from recent research papers. This course brings together many disciplines of artificial intelligence to show how to develop intelligent agents that learn to sense the world and learn to act, either by imitating others or by maximizing sparse rewards. Particular focus will be given to incorporating visual sensory input and learning suitable visual state representations.
Public Panopto Link 
11777 Multimodal Machine Learning (Fall 2020)
Instructor: Louis-Philippe Morency. Multimodal machine learning (MMML) is a vibrant multi-disciplinary research field which addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic, and visual messages. With the initial research on audio-visual speech recognition, and more recently with language & vision projects such as image and video captioning, this research field brings some unique challenges for multimodal researchers, given the heterogeneity of the data and the contingency often found between modalities. This course will teach fundamental mathematical concepts related to MMML, including multimodal alignment and fusion, heterogeneous representation learning, and multi-stream temporal modeling. We will also review recent papers describing state-of-the-art probabilistic models and computational algorithms for MMML and discuss current and upcoming challenges. The course will present the fundamental mathematical concepts in machine learning and deep learning relevant to the five main challenges in multimodal machine learning: (1) multimodal representation learning, (2) translation and mapping, (3) modality alignment, (4) multimodal fusion, and (5) co-learning. These include, but are not limited to, multimodal autoencoders, deep canonical correlation analysis, multi-kernel learning, attention models, and multimodal recurrent neural networks. The course will also discuss many recent applications of MMML, including multimodal affect recognition, image and video captioning, and cross-modal multimedia retrieval.

10605/805 Machine Learning with Large Datasets (Fall 2016)
Instructor: William Cohen. Large datasets are difficult to work with for several reasons. They are difficult to visualize, and it is difficult to understand what sort of errors and biases are present in them. They are computationally expensive to process, and the cost of learning is often hard to predict: for instance, an algorithm that runs quickly on a dataset that fits in memory may be exorbitantly expensive when the dataset is too large for memory. Large datasets may also display qualitatively different behavior in terms of which learning methods produce the most accurate predictions. This course is intended to provide students with practical knowledge of, and experience with, the issues involved in large datasets. Among the issues considered are: scalable learning techniques, such as streaming machine learning; parallel infrastructures such as MapReduce; practical techniques for reducing the memory requirements of learning methods, such as feature hashing and Bloom filters; and techniques for analyzing programs in terms of memory, disk usage, and (for parallel methods) communication complexity.
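For a flavor of the memory-reduction techniques named above, here is a minimal Bloom filter sketch in plain Python. This is illustrative only, not the course's implementation; the class name and parameters are made up, and the hashing scheme (salted SHA-256) is one arbitrary choice among many:

```python
import hashlib

class BloomFilter:
    """Space-efficient approximate set membership: membership queries can
    give false positives but never false negatives."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indexes(self, item):
        # Derive num_hashes bit positions by salting one hash function.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        # If any of the item's bits is unset, the item was never added.
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
for word in ["cat", "dog", "fish"]:
    bf.add(word)
```

The appeal for large datasets is that the memory cost is a fixed bit array, independent of how large the stored items themselves are.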

15388 Practical Data Science (Fall 2019)
Instructor: Zico Kolter. Data science is the study and practice of how we can extract insight and knowledge from large amounts of data. It is a burgeoning field, currently attracting substantial demand from both academia and industry. This course provides a practical introduction to the "full stack" of data science analysis, including data collection and processing, data visualization and presentation, statistical model building using machine learning, and big data techniques for scaling these methods. Topics covered include: collecting and processing data using relational methods, time series approaches, graph and network models, free text analysis, and spatial geographic methods; analyzing the data using a variety of statistical and machine learning methods, including linear and nonlinear regression and classification, unsupervised learning and anomaly detection, plus advanced machine learning methods such as kernel approaches, boosting, and deep learning; visualizing and presenting data, particularly focusing on the case of high-dimensional data; and applying these methods to big data settings, where multiple machines and distributed computation are needed to fully leverage the data.
Public Panopto Link 
15780 Graduate Artificial Intelligence (Spring 2014)
Instructors: Zico Kolter, Zachary B. Rubinstein

CMU AI Seminar (Ongoing)
This seminar aims to cover a wide variety of AI topics, such as computer vision, natural language processing (NLP), theoretical ML, AI fairness & ethics, cognitive science, AI hardware, etc. 
Link to YouTube Channel
Computer Systems
15213 Introduction to Computer Systems is a required course for all CS majors at CMU, and is the prerequisite for all subsequent systems classes. It is a good place to start if you are new to computer systems. At most other universities, this class is usually referred to as an operating systems class.
Course Name  Lecture Videos 

15213/513 Introduction to Computer Systems (Fall 2015)
Instructors: Randal E. Bryant, David R. O'Hallaron. The Introduction to Computer Systems course provides a programmer's view of how computer systems execute programs, store information, and communicate. It enables students to become more effective programmers, especially in dealing with issues of performance, portability, and robustness. It also serves as a foundation for courses on compilers, networks, operating systems, and computer architecture, where a deeper understanding of systems-level issues is required. Topics covered include: machine-level code and its generation by optimizing compilers, performance evaluation and optimization, computer arithmetic, memory organization and management, networking technology and protocols, and supporting concurrent computation.

15445/645 Database Systems (Fall 2022)
Instructor: Andy Pavlo. This course is on the design and implementation of database management systems. Topics include data models (relational, document, key/value), storage models (n-ary, decomposition), query languages (SQL, stored procedures), storage architectures (heaps, log-structured), indexing (order-preserving trees, hash tables), transaction processing (ACID, concurrency control), recovery (logging, checkpoints), query processing (joins, sorting, aggregation, optimization), and parallel architectures (multi-core, distributed). Case studies of open-source and commercial database systems are used to illustrate these techniques and trade-offs. The course is appropriate for students who are prepared to flex their strong systems programming skills.
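To illustrate the query-processing topics above (joins in particular) in miniature, here is a classic hash join sketched in Python. The relations, rows, and function name are invented for the example; a real DBMS implements this over pages and buffers, not Python dicts:

```python
# Hash join: build a hash table on one relation's join key,
# then probe it with each row of the other relation.
def hash_join(left, right, key):
    table = {}
    for row in left:                       # build phase
        table.setdefault(row[key], []).append(row)
    out = []
    for row in right:                      # probe phase
        for match in table.get(row[key], []):
            out.append({**match, **row})   # merge the matching rows
    return out

users  = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}]
orders = [{"id": 1, "item": "book"}, {"id": 1, "item": "pen"}]
joined = hash_join(users, orders, "id")
```

Building on the smaller relation keeps the hash table small, which is the standard heuristic; the course covers when sort-merge or nested-loop joins win instead.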

15721 Advanced Database Systems (Spring 2018)
Instructor: Andy Pavlo. This course is a comprehensive study of the internals of modern database management systems. It will cover the core concepts and fundamentals of the components that are used in both high-performance transaction processing systems (OLTP) and large-scale analytical systems (OLAP). The class will stress both the efficiency and the correctness of the implementation of these ideas. All class projects will be in the context of a real in-memory, multi-core database system. The course is appropriate for graduate students in software systems and for advanced undergraduates with strong systems programming skills.

15418/618 Parallel Computer Architecture and Programming (Spring 2016)
Instructors: Kayvon Fatahalian, Randal E. Bryant. From smartphones, to multi-core CPUs and GPUs, to the world's largest supercomputers, parallel processing is ubiquitous in modern computing. The goal of this course is to provide a deep understanding of the fundamental principles and engineering trade-offs involved in designing modern parallel computing systems, as well as to teach the parallel programming techniques necessary to effectively utilize these machines. Because writing good parallel programs requires an understanding of key machine performance characteristics, this course will cover hardware design and how it affects software design.
Public Panopto Link 
¡Databases! – A Database Seminar Series (Fall 2020)
A weekly seminar series on databases hosted by CMU, streamed on Zoom and open to the public. Each speaker presents the implementation details of their respective systems and examples of the technical challenges they faced when working with real-world customers. Similar seminars may be held in the future; check their YouTube channel for the latest updates.
Computer Science Theory
15251 Great Ideas in Theoretical Computer Science is the introductory CS theory class taken by CS majors in their first year, and is a good place to start if you are new to CS theory. All subsequent theory classes assume knowledge from 15251.
Course Name  Lecture Videos 

15251 Great Ideas in Theoretical Computer Science (Spring 2016)
Instructor: Ryan O'Donnell. This course is about how to use theoretical ideas to formulate and solve problems in computer science. It integrates mathematical material with general problem solving techniques and computer science applications. Examples are drawn from algorithms, complexity theory, game theory, probability theory, graph theory, automata theory, algebra, cryptography, and combinatorics. Assignments involve both mathematical proofs and programming.

15455 Undergraduate Complexity Theory (Spring 2017)
Instructor: Ryan O'Donnell. The course will overlap significantly with "Part 3: Complexity" of Sipser's textbook. Topics will include models of computation (Turing machines, circuits, ...); time and space complexity; hierarchy theorems by diagonalization; P vs. NP; theorems of Ladner, Mahaney, Savitch, and Immerman-Szelepcsényi; randomized computation; and the polynomial-time hierarchy. We will also cover some non-traditional topics of current research interest, such as the Exponential Time Hypothesis, Feige's hypothesis, and fine-grained complexity.

15459 Quantum Computation (Fall 2018)
Instructor: Ryan O'Donnell. This course will be an introduction to quantum computation and quantum information theory from the perspective of theoretical computer science. Topics include: qubit operations, multi-qubit systems, partial measurements, entanglement, quantum teleportation and quantum money, the quantum circuit model, the Deutsch-Jozsa and Simon's algorithms, number theory and Shor's algorithm, Grover's algorithm, quantum complexity theory, limitations, and current practical developments.
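Qubit operations like those listed above can be simulated directly as linear maps on amplitude vectors. A minimal single-qubit sketch in standard-library Python (illustrative only; real simulations use complex amplitudes and tensor products for multi-qubit systems):

```python
import math

# A single qubit as an amplitude vector [a, b] over the basis states |0>, |1>.
def hadamard(state):
    """Apply the Hadamard gate H = (1/sqrt(2)) * [[1, 1], [1, -1]]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def prob(state, outcome):
    """Measurement probability of observing |outcome> (Born rule)."""
    return abs(state[outcome]) ** 2

plus = hadamard([1.0, 0.0])   # H|0> = (|0> + |1>) / sqrt(2)
back = hadamard(plus)         # H is its own inverse, so this returns to |0>
```

Measuring `plus` gives 0 or 1 with equal probability, while applying H twice recovers |0> exactly: the interference that single-gate intuition is built on.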

15855 Graduate Complexity Theory (Fall 2017)
Instructor: Ryan O'Donnell. Recommended prerequisite: 15455 Undergraduate Complexity Theory. The overarching goal is for students to learn the foundational aspects of structural complexity theory developed in the period 1970–2010. Students should learn the foundational relationships between time, space, and nondeterminism in computational complexity. Students will learn about circuit classes, how they relate to uniform models of computation, tools for proving circuit lower bounds, and reasons why proving lower bounds is difficult. Students will be exposed to the pervasive role of randomness in complexity theory, even in the proof of purely deterministic results. By the end of the course, students will be able to study and contribute to cutting-edge research in computational complexity theory.

15751 Theory Toolkit (Spring 2020)
Instructor: Ryan O'Donnell. The goal of this class is to provide students with the tools needed to read and participate in contemporary research in theoretical computer science. The main focus will be on learning fundamental CS theory concepts and mathematical tools. Students should learn to apply basic techniques in asymptotic and probabilistic analysis. They should gain familiarity and comfort with a wide variety of tools in theoretical computer science, including analysis of Boolean functions, error-correcting codes, pseudorandomness, graph expansion, information theory, and communication complexity. They will learn applications of linear programming and semidefinite programming in constraint satisfaction problems, with a view from proof complexity. They will also be exposed to different computational intractability assumptions and learn how they are applied. A secondary focus of the course is developing good practices in mathematical writing and good research habits. A goal for the students will be to improve their facility with LaTeX and their ability to write clear, correct proofs.

15859GG Proofs vs. Algorithms: The Sum-of-Squares Method (Fall 2020)
Instructor: Pravesh Kothari. This seminar examines the sum-of-squares method, with an emphasis on recent developments and open directions for research. We will cover the use of this method in designing algorithms both for worst-case optimization problems such as Max-Cut and coloring, and for average-case problems arising in machine learning and algorithmic statistics, such as learning mixtures of Gaussians and robust statistics. We will also cover techniques for proving lower bounds on this method and its connections to restricted algorithmic frameworks such as the statistical query model, the low-degree likelihood ratio method, and extended formulations.

15859S Analysis of Boolean Functions (Fall 2012)
Instructor: Ryan O'Donnell. Boolean functions are perhaps the most basic object of study in theoretical computer science. They also arise in several other areas of mathematics, including combinatorics (graph theory, extremal combinatorics, additive combinatorics), metric and Banach spaces, statistical physics, and mathematical social choice. In this course we will study Boolean functions via their Fourier transform and other analytic methods. Highlights will include applications in property testing, social choice, learning theory, circuit complexity, pseudorandomness, constraint satisfaction problems, additive combinatorics, hypercontractivity, Gaussian geometry, random graph theory, and probabilistic invariance principles.
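As a small taste of the subject (not course material), the Fourier coefficients of a Boolean function on {-1, +1}^n can be computed by brute force. For 3-bit majority, the expansion is Maj3(x) = (x1 + x2 + x3)/2 - (x1 x2 x3)/2, which the code below recovers:

```python
import itertools

# Fourier coefficient f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ],
# averaging over the uniform distribution on {-1, +1}^n.
def fourier_coeff(f, n, S):
    total = 0
    for x in itertools.product([-1, 1], repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]        # the character chi_S(x)
        total += f(x) * chi
    return total / 2 ** n

# 3-bit majority: the sign of the sum (with +/-1 inputs, the sum is never 0).
def maj(x):
    return 1 if sum(x) > 0 else -1

c1 = fourier_coeff(maj, 3, [0])         # each singleton coefficient is 1/2
c123 = fourier_coeff(maj, 3, [0, 1, 2]) # the top coefficient is -1/2
```

The brute-force average takes 2^n evaluations; much of the course's power comes from reasoning about these coefficients without enumerating them.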

Theory Lunch (Ongoing)
Theory Lunch is an informal seminar run by the Algorithms and Complexity Theory Group at CMU. The meetings have various forms: talks on recently completed results, joint reading of an interesting paper, presentations of current work in progress, exciting open problems, etc. 
Link to YouTube Channel
Programming Language Theory
Unfortunately, there are not many publicly available programming language (PL) theory lectures. However, Robert Harper and Jan Hoffmann, staples of the PL scene at CMU, both frequently give lectures at the Oregon Programming Languages Summer School (OPLSS) that are much smaller in scope than what is covered in a comparable full-semester course at CMU, but still go through many key ideas. I have therefore decided to include these resources rather than have nothing, which would be a shame, as CMU is a powerhouse in PL after all.
Why are so many courses numbered 15819? This course number is a catch-all for advanced topics in PL theory, so while the number is shared, the content can vary.
15312 Foundations of Programming Languages is the introductory class of the PL track, and would be a good place to start. The other introductory PL class 15317 Constructive Logic unfortunately does not have any publicly available lectures.
Course Name  Lecture Videos 

15312 Foundations of Programming Languages (OPLSS Summer 2018)
Instructor: Jan Hoffmann. This course discusses in depth many of the concepts underlying the design, definition, implementation, and use of modern programming languages. Formal approaches to defining syntax and semantics are used to describe the fundamental concepts underlying programming languages. A variety of programming paradigms are covered, such as imperative, functional, logic, and concurrent programming. In addition to the formal studies, experience with programming in the languages is used to illustrate how different design goals can lead to radically different languages and models of computation.
Note that the videos in this playlist are from a similar set of lectures given at OPLSS, since no recordings of the CMU course are available.
15819 Homotopy Type Theory (Fall 2012)
Instructor: Robert Harper. This is a graduate research seminar on Homotopy Type Theory (HoTT), a recent enrichment of Intuitionistic Type Theory (ITT) to include "higher-dimensional" types. The dimensionality of a type refers to the structure of its paths, the constructive witnesses to the equality of pairs of elements of a type, which themselves form a type, the identity type. In general a type is infinite-dimensional in the sense that it exhibits non-trivial structure at all dimensions: it has elements, paths between elements, paths between paths, and so on to all finite levels. Moreover, the paths at each level exhibit the algebraic structure of a (higher) groupoid, meaning that there is always the "null path" witnessing reflexivity, the "inverse" path witnessing symmetry, and the "concatenation" of paths witnessing transitivity, such that group-like laws hold "up to higher homotopy". Specifically, there are higher-dimensional paths witnessing the associative, unital, and inverse laws for these operations. Altogether this means that a type is a weak ∞-groupoid.

15819 Resource Aware Programming Languages (OPLSS Summer 2019)
Instructor: Jan Hoffmann. Resource usage (the amount of time, memory, and energy a program requires for its execution) is one of the central subjects of computer science. Nevertheless, resource usage often does not play a central role in classical programming-language concepts such as operational semantics, type systems, and program logics. The first part of the course revisits these concepts to model and analyze the resource usage of programs in a compositional and mathematically precise way. The emphasis is on practical, type-based techniques that automatically inform programmers about the resource usage of their code. The second part of the course studies probabilistic programming languages. Such languages describe probability distributions and can be used to precisely describe and analyze probabilistic models. Probabilistic languages are, for instance, used in statistical modeling and to facilitate probabilistic inference. The focus here is on the semantics and analysis of probabilistic languages. Interestingly, the techniques from the first part can be applied to automatically analyze the expected cost and results of probabilistic programs. The third part of the course presents an application of automatic resource bound analysis in Nomos, a programming language for implementing digital contracts. Nomos is based on session types, which are a focus of this part. Moreover, blockchains and smart contracts are briefly covered to motivate the need for resource bounds.
Note that the videos in this playlist are from a similar set of lectures given at OPLSS, since no recordings of the CMU course are available.
15819 Computational Type Theory (OPLSS Summer 2018)
Instructor: Robert Harper. Type theory may be presented semantically, with types specifying the behavior of programs, or axiomatically, as a formal system admitting many interpretations. From a computer science perspective, the computational semantics is of the essence, and the role of formalisms is pragmatic: the means for writing programs and building proof checkers. A central tool in either case is the method of logical relations, or Tait's method. Semantically, a wide range of typing constructs can be expressed using logical relations. Formally, logical relations are fundamental to the study of properties of languages, such as decidability of typing or termination properties. In this course we will study a range of typing concepts using these methods, from simple typing concepts such as functions and data structures found in many programming languages, to polymorphic type theory, which accounts for abstract types, to dependent type theory, which accounts for mathematical proofs and the theory of program modules. Specific topics will vary according to the needs and interests of the students.
Note that the videos in the playlist are from a similar set of lectures given at OPLSS, since the CMU lectures were not recorded. 
15-816 Substructural Logics (OPLSS Summer 2017)
Instructor: Frank Pfenning This graduate course provides an introduction to substructural logics, such as linear, ordered, affine, bunched, or separation logic, with an emphasis on their applications in computer science. This includes the design and theory of programming constructs for concurrent message-passing computation and techniques for specifying and reasoning about programming languages in the form of substructural operational semantics. 
Note that the videos in the playlist are from a similar set of lectures given at OPLSS, since the CMU lectures were not recorded. 
Computer Graphics

15-462/662 Computer Graphics (Fall 2020)
Instructor: Keenan Crane This course provides a comprehensive introduction to computer graphics. It focuses on fundamental concepts and techniques, and their cross-cutting relationship to multiple problem domains in graphics (rendering, animation, geometry, imaging). Topics include: sampling, aliasing, interpolation, rasterization, geometric transformations, parameterization, visibility, compositing, filtering, convolution, curves & surfaces, geometric data structures, subdivision, meshing, spatial hierarchies, ray tracing, radiometry, reflectance, light fields, geometric optics, Monte Carlo rendering, importance sampling, camera models, high-performance ray tracing, differential equations, time integration, numerical differentiation, physically-based animation, optimization, numerical linear algebra, inverse kinematics, Fourier methods, data fitting, and example-based synthesis. 
For whatever reason I'm unable to embed this playlist directly, so here's a direct link: https://www.youtube.com/playlist?list=PL9_jI1bdZmz2emSh0UQ5iOdT2xRHFHL7E 
15-458/858 Discrete Differential Geometry (Spring 2021)
Instructor: Keenan Crane This course focuses on three-dimensional geometry processing, while simultaneously providing a first course in traditional differential geometry. Our main goal is to show how fundamental geometric concepts (like curvature) can be understood from complementary computational and mathematical points of view. This dual perspective enriches understanding on both sides, and leads to the development of practical algorithms for working with real-world geometric data. Along the way we will revisit important ideas from calculus and linear algebra, putting a strong emphasis on intuitive, visual understanding that complements the more traditional formal, algebraic treatment. The course provides essential mathematical background as well as a large array of real-world examples and applications. It also provides a short survey of recent developments in digital geometry processing and discrete differential geometry. Topics include: curves and surfaces, curvature, connections and parallel transport, exterior algebra, exterior calculus, Stokes’ theorem, simplicial homology, de Rham cohomology, Helmholtz-Hodge decomposition, conformal mapping, finite element methods, and numerical linear algebra. Applications include: approximation of curvature, curve and surface smoothing, surface parameterization, vector field design, and computation of geodesic distance. 
Robotics

16-715 Advanced Robot Dynamics and Simulation (Fall 2022)
Instructor: Zachary Manchester This course explores the fundamental mathematics behind modeling the physics of robots, as well as state-of-the-art algorithms for robot simulation. We will review classical topics like Lagrangian mechanics and Hamilton’s Principle of Least Action, as well as modern computational methods like discrete mechanics and fast linear-time algorithms for dynamics simulation. A particular focus of the course will be rigorous treatments of 3D rotations and non-smooth contact interactions (impacts and friction) that are so prevalent in robotics applications. We will use numerous case studies to explore these topics, including quadrotors, fixed-wing aircraft, wheeled vehicles, quadrupeds, humanoids, and manipulators. Homework assignments will focus on practical implementation of algorithms, and a course project will encourage students to apply simulation methods to their own research. 

16-745 Optimal Control and Reinforcement Learning (Spring 2022)
Instructor: Zachary Manchester This is a course about how to make robots move through and interact with their environment with speed, efficiency, and robustness. We will survey a broad range of topics from nonlinear dynamics, linear systems theory, classical optimal control, numerical optimization, state estimation, system identification, and reinforcement learning. The goal is to provide students with hands-on experience applying each of these ideas to a variety of robotic systems so that they can use them in their own research. 

Robotics Institute Seminar Series (Ongoing)
Weekly seminar series hosted by Carnegie Mellon University's Robotics Institute. 
Mathematics and Statistics
In general, the math department does not record its lectures, so this portion is pretty dry. However, it is worth having just to include Po-Shen Loh’s lectures. He coaches the USA IMO team and is well-loved by his students.

21-228 Discrete Mathematics (Spring 2021)
Instructor: Po-Shen Loh Combinatorics, which concerns the study of discrete structures such as sets and networks, is as ancient as humanity's ability to count. Yet although in the beginning, combinatorial problems were solved by pure ingenuity, today that alone is not enough. A rich variety of powerful methods have been developed, often drawing inspiration from other fields such as probability, analysis, algorithms, and even algebra and topology. This course provides a general introduction to this beautiful subject. Students will learn both classical methods for clever counting, as well as how to apply more modern methods to analyze recursions and sequences. Students will also learn fundamental techniques in graph theory. Throughout the course, students will encounter novel challenges that blur the lines between combinatorics and other subjects, such as number theory, probability, and geometry, so that they develop the skills to creatively combine seemingly disparate areas of mathematics. This course is structured around challenge. Lecture topics are hand-picked to reflect the rather advanced ability level of the general CMU student, and consequently, much of the curriculum sequence is original. Homework and exam problems are particularly difficult, and require creative problem-solving rather than application of learned techniques. To encourage students to truly develop these skills, collaboration is encouraged on homework, and exams (which are non-collaborative) will be open-notes. 
Note that the videos in the playlist are in reverse chronological order. 
21-738 Extremal Combinatorics (Spring 2022)
Instructor: Po-Shen Loh This is one of the core graduate courses in advanced combinatorial methods, for the Algorithms, Combinatorics, and Optimization program at CMU. As such, it will be a rigorous and challenging introduction to extremal combinatorics, aimed to provide the necessary background for cutting-edge research in this area. Typical problems concern the maximum or minimum values attained by common graph parameters over certain interesting families of combinatorial objects. Extremal results are not only interesting in their own right, but are also useful in applications because sometimes one can already draw valuable conclusions from the combinatorial basis of a problem with deeper mathematical structure. In that case, the most easily accessible graph-theoretical properties are often the values of the graph parameters, such as the number of edges, diameter, size of the largest set of pairwise non-adjacent vertices, etc. It is therefore useful to understand what other properties are already implied once certain graph parameters lie in certain ranges. 
Note that the videos in the playlist are in reverse chronological order. 
36-705 Intermediate Statistics (Fall 2016)
Instructor: Larry Wasserman This course covers the fundamentals of theoretical statistics. Topics include: probability inequalities, point and interval estimation, minimax theory, hypothesis testing, data reduction, convergence concepts, Bayesian inference, nonparametric statistics, bootstrap resampling, VC dimension, prediction and model selection. 