U.C. Berkeley CS267 Home Page

Applications of Parallel Computers

Spring 2014

T Th 12:30-2:00, 306 Soda Hall

Instructor:

  • Jim Demmel
  • Offices:
    564 Soda Hall ("Virginia", in ParLab), (510)643-5386
    831 Evans Hall
  • Office Hours: (subject to change) MWF 10-11 (starting Jan 24)
  • (send email)
  • Teaching Assistants:

  • Razvan Carbunescu
  • Office: 5th Floor Soda Hall (ParLab)
  • Office Hours: M 2-3, F 3-4 in 580 Soda Hall (changed Jan 30)
  • (send email)
  • Aditya Devarakonda
  • Office: 5th Floor Soda Hall (ParLab)
  • Office Hours: WF 2-3, in 580 Soda Hall
  • (send email)
  • Administrative Assistants:

  • Tammy Johnson
  • Office: 565 Soda Hall
  • Phone: (510)643-4816
  • (send email)
  • Roxana Infante
  • Office: 563 Soda Hall
  • Phone: (510)643-1455
  • (send email)
  • Syllabus and Motivation

    CS267 was originally designed to teach students how to program parallel computers to efficiently solve challenging problems in science and engineering, where very fast computers are required either to perform complex simulations or to analyze enormous datasets. CS267 is intended to be useful for students from many departments and with different backgrounds, although we will assume reasonable programming skills in a conventional (non-parallel) language, as well as enough mathematical skills to understand the problems and algorithmic solutions presented. CS267 satisfies part of the course requirements for the Designated Emphasis ("graduate minor") in Computational Science and Engineering.

    While this general outline remains, a large change in the computing world began in the mid-2000s: not only are the fastest computers parallel, but nearly all computers are becoming parallel, because the physics of semiconductor manufacturing no longer lets conventional sequential processors get faster year after year, as they did for so long (roughly doubling in speed every 18 months). So any program that needs to run faster will have to become a parallel program. (It is considered very unlikely that compilers will be able to automatically find enough parallelism in most sequential programs to solve this problem.) For background on this trend toward parallelism, click here.

    This is a huge change not just for science and engineering but the entire computing industry, which has depended on selling new computers by running their users' programs faster without the users having to reprogram them. Large research activities to address this issue are underway at many computer companies and universities, including Berkeley's ASPIRE project, and its predecessor the ParLab. A summary of the ParLab's research agenda, accomplishments, and remaining challenges may be found here.

    While the ultimate solutions to the parallel programming problem are far from determined, students in CS267 will get the skills to use some of the best existing parallel programming tools, and be exposed to a number of open research questions.

  • Tentative Detailed Syllabus
  • Grading

    There will be several programming assignments to acquaint students with basic issues in memory locality and parallelism needed for high performance. Most of the grade will be based on a final project (in which students are encouraged to work in small interdisciplinary teams), which could involve parallelizing an interesting application, or developing or evaluating a novel parallel computing tool. Students are expected to have identified a likely project by mid-semester, so that they can begin working on it. We will provide many suggestions of possible projects as the class proceeds.

    Asking Questions

    Outside of lecture, you are welcome to bring your questions to office hours (posted at the top of this page). If you cannot physically attend office hours, you may contact the instructor team via the instructor email. We encourage you to post your questions to the CS267 Piazza page (you need to sign up first). If you send a question to the instructor email, we may answer it on Piazza if we think it might help others in the class. During lecture, remote students can also email their questions to the instructor email, which the teaching assistants will be monitoring; depending on the question, they will either answer by email or ask the instructor to answer during the lecture. You will also submit homework assignments via the instructor email, but please check the assignment-specific submission instructions first.

    Class Projects

    You are welcome to suggest your own class project, but you may also look at the following sites for ideas:

  • the ParLab webpage,
  • the ASPIRE webpage,
  • the BEBOP webpage,
  • the Computational Research Division and NERSC webpages at LBL,
  • class posters from CS267 in Spring 2010
  • class posters and their brief oral presentations from CS267 in Spring 2009.
  • Announcements

  • (May 19) Videos of students presenting previews of posters for their final class projects are available here (as Lecture #27).
  • (May 11) Because the class NERSC allocation ran out (a problem we have now fixed), the due date for the final project will be extended by 48 hours, to midnight Wednesday, May 14.
  • (May 5) Prof. Demmel will have office hours during RRR week Monday from 1-2 (note correction from yesterday!), and on Wednesday and Friday from 10-11.
  • (Apr 22) There are two due dates for the final projects: First, we will have a poster session during regular class time on Tuesday, May 6, of RRR week. I will also ask you to send me one slide summarizing your (team) project by Tuesday morning; I will display each slide from my laptop in 306 Soda for about 1 minute while you (or one team member) stand and summarize your project, and the slides will then be posted on the class web page (to help inspire students in future semesters!). Second, the final projects themselves will be due by the end of Monday, May 12.
  • (Mar 24) For suggestions of how to organize class project proposals, and examples of prior class projects, click here.
  • (Feb 3) We have posted pdf files of the lecture slides, in 4-to-a-page format. Some animations in the slides may not be visible in pdf format.
  • (Jan 30) Razvan's office hours have been changed to M 2-3pm and F 3-4pm.
  • (Jan 28) Live video streaming of the lectures may be seen here.
  • (Jan 28) Archived video of the lectures may be seen here.
  • (Jan 21) For students who want to try some on-line self-paced courses to improve basic programming skills, click here. You can use this material without having to register. In particular, courses like CS 9C (for programming in C) might be useful.
  • (Jan 21) Please complete the following class survey.
  • (Jan 21) Homework Assignment 0 has been posted here, due Jan 31 by midnight.
  • (Jan 21) Fill out the following form to allow us to create a NERSC account for you.
  • (Jan 21) Please read the NERSC Computer Use Policy Form so that you can sign a form saying that you agree to abide by the rules stated there.
  • (Jan 21) This course satisfies part of the course requirements for the Designated Emphasis ("graduate minor") in Computational Science and Engineering.
  • Class Resources and Homework Assignments.

  • This will include, among other things, class handouts, homework assignments, the class roster, information about class accounts, pointers to documentation for machines and software tools we will use, reports and books on supercomputing, pointers to old CS267 class webpages (including old class projects), and pointers to other useful websites.
  • Lecture Notes and Video

  • Live video streaming of the lectures may be seen here (updated Jan 28).
  • Archived video of the lectures may be seen here.
  • To ask questions during live lectures, you can email them to the instructor email, which the teaching assistants will be monitoring; depending on the question, they will either answer by email or ask the instructor to answer during the lecture.
  • The class web page from the 1996 offering has detailed, textbook-style notes available on-line that are still up-to-date in their presentation of some parallel algorithms. The slides posted during this semester will contain a number of more recently invented algorithms as well.

  • Lectures from Spr 2014 will be posted here.
  • Lecture 1, Jan 21, Introduction, in ppt and pdf
  • Lecture 2, Jan 23, Single Processor Machines: Memory Hierarchies and Processor Features, in ppt and pdf
  • Lecture 3, Jan 28, Finish Lecture 2 (updated Jan 28), begin Lecture 3, Introduction to Parallel Machines and Programming Models, in ppt and pdf
  • Lecture 4, Jan 30, Finish Lecture 3 (updated Jan 30), begin Lecture 4, Sources of Parallelism and Locality in Simulation, Part 1, in ppt and pdf
  • Lecture 5, Feb 4, Finish Lecture 4 (updated Feb 3), begin Lecture 5, Sources of Parallelism and Locality in Simulation, Part 2, in ppt and pdf
  • Lecture 6, Feb 6, Shared Memory Programming: Threads and OpenMP, in ppt and pdf; then Tricks with Trees, in ppt and pdf
  • Lecture 7, Feb 11, Finish Lecture 6 (Tricks with Trees), begin Distributed Memory Machines and Programming, in ppt and pdf
  • Lecture 8, Feb 13, Partitioned Global Address Space Programming with Unified Parallel C (UPC), by Kathy Yelick, in pptx and pdf
  • Lecture 9, Feb 18, Debugging and Optimization Tools, by Richard Gerber, in pptx and pdf; and
    Performance Debugging Techniques for HPC Applications, by David Skinner, in pptx and pdf
  • Lecture 10, Feb 20, Cloud Computing and Big Data Processing, by Shivaram Venkataraman, in pptx and pdf
  • Lecture 11, Feb 25, An Introduction to CUDA/OpenCL and Graphics Processors, by Bryan Catanzaro, in pptx and pdf
  • Lecture 12, Feb 27, Dense Linear Algebra, Part 1, in ppt and pdf
  • Lecture 13, Mar 4, Finish Lecture 12, then Dense Linear Algebra, Part 2 in ppt and pdf
  • Lecture 14, Mar 6, Graph Partitioning, in ppt and pdf
  • Lecture 15, Mar 11, Finish Lecture 14 (Graph Partitioning, updated Mar 10), then begin Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication, in ppt and pdf
  • Lecture 16, Mar 13, Finish Lecture 15, Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (updated Mar 13)
  • Lecture 17, Mar 18, Frameworks in Complex Multiphysics HPC Applications, by John Shalf, in pptx and pdf
  • Lecture 18, Mar 20, Architecting Parallel Software with Patterns, by Michael Anderson, in pptx and pdf
  • Lecture 19, Apr 1, Hierarchical Methods for the N-Body Problem, in pptx and pdf
  • Lecture 20, Apr 3, Complete Lecture 19, Hierarchical Methods for the N-Body Problem (updated Apr 3), in pptx and pdf; then Discuss Class Projects, in pptx and pdf
  • Lecture 21, Apr 8, Structured Grids, in ppt and pdf
  • Lecture 22, Apr 10, Fast Fourier Transform (FFT), in ppt and pdf
  • Lecture 23, Apr 15, Parallel Graph Algorithms, by Aydin Buluc, in pptx and pdf
  • Lecture 24, Apr 17, Dynamic Load Balancing, in ppt and pdf
  • Lecture 25, Apr 22, Big Bang, Big Data, Big Iron: High Performance Computing and the Cosmic Microwave Background, by Julian Borrill, in pptx and pdf
  • Lecture 26, Apr 24, Modeling and Predicting Climate Change, by Michael Wehner in ppt and pdf
  • Movie of CAM5 hi-resolution simulations (mov)
  • Movie of fvCAM5.1 Simulated Atmospheric River (mov)
  • Lecture 27, Apr 29, Accelerated Materials Design through High-throughput First-Principles Calculations and Data Mining, by Kristin Persson, in pptx and pdf
  • Lecture 28, May 1, Big Data, Big Iron and the Future of HPC, by Kathy Yelick, in pptx and pdf
  • Sharks and Fish

  • "Sharks and Fish" are a collection of simplified simulation programs that illustrate a number of common parallel programming techniques in various programming languages (some current ones, and some old ones no longer in use).
  • Basic problem description and (partial) code from the 1999 class, written in Matlab, CMMD, CMF, Split-C, Sun Threads, and pSather, are available here.
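  • To give a flavor of what such a program can look like, here is a minimal, hypothetical sketch (not the class's actual Sharks and Fish code) of a "fish only" time step in C, parallelized with OpenMP, the shared-memory model covered in Lecture 6; the force law, constants, and file name are invented for illustration:

        /* sharks_and_fish_sketch.c -- hypothetical illustration only.
         * Each fish moves independently under a toy force law, so the
         * update loop is embarrassingly parallel and a single OpenMP
         * pragma parallelizes it.
         * Build (assumed): cc -std=c99 -fopenmp sharks_and_fish_sketch.c
         */
        #include <stdio.h>
        #include <stdlib.h>

        #define NFISH  10000   /* number of fish (made-up value) */
        #define NSTEPS 100     /* number of time steps (made-up) */
        #define DT     0.01    /* time-step size (made-up)       */

        typedef struct { double x, y, vx, vy; } fish_t;

        int main(void) {
            fish_t *fish = malloc(NFISH * sizeof(fish_t));

            /* Place fish at random positions in the unit square, at rest. */
            for (int i = 0; i < NFISH; i++) {
                fish[i].x  = rand() / (double)RAND_MAX;
                fish[i].y  = rand() / (double)RAND_MAX;
                fish[i].vx = fish[i].vy = 0.0;
            }

            for (int step = 0; step < NSTEPS; step++) {
                /* Each fish's update depends only on its own state, so the
                 * iterations are independent and can run on separate threads. */
                #pragma omp parallel for
                for (int i = 0; i < NFISH; i++) {
                    double fx = -fish[i].x;   /* toy force: pull toward the origin */
                    double fy = -fish[i].y;
                    fish[i].vx += DT * fx;
                    fish[i].vy += DT * fy;
                    fish[i].x  += DT * fish[i].vx;
                    fish[i].y  += DT * fish[i].vy;
                }
            }

            printf("fish[0] ends at (%g, %g)\n", fish[0].x, fish[0].y);
            free(fish);
            return 0;
        }

    The actual Sharks and Fish codes in the languages listed above exercise a wider range of techniques; this sketch shows only the simplest case, where each fish can be updated with no communication between threads.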