Lectures: Wed, Fri 12:40-2:00, Room 380 Soda
Instructor: Dave Patterson, Professor
    office: 635 Soda; e-mail: firstname.lastname@example.org
    office hours: Wednesday 4-5
Admin. Asst: Judith Whyte, 634 Soda, 643-4014, email@example.com
Servers today cannot scale as fast as the increasing demand, and, surprisingly, even I/O-intensive applications are limited by CPU speed. They are also needlessly expensive: almost half the price of a large Sun Enterprise 10000--64 processors, 64 GB DRAM, 668 disks--is in the microprocessors, boards, and enclosures for the CPU, not including memory or disks and their enclosures. Moreover, servers today are plagued by availability and system administration problems, with annual cost of ownership running at three times the original hardware price.
Suppose instead that microprocessors are placed near the I/O devices and connected via fast serial links and single-chip crossbar switches. Such a system scales communication bandwidth and processing with an increasing number of disks, and yet reduces cost by replacing expensive desktop-oriented microprocessors with low-cost embedded-oriented microprocessors and by leveraging the cabinetry, cooling, and power supplies already needed for I/O devices. It is also easier to support redundancy to increase availability and therefore reduce the cost of system administration.
In the past, I/O devices were slow relative to the CPU, and hence relegated to third-class status in the minds of system designers. Current systems rely on a hierarchy of interconnects and controller firmware: a SCSI controller on the disk is connected via a SCSI bus to a SCSI controller on a PCI chip, which is connected via the PCI bus to a PCI bridge chip, which is connected via the memory bus to a memory controller. Such a hodge-podge of buses and firmware was satisfactory when I/O devices were slow, but Gigabit Ethernet is near and a single disk now transfers at 30 MB/sec from the media, increasing at 40%/year.
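As a rough illustration (not part of the original course materials), compounding the stated 40%/year growth from the 30 MB/sec baseline shows how quickly media transfer rates outgrow a fixed hierarchy of buses:

```python
# Hypothetical sketch: project the disk media transfer rate forward,
# assuming the stated 30 MB/s baseline and 40% annual growth.
def projected_rate(base_mb_s=30.0, growth=0.40, years=5):
    """Compound the annual growth rate over the given number of years."""
    return base_mb_s * (1.0 + growth) ** years

for y in range(6):
    print(f"year {y}: {projected_rate(years=y):6.1f} MB/s")
```

At this rate the media bandwidth of a single disk roughly quintuples in five years, which motivates moving processing next to the disk rather than funneling all data through the memory bus.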
Since power supplies and fans for disks can only support a modest amount of additional power, and because of the size restrictions of a disk, the ideal microprocessor for an "Intelligent DISK" (IDISK) must be power-efficient and integrate memory, fast serial lines, and a disk interface into a single chip. Embedded microprocessors hold promise: they deliver more than half the integer performance of desktop microprocessors, yet their die is 4 to 6 times smaller and they burn 10 to 100 times less power.
Since Moore's Law also applies to crossbar switches, a high-bandwidth communication system can be constructed using fast serial lines over tens of meters of copper wire connected to single-chip switches, allowing the system to scale economically with the number of disks and to reduce the cost of the redundancy needed for availability.
IDISKs have the processing power and communication bandwidth to enable low-cost RAID support, and the monitoring and control to detect and isolate failed components. By providing a conventional interface to a standard front end, IDISKs can appear as just a bunch of disks rather than as hundreds of computers. This combination of availability and hidden complexity should significantly reduce the cost of ownership.
Thus "Central Processing Unit" may soon join terms like core and drum, which reflect the age of the speaker more than the state of the art.
This advanced graduate course re-examines the design of hardware and software based on the traditional separation of memory and processor. A background paper on the topic, "A Case for Intelligent Disks (IDISKs)," is available in Postscript or PDF, from SIGMOD Record, Vol. 27, No. 3, August 1998.
The purpose of this course is to read and discuss important papers on the following topics: technology trends (disks, networks, processors, integrated circuits), applications/customer demands, system administration, historic projects, and database machines.
The course will consist primarily of daily readings with round-table discussions. After discussing papers for several weeks, there will be two assignments. The first will be a vision paper, proposing a bold set of ideas without any evidence to back them up. The second will be a more conventional small project: an experiment in the area of intelligent disks and main memory using the resources here at Berkeley. There are no exams; grades are based on class participation and on the term projects.
Prerequisite: CS 252 or the equivalent