

The Twentieth International Conference on
Parallel Architectures and Compilation Techniques (PACT)
Galveston Island, Texas, USA
October 10-14, 2011


PACT 2011 Keynotes


We have a stellar line-up of keynote speakers for PACT 2011. Confirmed speakers include:


Computing at the Extremes
Stuart Feldman, Google

Tuesday October 11, 2011

Abstract
Computing at the limits of technology calls for numerous engineering decisions and tradeoffs. General-purpose solutions do not work at the extremes. Traditional HPC has been analyzed for decades, resulting in specialized architectures. Life-critical systems, systems for large enterprises, and systems for tiny devices also present their own special requirements.

The area of data-intensive computing is newer, and its computing models are less established. Supporting large numbers (millions) of users doing similar but different computations, who expect access to enormous amounts of information (petabytes, not gigabytes), prompt responses, and global access, calls for different compromises. Different applications present their own requirements and difficulties.

This talk will address some of those needs - different models of storage and data management appropriate for different types of applications, networking demands for parallelism and global access, and the management of large numbers of fallible processors and storage devices. Supporting such computing also calls for different approaches to software methodology, system management, and deployment.

But massive data also opens new ways to approach science and to get remarkable results that delight and surprise users.

Biography
Stuart Feldman is responsible for the health and productivity of Google's engineering offices in the eastern part of the Americas, Asia, and Australia. He also has executive responsibility for a number of Google products.

Before joining Google, he worked at IBM for eleven years. Most recently, he was Vice President for Computer Science in IBM Research, where he drove the long-term and exploratory worldwide science strategy in computer science and related fields, led programs for open collaborative research with universities, and influenced national and global computer science policy.

Prior to that, Feldman served as Vice President for Internet Technology and was responsible for IBM strategies, standards, and policies relating to the future of the Internet, and managed a department that created experimental Internet-based applications. Earlier, he was the founding Director of IBM's Institute for Advanced Commerce, which was dedicated to creating intellectual leadership in e-commerce.

Before joining IBM in mid-1995, he was a computer science researcher at Bell Labs and a research manager at Bellcore (now Telcordia). In addition, he was the creator of Make as well as the architect for a large new line of software products at Bellcore.

Feldman did his academic work in astrophysics and mathematics and earned his AB at Princeton and his PhD at MIT. He was awarded an honorary Doctor of Mathematics by the University of Waterloo in 2010. He is a former President of ACM (the Association for Computing Machinery) and a member of the board of directors of AACSB (the Association to Advance Collegiate Schools of Business). He received the 2003 ACM Software System Award. He is a Fellow of the IEEE, the ACM, and the AAAS, and serves on a number of government advisory committees.


Do Parallel Algorithms and Programs Need to be Parameter Aware?
Leslie G. Valiant, Harvard University

Wednesday October 12, 2011

Abstract
For more than half a century programmers have enjoyed efficiently universal sequential languages and algorithms, which permitted them to write programs independent of machines. We suggest that for parallel or multicore computers this will no longer be generally possible, and that the algorithms that run on them will need to be designed to be aware of the resource parameters of the particular machines on which they are to run. Parameter-free parallel languages or models, and algorithms that are oblivious to some parameters for some reason, will continue to have roles in restricted domains. We suggest, however, that they will always remain inadequate for the full range of tasks that need to be computed.

With parameter-awareness the main promise is that of portable algorithms, those that contain efficient designs for all reasonable ranges of the basic resource parameters and input sizes. Such portable algorithms need to be designed just once, but, once designed, they can be compiled to run efficiently on any machine. In this way the considerable intellectual effort that goes into parallel program design becomes reusable.

To permit such portable algorithms some standard bridging model is needed - a common understanding between hardware and algorithm designers of what the costs of a computation are. We shall suggest the Multi-BSP model as a candidate for this bridging role. We show that for several basic problems, namely matrix multiplication, the fast Fourier transform, and sorting, portable algorithms do exist that are optimal in a defined sense for all combinations of input size and parameter values in this model.
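
To make parameter-awareness concrete, here is a minimal sketch (our illustration, not from the talk, and far simpler than Multi-BSP, which carries a nested set of parameters at every level of the machine hierarchy): a blocked matrix multiply whose tile size is derived from a single machine parameter, the fast-memory capacity in words. The function and parameter names are hypothetical.

    import numpy as np

    def blocked_matmul(A, B, fast_mem_words):
        """Multiply square matrices A and B, choosing the tile size
        from a machine parameter rather than hard-coding it."""
        n = A.shape[0]
        # Three b x b tiles (one each of A, B, C) must fit in fast memory.
        b = max(1, int((fast_mem_words // 3) ** 0.5))
        C = np.zeros((n, n))
        for i in range(0, n, b):
            for j in range(0, n, b):
                for k in range(0, n, b):
                    C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
        return C

The same source code, instantiated with each target machine's parameters, tiles its memory traffic to that machine - the design reusability the abstract argues for, in miniature.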

Biography
Leslie Valiant was educated at King's College, Cambridge; Imperial College, London; and at Warwick University where he received his Ph.D. in computer science in 1974. He is currently T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University, where he has taught since 1982. Before coming to Harvard he had taught at Carnegie Mellon University, Leeds University, and the University of Edinburgh.

His work has ranged over several areas of theoretical computer science, particularly complexity theory, computational learning, and parallel computation. He also has interests in computational neuroscience, evolution and artificial intelligence.

He received the Nevanlinna Prize at the International Congress of Mathematicians in 1986, the Knuth Prize in 1997, the European Association for Theoretical Computer Science (EATCS) Award in 2008, and the 2010 ACM Turing Award. He is a Fellow of the Royal Society (London) and a member of the National Academy of Sciences (USA).


The Exascale Challenge
Shekhar Borkar, Intel Labs

Thursday October 13, 2011

Abstract
Compute performance has increased by orders of magnitude over the last few decades, made possible by continued technology scaling, which increased frequency, provided the integration capacity to realize novel architectures, and reduced energy to keep power dissipation within limits. The technology treadmill will continue, and one would expect to reach Exascale-level performance this decade; however, the same physics that helped in the past will now pose barriers--business as usual will not be an option.

Energy and power will pose a major challenge--built with conventional technology, an Exascale machine would consume in excess of a gigawatt! The memory and communication bandwidth needed to feed such a machine would likewise be prohibitive. Orders-of-magnitude more parallelism, let alone the explosion of parallelism created by energy-saving techniques, would increase unreliability. And the programming system will face the even more severe challenge of harnessing this performance through concurrency.
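
As a back-of-envelope check on the gigawatt figure (our arithmetic, not from the talk), assume a system-level cost of roughly 1 nJ per delivered operation, a plausible number for conventional circa-2011 technology:

    10^18 ops/s x 10^-9 J/op = 10^9 W = 1 GW

Fitting Exascale into a practical power budget of a few tens of megawatts would thus require cutting energy per operation by one to two orders of magnitude.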

This talk will discuss potential solutions across disciplines--circuit design, test, architecture, system design, programming systems, and resiliency--to pave the road towards Exascale performance.

Biography
Shekhar Borkar received his M.Sc. in Physics from the University of Bombay in 1979 and his MSEE from the University of Notre Dame in 1981, then joined Intel Corp., where he worked on the 8051 family of microcontrollers and on Intel's supercomputers. Shekhar is an Intel Fellow, an IEEE Fellow, and Director of Extreme-scale Research at Intel Labs.