Conference Sponsors: Dinner Sponsors, Lunch Sponsors, Media Sponsor

AGENDA

Thursday August 28th, 2008

8:00 am - 8:45 am     ***Registration***
8:45 am - 9:00 am     Welcome (Auditorium)
9:00 am - 9:45 am     Keynote (Auditorium)
9:45 am - 10:00 am    ***Break***
10:00 am - 11:00 am   Auditorium: Bio-X Research Presentations | Lab Sessions: Hands-On Cluster Building using Rocks
11:00 am - 12:00 pm   Auditorium: Engineering Research Presentations | Lab Sessions: Hands-On Cluster Building using Rocks (cont.)
12:00 pm - 1:00 pm    ***Lunch***
1:00 pm - 1:45 pm     Auditorium: The State of Linux Cluster Management Software | Lab Sessions: TBA
1:50 pm - 2:50 pm     Auditorium: Earth Science Research Presentations | Lab Sessions: Rocks: Cluster Management and Maintenance
2:50 pm - 3:00 pm     ***Break***
3:00 pm - 4:00 pm     Auditorium: SLAC Research Presentations | Lab Sessions: Rocks: Xen VMs, Virtual Clusters and Programmatic Partitioning
4:00 pm - 5:00 pm     Auditorium: SoM Research Presentations | Lab Sessions: Rocks: Introduction to Building Your Own Roll
5:00 pm - 6:00 pm     Auditorium: TBA | Lab Sessions: Poster Session
6:30 pm               ***Dinner @ Illusions***

 

Friday August 29th, 2008

Auditorium
8:30 am - 9:15 am     Keynote
9:15 am - 10:15 am    TotalView on HPC Clusters
10:15 am - 10:30 am   ***Break***
10:30 am - 11:30 am   Parallel Computing with MATLAB
11:30 am - 12:45 pm   MPI: Friend or Foe?
12:45 pm - 1:30 pm    ***Lunch***
1:30 pm - 3:00 pm     Stanford HPC Experiences: Bio-X, LBL, Flow Physics, SLAC, DoR

Lab Sessions
8:30 am - 9:30 am     Clustercorp, DataDirect Networks
9:30 am - 10:30 am    Workload Management: Moab, SGE, LSF
10:30 am - 10:45 am   ***Break***
10:45 am - 12:15 pm   Intel Cluster Tools
12:15 pm - 1:00 pm    ***Lunch***
1:00 pm - 3:00 pm     Storage, Servers and Interconnects: Data Direct, Dell, Panasas, Penguin

 

***Auditorium (Thursday)***

8:00 am - 8:45 am

Registration

8:45 am - 9:00 am

Welcome

9:00 am - 9:45 am

Keynote

Petaflop scaling computing with streaming processors on Folding@home
Vijay Pande
Stanford University

For those running grand-challenge class HPC calculations, what one really wants is a single processor with petaflop speed. Of course, we will likely never get that. Instead, we can today get sustained petaflop-class performance from clusters of cutting-edge "stream" processors, such as the Cell processor in PS3s or modern GPUs. While distributed computing on these stream processors is by no means a panacea for petaflop computing, one may be surprised by the degree to which algorithms that were traditionally tightly coupled can be ported to this platform. I will talk about our experience in this area in particular, and about how this could impact and influence HPC for other groups in the future.

9:45 am - 10:00 am

Break

10:00 am - 11:00 am

Bio-X Research Presentations

Predicting protein structures with screensavers and video games
Rhiju Das
Stanford University, Depts. of Biochemistry and Physics,
formerly University of Washington, Dept. of Biochemistry


Co-authors: Adrien Treuille (Carnegie Mellon University); Rob Vernon, James Thompson, Seth Cooper, David Salesin, David Baker, Zoran Popovic (all at University of Washington).

The fundamental molecules of life -- proteins, RNA, and DNA -- are polymers that fold up into complex and unique three dimensional structures. Predicting these functional folds from sequence alone at atomic resolution has been a long-standing challenge in theoretical biophysics. Recent increases in computational power, largely enabled by distributing computation via screensavers, have finally enabled such high resolution modeling in the biennial CASP double-blind prediction trials. As a next step, harnessing human intuition and competition through an interactive videogame offers an unusual new paradigm for biomolecule modeling and is seeing its first blind tests this summer.

Normal mode analysis of the glycine receptor
Professor James R. Trudell
Stanford University

Co-authors: Ed Bertaccini and Erik Lindahl

Computational Challenges in Patient-Specific Blood Flow Modeling
Alberto Figueroa

Charles Taylor Lab, Bioengineering
Stanford University

We are on the verge of a new era in medicine whereby the medical and scientific communities can utilize simulation-based methods, initialized with patient-specific anatomic and physiologic data, to design treatments for individuals, predict the performance of medical devices or understand the pathogenesis of cardiovascular diseases. While significant progress in patient-specific blood flow modeling has been made over the last decade, challenges remain especially related to the availability of input data and the fidelity and usability of the methods. In this talk we provide an overview of some of the computational challenges in the field, namely image segmentation, fluid-solid interaction, boundary condition specification, and mesh generation.

11:00 am - 12:00 pm

Engineering Research Presentations

Formula 1 Wheel Aerodynamics
John Axerio
Stanford University
Flow Physics and Computational Engineering

The flow field behind a Formula 1 tire in a closed wind tunnel was examined experimentally here at Stanford in order to validate the accuracy of a variety of numerical simulation techniques. The results of steady RANS, unsteady RANS (URANS), and large eddy simulations (LES) are compared with the available PIV data. Some of the limitations of RANS compared to LES are shown by examining the flow in the recirculation regions immediately behind the tire. The LES simulations identify a range of timescales, including the unsteady oscillation of the rear vortices and the shear layer instability near the front of the contact patch. Furthermore, the sensitivity of the flow solution to geometrical uncertainties, turbulence models, and turbulent inflow conditions will also be discussed.

Towards Predictive Simulation in Nuclear Energy Applications
Curtis Hamman
Stanford University
Center for Turbulence Research

Sustained petaflop computing is on the horizon. To successfully harness this computing power, new algorithmic adaptations in both software and architecture design are needed to enable the next generation of predictive multiphysics simulation platforms. Examples of such paradigms applied to nuclear energy applications are reviewed to demonstrate how the coupling between computer architecture and software algorithms impacts the development of predictive simulation tools for scientific discovery.

Understanding Large Eddy Simulations with good parallel I/O
Dr. Frank Ham
Stanford University
Center for Turbulence Research

Stanford’s Center for Integrated Turbulence Simulations has been developing unstructured Large Eddy Simulation (LES) software for use in practical engineering configurations involving complex geometry and multiphysics. These simulations provide high-fidelity realizations of turbulent flows, but are often large and expensive. To facilitate interrogation of the time-dependent results, regular “snapshots” of the instantaneous data can be saved during the simulation, and then post-processed on a reduced number of nodes to look at flow structure, compute flow statistics, even compute noise. On clusters with fast parallel file systems, this “snapshot” capability minimally impacts the cost of the simulation, and can dramatically increase our understanding of the flow – particularly when we don’t know what we’re looking for in the first place.
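
The talk does not include code, but the "snapshot" capability it describes rests on collective parallel I/O. The sketch below is a minimal MPI-IO illustration in C under assumed conditions (hypothetical file name, per-rank block size, and contiguous-by-rank layout); it is not the actual I/O layer used by the CTR/CITS codes.

```c
/* Minimal sketch of a collective "snapshot" write with MPI-IO.
 * Assumptions (not from the talk): each rank owns local_n doubles of a
 * global field and writes them contiguously by rank into one shared file. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int local_n = 1 << 20;              /* hypothetical per-rank count */
    double *field = malloc(local_n * sizeof(double));
    for (int i = 0; i < local_n; i++)
        field[i] = rank + 1e-6 * i;           /* stand-in for solver data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "snapshot_000100.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its block at a rank-dependent offset; the collective
     * call lets the parallel file system aggregate the requests. */
    MPI_Offset offset = (MPI_Offset)rank * local_n * sizeof(double);
    MPI_File_write_at_all(fh, offset, field, local_n, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(field);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched under mpirun, every rank writes its block concurrently; on a cluster with a fast parallel file system the requests are aggregated rather than funneled through a single process, which is what keeps periodic snapshotting cheap relative to the simulation itself.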

12:00 pm - 1:00 pm

Lunch

1:00 pm - 1:45 pm

From Beowulf to Today: The State of Linux Cluster Management Software
Donald Becker
Founder and CTO, Scyld Software

Donald Becker was a co-founder of the original Beowulf project, which is the cornerstone for commodity-based high-performance cluster computing. Don's work in parallel and distributed computing began in 1983 at MIT's Real Time Systems group. He is known throughout the international community of operating system developers for his contributions to networking software and as the driving force behind beowulf.org.

Clusters of commodity compute servers have become the standard way to build scalable high-performance compute environments. Unfortunately, the software to operate and maintain clusters has often been secondary to the hardware details.

This talk will describe the state of the art in cluster software and the opportunities to further improve the system. We will discuss how cluster software can advance from a collection of individually installed operating systems controlled by ad hoc tools to elegant and efficient single-system-image clusters that incrementally scale and tolerate failures.

The ease of use of a single system is achieved through automatic provisioning: automatic detection of node hardware, a reliable network boot system, and very lightweight stateless provisioning made possible by dynamic caching. Run-time subsystems complete the illusion: a single cluster-wide process space allows creating, monitoring, and controlling processes across the cluster with unchanged semantics, along with cluster-specific name services. Other subsystems, such as node status and load reporting, schedulers, and integrated libraries, reduce the complexity of building and using cluster applications.

1:50 pm - 2:50 pm

Earth Science Research Presentations

Geoscience computing for today and tomorrow
Robert G. Clapp
Stanford University, CEES

Research within the School of Earth Science varies widely in the type of problems being addressed, computational needs, and researchers' computational proficiency. CEES attempts to meet these needs by providing a diverse set of hardware options and support. I will briefly summarize some of the research topics being addressed, the computational challenges associated with them, and the options for handling them that CEES provides. Then I will discuss some of the technologies we are exploring to meet upcoming geoscience computing challenges.

A computational mathematician goes subsurface: large scale computing for modeling of reservoir fluid flows
Professor Margot Gerritsen
Department of Energy Resources Engineering
Stanford University

As easy-to-produce oil and gas reservoirs decline, interest in Enhanced Oil Recovery (EOR) processes grows. EOR processes pose many challenges to the computational scientist or engineer: they are governed by strongly nonlinear systems of equations, are multi-scale, and must be solved on very dense computational grids for sufficient accuracy. Needless to say, simulation of EOR methods requires the use of high-performance computers. In this talk, we will pay special attention to gas injection processes, which are attractive not only for enhanced recovery, but also for carbon sequestration.

2:50 pm - 3:00 pm

Break

3:00 pm - 4:00 pm

SLAC Research Presentations

Extremely Large Database Challenges within the Large Synoptic Survey Telescope Project
Kian-Tat Lim
Information Systems Specialist
SCCS/Stanford Linear Accelerator Center

K-T is helping to design and build extremely large databases and data management systems for the Large Synoptic Survey Telescope and Linac Coherent Light Source projects. Previously, he spent more than seven years building extremely large databases, data management systems, and data mining applications at Yahoo!, Inc.

SLAC is responsible for delivering the multi-petabyte data access system for the Large Synoptic Survey Telescope that will manage images and star and galaxy catalogs, among other raw and derived results. As part of our work on this project, we have discovered great commonalities between extremely large database users in science and industry. We have joined with a selection of such users and with leading database researchers to define SciDB, a new open source data management system for data-intensive scientific analytics. This talk will describe the common features needed by peta-scale data management systems and how SciDB will meet these requirements.

First glimpse through the GLAST (Gamma-ray Large Area Space Telescope)
Tom Glanzman
SLAC Experimental Physicist

Although it has been on orbit for less than two months and is still in the process of a two-month checkout, NASA's Gamma-ray Large Area Space Telescope (GLAST) has already detected 12 powerful gamma-ray bursts, an encouraging harbinger of good things to come for this mission. This talk will give early insight into some of those good things.

4:00 pm - 5:00 pm

SoM Research Presentations

Display Wall Technology for Ultra High-Resolution Imaging
Sean (Shyh-Yuan) Kung
SUMMIT

SUMMIT, the Center for Interactive and Simulation Learning (CISL), and EdTech prototyped a cluster-based tiled display wall for viewing ultra high-resolution images and videos.

The prototype display wall has a total resolution of 7680 x 4800, built from a 3 x 3 array of 30-inch monitors, each with a resolution of 2560 x 1600. It can display and manipulate gigapixel images with little latency. Pilot studies show that the display wall is a very effective tool for teaching and learning in medical education curricula such as histopathology.

This talk will give an overview of technologies for building cluster-based high-resolution display walls and a brief overview of the educational applications.

Shining a light on photon spectroscopy with computer simulations
Brian Moritz
Stanford Institute for Materials and Energy Science (SIMES)

One of the key unsolved problems in physics is the nature of electron dynamics in correlated materials.  Photon spectroscopies provide important clues to understanding the complex behavior in these systems where the interplay of spin, charge, orbital and lattice degrees of freedom gives rise to various phases that can be exploited for applications in electronic energy transmission, storage, and generation, as well as sensors and memory devices. High performance computing has enabled detailed numerical studies of the features revealed from various photon spectroscopies, often with quantitative specificity.  This presentation will focus on our recent simulations of spectroscopies, including angle-resolved photoemission and resonant inelastic X-ray scattering, for model systems using Hamiltonian and Green's function based techniques, in both the frequency and time domain, and their close connection to experimental results.  Particular emphasis will be placed on the computational aspects of the research.

5:00 pm - 6:00 pm

TBA

6:30 pm

Dinner @ Illusions

 

***Lab Sessions (Thursday)***

8:00 am - 8:45 am

Registration

8:30 am - 9:00 am

TBA

9:00 am - 9:45 am

TBA

9:45 am - 10:00 am

Break

10:00 am - 11:00 am

Hands-On Cluster Building using Rocks
Tim McIntire
Clustercorp

Interested in building a Linux cluster? This hands-on demonstration covers building multi-node compute clusters from a number of components: frontends, compute nodes, high-speed interconnects, and parallel storage systems. We'll start off with bare metal and have a fully operational compute cluster built by the end of the session.

11:00 am - 12:00 pm

Hands-On Cluster Building using Rocks (cont)

12:00 pm - 1:00 pm

Lunch

1:00 pm - 1:45 pm

TBA

1:50 pm - 2:50 pm

Rocks: Cluster Management and Maintenance
Mason Katz
UCSD

While Rocks clusters are turnkey, users always need to manage and customize their clusters. An introduction to the Rocks configuration graph and how to add new packages and configuration will be covered. Other common customization scenarios will also be described.

2:50 pm - 3:00 pm

Break

3:00 pm - 4:00 pm

Rocks: Xen VMs, Virtual Clusters and Programmatic Partitioning
Mason Katz
UCSD

The internals of Xen support in Rocks will be presented and dissected in detail. New for Rocks 5.0 is the ability to fully program how a node partitions its local hard drives so that any partitioning policy can be implemented. Methods, techniques and examples of partitioning schemes will be presented.

4:00 pm - 5:00 pm

Rocks: Introduction to Building Your Own Roll
Mason Katz
UCSD

Rolls are the way to customize Rocks. The implementation of Rolls is defined and the levels of customization are presented. A detailed example of building a straightforward Roll will be worked out during this session.

5:00 pm - 6:00 pm

Poster Session

$100 Prize for Best Poster, compliments of Platform Computing

Posters are judged by meeting participants.

Please fill in your Poster Title on the Registration Form.

Poster board dimensions are 3 ft x 4 ft. We recommend landscape orientation, but portrait will also work. Registered posters can be printed for FREE if submitted to Tanya Raschke (raschke@stanford.edu) by midnight on Sunday, August 24. Poster printing tips and templates
can be found at: http://clark-it.stanford.edu/poster.htm

6:30 pm

Dinner @ Illusions

 

***Auditorium (Friday)***

8:30 am - 9:15 am

Keynote

The Use of Computational Methods in the NASA Fundamental Aeronautics Program:
Bridging Aeronautics & Space

Juan Alonso
Director of the Fundamental Aeronautics Program Office, NASA and Professor, Aeronautics & Astronautics, Stanford University

9:15 am - 10:15 am

TotalView on HPC Clusters

Ed Hinkel, Sales Engineer, TotalView Technologies, and Johnny Chang, NASA Ames Research Center/CSC

This session provides an introduction to debugging parallel applications on HPC clusters with the TotalView Debugger, covering parallel process control for MPI applications as well as basic operations and techniques. Discussions include debugging on large cluster systems and the use of important concepts such as subset attach. There will also be a brief case study presented by a TotalView user, Johnny Chang from the NASA Ames Research Center/CSC.

About the presenters:
Ed Hinkel is a Sales Engineer at TotalView Technologies. His background includes more than 20 years of software development, spanning the evolution of computing technology leading to the multi-threaded, parallel, and distributed multi-core applications of today. His career includes technical and management positions at Dun & Bradstreet Systems Research and Development, Electronic Data Systems (EDS), and GTech Inc., the leading provider of lottery technologies. Against the odds, Ed has now found his way back to a role that helps provide real solutions for the challenges facing today's software developers. Ed holds a Bachelor's degree in Mathematics from Indiana Institute of Technology.

Johnny Chang - NASA Ames Research Center/CSC

Johnny is a member of the Application Performance and Productivity group at the NASA Advanced Supercomputing (NAS) Division located in Moffett Field, California. He is part of a sub-group that provides consulting service to the 1000+ users of the Columbia, RTJones, and Pleiades supercomputers. His work includes code porting, debugging, tuning and optimization, and code scaling.

Johnny received his PhD in Chemical Physics from the University of Texas at Austin in 1985. He has published papers in multi-photon dynamics, quantum scattering, path-integral methods, quantum functional sensitivity analysis, and, most recently, weather modeling.

10:15 am - 10:30 am

Break

10:30 am - 11:30 am

Parallel Computing with MATLAB
Sarah Wait Zaranek, Ph.D.
Application Engineer, Mathworks

This session will show you how to perform parallel computing in MATLAB using either your desktop machine or a computer cluster.  You will learn how to utilize the full capabilities of your multicore machine through the new parallelism capabilities of MATLAB 7.6.0 (R2008a) and Parallel Computing Toolbox 3.3.  We will also introduce the use of our parallel computing products on a computer cluster to speed up your algorithms and handle larger data sets.

  • Applications of parallel computing
  • Interactive task parallel applications
  • Interactive data parallel applications
  • Interactive applications to batch applications
  • Tips/Tricks on parallel coding in MATLAB

We will finish the session showing a real-world example running on a cluster at Stanford.

About the presenter:
Sarah Wait Zaranek is an Application Engineer (MATLAB geek) at The MathWorks. She comes from UC Berkeley, where she completed a post-doc focused mainly on understanding the interior dynamics of terrestrial planets. Her research involved both computational fluid dynamics and laboratory work. Her work at The MathWorks is currently focused on distributed computing and core MATLAB. Sarah graduated with a PhD in Geology and a Master's in Applied Mathematics from Brown University. She has been using MATLAB since her early undergraduate days and enjoys applying her experiences to help people use MATLAB to forward their science and research.


11:30 am - 12:45 pm

MPI: Friend or Foe? (slides)
Dr. Jeff Squyres
MPI Architect at Cisco Systems, Inc.

Everyone in high performance computing (HPC) has heard of the Message Passing Interface (MPI). But do you really know what it *is*? For what kind of problems is MPI well-suited, and perhaps more importantly, for what kind of problems is MPI *not* well-suited? How can you tell which category your problem fits into? This lecture will cover the basics of what MPI is and what it can (and cannot) do, and introduce the novice HPC/MPI programmer to fundamental concepts and basic MPI API usage.
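
For readers who have never seen MPI, a minimal example of the message-passing style the lecture introduces is sketched below. This is generic illustrative C, not material from the slides.

```c
/* Minimal MPI example: rank 0 sends one integer, rank 1 receives it.
 * Generic illustration only, not taken from the lecture. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* dest 1, tag 0 */
        printf("rank 0 sent %d\n", payload);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                            /* src 0, tag 0 */
        printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

Build with mpicc and run with, for example, mpirun -np 2 ./a.out; the two processes exchange a single integer. Real applications layer collectives, communicators, and non-blocking communication on top of this same send/receive pattern.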

12:45 pm - 1:30 pm

Lunch

1:30 pm - 3:00 pm

Stanford HPC Experiences: Bio-X, LBL, Flow Physics, SLAC, DoR

Environmental Monitoring in the Data Center using Synapsense
Gregory Bell
Chief Technology Architect, IT Division
Lawrence Berkeley National Laboratory

This talk will focus on the use of a wireless environmental monitoring system in use at Lawrence Berkeley National Laboratory to *tune* the data center and to visualize temperature, pressure and power consumption.

Building Faster, Easier HPC Clusters with Rocks
Steve Jones
Stanford University
High Performance Computing Center
Flow Physics and Computational Engineering

The Stanford University High-Performance Computing (HPC) Center supports more than 200 researchers. In only 11 days the HPC Center successfully installed a 240 node, 1920 Intel® Xeon® core cluster from scratch by leveraging the certification methodology from the Intel® Cluster Ready framework. This solution integrated hardware and software elements from Intel, Clustercorp, Dell, Panasas, American Power Conversion and Cisco. In this talk, we discuss the technical details that made this effort possible.

Stanford's Proposed Scientific Research Computing Facility
Phil Reese
Research Computing Strategist, IT Services and the Office of the Dean of Research

With the sea change in academic research toward more in-depth use of computing for simulation, calculation, and display comes the need for more server hardware. The current campus model for server housing leaves a lot to be desired. A plan is moving forward to develop a joint computer-hosting and shared-cycle facility with SLAC. There is a concept drawing for the facility, and a general site at SLAC has been identified. This talk will discuss the current state of the project, including some novel cooling technologies, and the steps needed to keep the effort moving toward reality.

***Lab Sessions (Friday)***

8:30 am - 9:30 am

Clustercorp, DataDirect Networks

9:30 am - 10:30 am

Workload Management: Moab, SGE, LSF

10:30 am - 10:45 am

Break

10:45 am - 12:15 pm

Intel Cluster Tools

12:15 pm - 1:00 pm

Lunch

1:00 pm - 3:00 pm

Storage, Servers and Interconnects: Data Direct, Dell, Panasas, Penguin

 

 

 
