
Oklahoma Supercomputing Symposium 2010






KEYNOTE SPEAKER

Horst Simon

Deputy Laboratory Director
Lawrence Berkeley National Laboratory

Plenary Topic: "Exascale Challenges for the Computational Science Community"

Slides:   PowerPoint2003   PowerPoint2007   PDF

Plenary Talk Abstract

"The development of an exascale computing capability with machines capable of executing O(10^18) operations per second in the 2018 time frame will be characterized by significant and dramatic changes in computing hardware architecture from current (2010) petascale high-performance computers. From the perspective of computational science, this will be at least as disruptive as the transition from vector supercomputing to parallel supercomputing that occurred in the 1990s." This was one of the findings of a recent workshop on crosscutting technologies for exascale computing. The impact of these architectural changes on future applications development for the computational sciences community can now be anticipated in very general terms. In this talk, I will summarize what I believe will be the major changes in the next decade for applications development.

Biography

Horst Simon is an internationally recognized expert in computer science and applied mathematics and the Deputy Director of Lawrence Berkeley National Laboratory (Berkeley Lab). Simon joined Berkeley Lab in early 1996 as director of the newly formed National Energy Research Scientific Computing Center (NERSC), and was one of the key architects in establishing NERSC at its new location in Berkeley. Under his leadership, NERSC enabled important discoveries for research in fields ranging from global climate modeling to astrophysics. Simon was also the founding director of Berkeley Lab's Computational Research Division, which conducts applied research and development in computer science, computational science, and applied mathematics.

In his prior role as Associate Lab Director for Computing Sciences, Simon helped to establish Berkeley Lab as a world leader in providing supercomputing resources to support research across a wide spectrum of scientific disciplines. He is also an adjunct professor in the College of Engineering at the University of California, Berkeley. In that role he worked to bring the Lab and the campus closer together, developing a designated graduate emphasis in computational science and engineering. In addition, he has worked with project managers from the Department of Energy, the National Institutes of Health, the Department of Defense and other agencies, helping researchers define their project requirements and solve technical challenges.

Simon's research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. His algorithm research efforts were honored with the 1988 and the 2009 Gordon Bell Prizes for parallel processing research. He was also a member of the NASA team that developed the NAS Parallel Benchmarks, a widely used standard for evaluating the performance of massively parallel systems. He is co-editor of the biannual TOP500 list that tracks the most powerful supercomputers worldwide, as well as related architecture and technology trends.

He holds an undergraduate degree in mathematics from the Technische Universität Berlin in Germany, and a Ph.D. in Mathematics from the University of California at Berkeley.


PLENARY SPEAKERS

Henry Neeman

Director
OU Supercomputing Center for Education & Research (OSCER)
Information Technology
University of Oklahoma

Topic: "OSCER State of the Center Address"

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

The OU Supercomputing Center for Education & Research (OSCER) celebrated its 9th anniversary on August 31, 2010. In this report, we examine what OSCER is, what OSCER does, and where OSCER is going.

Biography

Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research and an adjunct assistant professor in the School of Computer Science at the University of Oklahoma. He received his BS in computer science and his BA in statistics with a minor in mathematics from the State University of New York at Buffalo in 1987, his MS in CS from the University of Illinois at Urbana-Champaign in 1990 and his PhD in CS from UIUC in 1996. Prior to coming to OU, Dr. Neeman was a postdoctoral research associate at the National Center for Supercomputing Applications at UIUC, and before that served as a graduate research assistant both at NCSA and at the Center for Supercomputing Research & Development.

In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. He serves as an ad hoc advisor to student researchers in many of these fields.

Dr. Neeman's research interests include high performance computing, scientific computing, parallel and distributed computing and computer science education.

Jennifer M. Schopf

Program Officer
National Science Foundation EPSCoR
Topic: "NSF EPSCoR and the Role of Cyberinfrastructure"
Slides:     PowerPoint2003     PowerPoint2007     PDF
Video

Talk Abstract

In 1979, the National Science Foundation (NSF) set up the Experimental Program to Stimulate Competitive Research (EPSCoR), to avoid undue concentration of research funding in the US. Several years ago, this office was reinvigorated through the EPSCoR 2020 vision, to better focus its investment in science and engineering research infrastructure to support a wide range of research programs that fuel innovation and competitiveness. A key aspect of this approach has been the acknowledgement and support of cyberinfrastructure, broadly construed, in all EPSCoR programs.

This talk will discuss how cyberinfrastructure is an essential component to support today's collaborative research. After a brief overview of the current NSF CyberInfrastructure for 21st Century Science (CF21) vision, we will examine how CI is playing a role in current EPSCoR programs and projects, and what role it may play in the future.

Biography

Dr. Jennifer M. Schopf is a program officer at the National Science Foundation (NSF), originally in the Office of CyberInfrastructure — where she specialized in middleware, networking, and campus bridging programs, with an emphasis on sustainable approaches to pragmatic software infrastructure — and currently in EPSCoR. She also holds an appointment at the Woods Hole Oceanographic Institution (WHOI), where she is helping to develop a vision and implementation strategy to strengthen WHOI's participation in cyberinfrastructure and ocean informatics programs. Prior to this, she was a Scientist at the Distributed Systems Lab at Argonne National Laboratory for 7 years and spent 3½ years as a researcher at the National eScience Centre in Edinburgh, UK. She received MS and PhD degrees from the University of California, San Diego in Computer Science and Engineering and a BA from Vassar College. Currently, her research interests include monitoring, performance prediction, and anomaly detection in distributed system environments. She has co-edited a book, co-authored over 50 refereed papers, and given over 100 invited talks.

Jan E. Odegard

Executive Director
Ken Kennedy Institute for Information Technology
Rice University
Topic: "University Research Cyberinfrastructure: Dreams, Needs and Realities"
Slides:     available after the Symposium

Talk Abstract: coming soon

Biography

Jan E. Odegard joined Rice University in 2002 as the Executive Director of the Ken Kennedy Institute for Information Technology (K2I, formerly the Computer and Information Technology Institute). The primary mission of K2I is to help bring together scholars with complementary expertise to work on complex problems with broad impact, covering both fundamental and applied research with great potential for transformative impact on society.

Odegard's research background is in signal and image processing, wavelet theory, filter banks and time-frequency analysis, with applications to geophysics, multimedia and telecommunications. His current interests span research, education and training in High Performance Computing, Information Technology, Cyberinfrastructure and Computational Science and Engineering.

In his role as Executive Director, Odegard helps support 135 faculty members associated with academic departments from across Rice. His focus is on community building and acting as a catalyst for creating cross-disciplinary strategic research programs, managing Rice's high performance computing research infrastructure, and developing university/university and university/industry research partnerships.

Dan Stanzione

Deputy Director
Texas Advanced Computing Center
University of Texas

Topic: "The iPlant Collaborative: Cyberinfrastructure to Feed the World"

Slides: available after the Symposium

Talk Abstract: coming soon

Biography

Dr. Stanzione is the Deputy Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. He is the Co-Director of "The iPlant Collaborative: A Cyberinfrastructure-Centered Community for a New Plant Biology," an ambitious endeavor to build a multidisciplinary community of scientists, teachers and students who will develop cyberinfrastructure and apply computational approaches to make significant advances in plant science. He is also a Co-PI for TACC's Ranger supercomputer, the first of the "Path to Petascale" systems supported by the National Science Foundation (NSF). Ranger was deployed in February 2008 (at the time, the largest open science supercomputer in the world). Prior to joining TACC, Dr. Stanzione was the founding director of the Ira A. Fulton High Performance Computing Institute (HPCI) at Arizona State University (ASU). Before ASU, he served as an AAAS Science Policy Fellow in the NSF's Division of Graduate Education. Dr. Stanzione began his career at Clemson University, his alma mater, where he directed the supercomputing laboratory and served as an assistant research professor of electrical and computer engineering. Dr. Stanzione's research focuses on parallel programming, scientific computing, bioinformatics, and system software for large scale systems.

Stephen Wheat

Senior Director, High Performance Computing Worldwide Business Operations
Intel

Topic: "Cycles for Competitiveness: A View of the Future HPC Landscape"

Slides:   PDF

Talk Abstract

The High Performance Computing (HPC) market segment has been trending toward a $10B/yr scale, with a historical Compound Annual Growth Rate (CAGR) greater than that of Enterprise computing in general. Nevertheless, it continues to be a space perceived by many as a niche. With analyst data now supporting HPC as having a 20% (or more) share of enterprise, more strategic attention is being given to the segment. Ironically, it is a segment showing signs of post-maturity: in particular, the vibrancy of the ecosystem is on a downward trend, margins are thin for the Original Equipment Manufacturers (OEMs), and procurement complexities are described as onerous. Yet new data derived over the past several years shows that there is a substantial unserved/underserved component to the market. This is the space referred to as the "Missing Middle": those who would use HPC if they could. It is possible that a much-needed revitalization of the HPC segment is at hand.

I will first present the definition of and data around the Missing Middle, covering the scope, the barriers, and the "so what." Then I will describe the activities of the Alliance for High Performance Digital Manufacturing, a multi-party effort of entities that have come together with the express intention of resolving the Missing Middle. I will also touch on what is happening around the world with respect to national competitiveness and computational methods for economic growth. I will finish the talk with a brief review of what we call the Digital Supply Chain as a means of resolving the Missing Middle. Throughout the presentation, I will review some of the latest technologies at Intel.

Biography

Dr. Stephen Wheat is the Senior Director for the HPC Worldwide Business Operations directorate within Intel's HPC Business Unit. He is responsible for driving the development of Intel's HPC strategy and the pursuit of that strategy through platform architecture, ecosystem development and collaborations. While in this role, Dr. Wheat has influenced the deployment of several Top 10 systems and many more Top 500 HPC systems.

Dr. Wheat has a wide breadth of experience that gives him a unique perspective in understanding large scale HPC deployments. He was the Advanced Development manager for the Storage Components Division, the manager of the RAID Products Development group, the manager of the Workstation Products Group software and validation groups, and manager of the Supercomputing Systems Division (SSD) operating systems software group. At SSD, he was a Product Line Architect and was the systems software architect for the ASCI Red system.

Before joining Intel in 1995, Dr. Wheat worked at Sandia National Laboratories, performing leading research in distributed systems software, where he created and led the SUNMOS and PUMA/Cougar programs. Dr. Wheat is a 1994 Gordon Bell Prize winner and has been awarded Intel's prestigious Achievement Award. He has a patent in Dynamic Load Balancing in HPC systems.

Dr. Wheat holds a Ph.D. in Computer Science and has several publications on the subjects of load balancing, inter-process communication, and parallel I/O in large-scale HPC systems. Outside of Intel, he is a commercial multi-engine pilot and an FAA-certified multi-engine, instrument flight instructor.


BREAKOUT SPEAKERS

Amy Apon

Director
Arkansas High Performance Computing Center
Professor
Department of Computer Science & Computer Engineering
University of Arkansas
Talk Topic: "Investment in High Performance Computing as a Predictor of Research Competitiveness in U.S. Academic Institutions"
Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

Cyberinfrastructure is the information technology that, by definition, enables scientific inquiry. High Performance Computing (HPC) is one of the key cyberinfrastructure components. This talk traces the development of HPC in U.S. academic institutions, and describes how the Top 500 list can be used to measure the investment that institutions have made in HPC. In any given Top 500 list, roughly 25 U.S. academic institutions have an entry, and this number has been dropping in recent years. With support of a grant from the National Science Foundation to the University of Arkansas and RENCI (University of North Carolina at Chapel Hill), we have studied how investments in HPC can be shown to be leading indicators of the research productivity of an institution. Results show that investment in HPC can increase the average NSF funding to an institution and the average publication production by a statistically significant amount. We also discuss the various costs of investing in HPC, and suggest factors that help an HPC investment achieve its full impact.

Biography

Dr. Amy Apon is Director of the Arkansas High Performance Computing Center and Professor in the Department of Computer Science & Computer Engineering at the University of Arkansas. She holds a Ph.D. from Vanderbilt University in performance analysis of parallel and distributed systems. She is the Principal Investigator of the four National Science Foundation grants that have acquired the shared supercomputer resources at the University of Arkansas, and has directed high performance computing activities there since 2004. The Arkansas High Performance Computing Center was founded in 2008 under Dr. Apon's direction with funding from Governor Beebe of Arkansas. She leads cyberinfrastructure efforts for the state of Arkansas, and is the current Vice Chair of the Coalition for Academic Scientific Computation, an organization of more than 60 of the nation's most forward-thinking universities and computing centers. Dr. Apon's current research focuses on high performance cluster computing, including scheduling in cluster systems, management of large-scale data-intensive applications, and accelerator architectures. She has more than 70 peer-reviewed publications, and is PI or co-PI on more than $10M of external research funding.

Dana Brunson

Senior Systems Engineer
High Performance Computing Center
Oklahoma State University

Topic: "Introduction to FREE National Resources for Scientific Computing"
(with Jeff Pummill)

Slides:   PDF

Abstract:

As the need for computational resources in scientific research continues to outgrow local campus infrastructure, it becomes imperative to seek out alternatives that can meet that critical need. Unknown to many, a significant number of such resources are available at the national level to academic researchers at no charge! This session will cover these resources, their requirements for access, and what they offer researchers in terms of both raw hardware and advanced user support. While TeraGrid will be the primary focus, a number of other offerings will be mentioned during the session.

Biography

Dana Brunson oversees the High Performance Computing Center and is an adjunct assistant professor in the Computer Science Department at Oklahoma State University (OSU). Before transitioning to High Performance Computing in the fall of 2007, she taught mathematics and served as systems administrator for the OSU Mathematics Department. She earned her Ph.D. in Numerical Analysis at the University of Texas at Austin in 2005 and her M.S. and B.S. in Mathematics from OSU. In addition, Dana is serving on the ad hoc committee for OSU's new Bioinformatics Graduate Certificate program and is the TeraGrid Campus Champion for OSU.

Clay B. Carley III

Assistant Professor
Department of Computer Science
East Central University

Topic: "Using Remote HPC Resources to Teach Local Courses"
(with Larry Sells and Charlie Zhao)

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract:

At our institutions, parallel computing is a topic of growing interest, but providing local High Performance Computing (HPC) resources is impractical. Under the Oklahoma Cyberinfrastructure Initiative, we have been able to teach this topic via access to resources at the OU Supercomputing Center for Education & Research (OSCER). This affords both rich parallel computing experiences and an unprecedented opportunity to use a production system of substantial scale, so that we can teach not only the foundations of parallel computing but also the practicalities of production HPC. In this presentation, we discuss both our individual institutions' unique experiences and the commonalities that our courses have shared.

Biography

Clay Carley obtained a B.A. in Mathematics from Sonoma State University in 1970, and an M.S. in Computer Science from Rensselaer Polytechnic Institute at Hartford in 1997.

Born and raised in northern California, Mr. Carley is a U.S. Navy veteran, 1970 - 1975. He has worked as a hardware technician, software support specialist, and as a software developer in the insurance and banking industries. He has been teaching full time at East Central University since 1999.

Greg Clifford

Industrial Segment Manager
Product Division
Cray Inc.
Topic: "Breakthrough Science via Extreme Scalability"
Slides:     PDF

Talk Abstract

In the past decade, the HPC field has turned to cluster architectures to achieve performance. A cluster can deliver both high peak performance and excellent price performance. However, high peak performance is no guarantee that an application will run effectively, much less achieve leading-edge performance. Cray's focus has always been to deliver the world's most powerful HPC solutions to solve real world problems. This requires the ability to scale production applications, which in turn requires enhancements to the interconnect, software and overall system reliability. This presentation will highlight technology that Cray has implemented to achieve the highest level of sustained performance. It will also present examples of where this highly scalable technology has been applied to solve mission critical problems.

Biography

Greg Clifford has worked in the high performance computing (HPC) field for over 25 years, most recently with Cray Inc. as Manufacturing Segment Manager. Greg's primary focus has been on application performance on HPC architectures, which has involved close cooperation with application developers and users to improve real performance in production environments. Prior to joining Cray, Greg worked at IBM for 10 years on the HPC team and for 16 years at Cray Research in the Application Group. Greg has an MS in Structural Engineering from the University of Minnesota and has completed the "Executive Management Training" program at the Haas School of Business, University of California, Berkeley.

Annette D. Colbert-Latham

Chief Executive Officer
Visage Production, Inc.

Topic: "Royalties"

Slides:   PowerPoint2003   PowerPoint2007

Abstract

Calculating royalties, and developing formulas for calculating royalties, for software- and entertainment-related products...

Biography

Annette D. Colbert-Latham is an independent filmmaker developing documentaries, features and television programming, working with international connections, and preparing for a year at Sotheby's Institute in London in 2011.

Dan Dawson

NRC Postdoctoral Research Scientist
NOAA National Severe Storms Laboratory

Topic: "High Resolution Numerical Simulations of Severe Thunderstorms and Tornadoes"

Slides: available after the Symposium

Abstract

In recent years, supercomputing resources have increased to the point where it has become possible to simulate many atmospheric phenomena with unprecedented detail, both in regard to the physical processes and in the resolution of the computational grid. In particular, simulations of severe convective storms and tornadoes have improved significantly. This talk will address some of the recent research in this area, focusing on my own work, and reveal just what is possible with our current and near-future computing capabilities. Specifically, I will discuss several challenges that remain in regard to the physical modeling of cloud and precipitation processes within storms, how these affect the behavior of the storm and attendant tornadoes, and what still needs to be addressed as we move toward the goal of explicit numerical prediction of severe storms and tornadoes.

Biography

Dan Dawson received his B.S. in atmospheric dynamics from Purdue University in 2002, whereupon he moved to the University of Oklahoma to pursue M.S. (2004) and Ph.D. (2009) degrees, with a focus on numerical simulation and prediction of severe thunderstorms and tornadoes. He is currently a National Research Council (NRC) postdoc at the NOAA National Severe Storms Laboratory (NSSL) in Norman, OK. His current work involves ensemble numerical simulation and prediction of the 4 May 2007 Greensburg, KS storm and EF5 tornado, with a view toward improving our ability to predict individual storms and their attendant severe weather threats on the timescale of a few hours. His other research interests include cloud and precipitation microphysics and their interactions with supercell thunderstorm and tornado dynamics.

Kendra Dresback

Research Assistant Professor
School of Civil Engineering & Environmental Science
University of Oklahoma

Topic: "The Use of Supercomputing in Hurricane Storm Surge and Hydrodynamic Modeling"

Slides: available after the Symposium

Abstract

Due to the increase in supercomputing power and availability, we can now more accurately provide evaluations of hurricanes and the storm surge or maximum inundation associated with these hurricanes in real time. Within this presentation, we will discuss two applications of ADCIRC (Advanced CIRCulation), a 2D/3D hydrodynamic model based on the St. Venant equations subject to the standard Boussinesq approximation, that utilize supercomputing resources to obtain simulation results. Over the 20-year history of ADCIRC, applications have ranged from predicting the effects of coastal dredging to developing a tidal database to estimating the extent of hurricane storm surge inundation. When employing supercomputing facilities, ADCIRC utilizes the METIS algorithm for domain decomposition and MPI for communication between the subdomains. In an effort to streamline the writing of results and reduce simulation times, the development team implemented a new algorithm that utilizes dedicated writer processors to output the simulation results. Recently, to extend the capabilities of ADCIRC and improve its predictive ability in these and other applications, the development team has also been coupling ADCIRC to other models, either dynamically or one-way, depending on the physics of the problem. One application couples the 2D ADCIRC dynamically to an unstructured version of the SWAN wave model, with the HLRDHM hydrologic model providing fresh water inflows for major rivers and tributaries. Initially, the system is being tested on the Tar-Neuse-Pamlico Sound basin in North Carolina; preliminary results from Hurricane Isabel hindcasts will be shown. Another application couples the 3D baroclinic ADCIRC to the regional HYCOM model. In this presentation, we will show some preliminary results from the coupled HYCOM/ADCIRC system in the Northern Gulf of Mexico.
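
The writer-processor idea described above can be sketched in miniature: compute processes hand their results to a single dedicated output process, so solvers never block on I/O. This is a hedged illustration of the general pattern only, not ADCIRC's actual MPI implementation; the function names and the per-subdomain arithmetic are hypothetical stand-ins.

```python
from multiprocessing import Process, Queue

def compute(rank, out_q):
    # Stand-in "compute processor": each rank works on its own
    # subdomain and hands the result to the writer instead of
    # performing output itself.
    partial = sum(i * i for i in range(rank * 1000, (rank + 1) * 1000))
    out_q.put((rank, partial))

def writer(out_q, n_workers):
    # Dedicated "writer processor": drains results and performs all
    # output, so compute ranks resume work without waiting on the disk.
    for _ in range(n_workers):
        rank, value = out_q.get()
        print(f"subdomain {rank}: {value}")

if __name__ == "__main__":
    n_workers = 4
    q = Queue()
    w = Process(target=writer, args=(q, n_workers))
    w.start()
    workers = [Process(target=compute, args=(r, q)) for r in range(n_workers)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    w.join()
```

In an MPI code the same role would be played by ranks reserved for output; the design choice is the same either way: trading a few processors for the ability to overlap computation with result writing.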

Biography

Dr. Kendra M. Dresback is a Research Assistant Professor in the School of Civil Engineering & Environmental Science at the University of Oklahoma. She received her PhD in Civil Engineering at the University of Oklahoma. Her MS thesis investigated a predictor-corrector time-marching algorithm to achieve accurate results in less time using a finite element-based shallow water model; her dissertation focused on several algorithmic improvements to the same finite element-based shallow water model, ADCIRC. She has published papers in the area of computational fluid dynamics. Dr. Dresback's research includes the use of computational models to help in the prediction of hurricane storm surge and flooding in coastal areas and the incorporation of transport effects in coastal seas and oceans in ADCIRC. Her research has been supported with funding from the National Science Foundation, the US Department of Education, the Office of Naval Research, the US Department of Defense EPSCoR, the US Department of Homeland Security, NOAA and the US Army Corps of Engineers.

Brent Eskridge

Associate Professor
Department of Computer Science & Network Engineering
Southern Nazarene University

Topic: "Effective (ab)use of HPC with Non-parallelized Software"

Slides:   PDF

Talk Abstract: coming soon

Biography

Brent E. Eskridge, Ph.D., is an Associate Professor in the Department of Computer Science & Network Engineering at Southern Nazarene University. In 1995, he graduated summa cum laude with a BS in Physics and Mathematics from Southern Nazarene University. He earned his MS and PhD in Computer Science from the University of Oklahoma in 2004 and 2009, respectively. His primary research interests include multi-agent systems, machine learning, and robotics. He has industry experience in software development and has worked for companies such as Raytheon Systems Company and Rockwell International.

Dan Fraser

Senior Fellow
Computation Institute
University of Chicago

Topic: "High Throughput Parallel Computing (HTPC)"
(with Horst Severini)

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

High Throughput Parallel Computing (HTPC) is a computational paradigm for an emerging class of applications in which large ensembles (hundreds to thousands) of modestly parallel (4- to ~64-way) jobs are used to solve scientific problems ranging from chemistry, physics, and weather and flood modeling to general relativity. Parallel jobs, in general, are not very portable, and as a result are difficult to run across multiple heterogeneous sites. In the Open Science Grid (OSG) framework, we are currently working on minimizing these barriers for an important class of such jobs, in which the parallelism can be executed on a single multi-core machine. Some of the first HTPC jobs began running at the University of Oklahoma (OU) in 2009.
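
The HTPC pattern above — an ensemble of many independent, modestly parallel jobs — can be sketched on a single machine as follows. This is a minimal illustration under stated assumptions, not OSG's actual submission machinery; `ensemble_member` and its arithmetic are hypothetical stand-ins for a real modestly parallel executable that would occupy the cores of one node.

```python
from concurrent.futures import ProcessPoolExecutor

def ensemble_member(seed):
    # Stand-in for one modestly parallel job: in real HTPC this would
    # be an MPI or threaded executable pinned to a single node's cores.
    return seed, sum((seed + i) ** 2 for i in range(10_000))

if __name__ == "__main__":
    seeds = range(100)  # hundreds to thousands of members in a real ensemble
    # The pool plays the role of the grid scheduler: each member is an
    # independent job, so throughput scales with available resources.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(ensemble_member, seeds))
    print(f"completed {len(results)} ensemble members")
```

The portability point in the abstract follows from this structure: because each member fits on one machine, the ensemble needs no cross-site parallel runtime, only a way to schedule many whole-node jobs.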

Biography

Dan Fraser is a Senior Fellow at the Computation Institute at the University of Chicago. Currently he is the Production Coordinator for the Open Science Grid, a collaborative effort involving over 70 independent scientific institutions where his focus is to ensure that a heterogeneous, independently operated, and distributed computing system is resilient and remains functioning in a 24x7 production capacity. He has a PhD in Physics from Utah State University and over a decade of experience working with high performance science and commercial applications in both industry and academia.

Blake T. Gonzales

HPC Computer Scientist
Advanced Systems Group
Dell Inc

Topic: "Architecting High Performance Computing Systems for Fault Tolerance and Reliability"

Slides:   PDF

Talk Abstract

The complex and bleeding-edge nature of High Performance Computing systems at times has a negative impact on their ability to reliably complete the tasks at hand. At the same time, HPC systems generally perform many hundreds or thousands of independent jobs simultaneously. Because of this, reliability and fault tolerance are of utmost concern in HPC. Symmetric multiprocessor HPC systems are prone to system-wide failures due to single errors in memory, CPU, or disk. With the advent of clustered HPC technology, the risk of a system-wide failure can be minimized if the system is designed correctly. This talk explores the key hardware and software components that are likely to cause system-wide failures, and suggests architectural design techniques to prevent such failures.
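
A common software-level complement to the architectural measures the talk covers is application checkpoint/restart: a failed job loses only the work since its last checkpoint rather than the whole run. The sketch below is a hedged, minimal illustration, not any vendor's mechanism; the file name and the "computation" are hypothetical.

```python
import json
import os

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file name

def save_checkpoint(step, accum):
    # Write atomically: a crash mid-write must not corrupt the last
    # good checkpoint, so write to a temp file and then rename.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "accum": accum}, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    # Resume from the last saved state if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
        return state["step"], state["accum"]
    return 0, 0  # no checkpoint: start from scratch

def run(total_steps=1000, ckpt_every=100):
    step, accum = load_checkpoint()
    while step < total_steps:
        accum += step  # stand-in for one unit of real computation
        step += 1
        if step % ckpt_every == 0:
            save_checkpoint(step, accum)
    return accum

if __name__ == "__main__":
    print(run())
```

The checkpoint interval is itself a design trade-off of the kind the talk addresses: checkpointing often bounds lost work but adds I/O load, so the interval is usually tuned against the system's expected failure rate.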

Biography

As a Dell HPC Computer Scientist, Blake Gonzales brings a valued end-user perspective to the HPCatDell Team. Having served as a design and installation HPC engineer at ATK Thiokol, and formerly as a system administrator at Texas Instruments Defense Systems, Blake understands the challenges of architecting computing systems for maximum performance and ROI. One of Blake's areas of expertise includes designing HPC systems for fault tolerance and reliability. From a systems standpoint, Blake has worked with a variety of operating systems, schedulers, cluster management software, file systems, and applications.

Blake received his Bachelor of Science in Electrical Engineering from Louisiana State University with an emphasis in computer engineering, microprocessor/digital design. Currently enrolled in Colorado State University's Master of Computer Science Program, Blake is studying operating systems and parallel programming and expects to graduate in December 2010.

Roger Hall

Technical Director
MidSouth Bioinformatics Center
University of Arkansas at Little Rock

Topic: "Agent Designs for Cloud Bioinformatics"

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

"Infrastructure as a Service" (IaaS) begins to fulfill the decades-long dream of computer science for self-configuring and self-healing systems. For systems developers, IaaS has created a new API where, instead of allocating memory, one can allocate entire clusters or networks. Additionally, IaaS truly enables advanced Software Agents (SA), a development paradigm with a decades-long dream of self-configuring and self-healing software.

In this talk, we will review the basics of IaaS, discuss its synergies with Software Agents, examine an active Agent project, and demonstrate script utilities and job execution for cloud operation.

Biography

Mr. Hall has designed and developed over one hundred custom desktop, network, and internet applications using Windows, UNIX, Linux, MPE, PalmOS, C++, Perl, Visual Basic, Java, COBOL, JCL, JavaScript, CGI, ASP, JSP, SAS, Matlab, Maple, R, SQL Server, Oracle, Sybase, IMAGE, DB2, Access and mySQL. Working in bioinformatics for the last eight years, Mr. Hall previously completed projects in data warehousing, workflow, analysis, ecommerce, inventory, accounting, and customer relationship management for industries including biotech, healthcare, insurance, manufacturing, mail order, and professional organizations. Mr. Hall currently manages the MidSouth Bioinformatics Center at UALR, which provides bioinformatics research support to all Arkansas investigators.

Kevin Heisler
Kevin Heisler

Senior Systems Engineer
HPC Networking Division
QLogic

Topic: "Advanced TrueScale InfiniBand Fabric Performance Features"

Slides: available after the Symposium

Talk Abstract

Today's HPC clusters tend to be larger and to use processors with higher performance and greater core-count density, which makes the cluster fabric increasingly important to the cluster's overall optimization and performance.

This presentation will cover fabric optimization features contained within the latest release of QLogic's InfiniBand Fabric Suite (IFS). IFS allows users to manage clusters of any size to obtain the highest fabric performance, the highest communications efficiency, and the lowest management costs.

Biography

Kevin Heisler is a Senior Systems Engineer at QLogic, where he supports the sale of InfiniBand networking products. Mr. Heisler graduated from Purdue University (BS EE) and has more than 15 years of experience in information technology and direct sales. His background includes expertise in networking and software markets as well as 10 years of aerospace hardware design. Prior to joining QLogic, Mr. Heisler worked for Alcatel/Lucent, International Network Services/BT, Nortel/Bay Networks and TRW.

Deepthi Konatham
Deepthi Konatham

Graduate Research Assistant
School of Chemical, Biological & Materials Engineering
University of Oklahoma
Topic: "Graphene Sheets-Oil Nanocomposites: Equilibrium and Transport Properties from Molecular Simulation"
Slides: available after the Symposium

Talk Abstract

Nanostructured materials hold great promise in materials science. It has long been thought that dispersing nanoparticles in a polymer blend can enhance both mechanical and transport properties. It would, for example, be desirable to produce a polymer nanocomposite with high thermal conductivity. Such materials could be obtained by dispersing thermally conductive nanoparticles within polymers. Carbon-based nanoparticles are extremely promising toward these goals, although the use of carbon nanotubes is hindered by high resistance to heat transfer from the nanotubes to the polymer matrix.

We are interested in composites in which graphene sheets (GS) are dispersed within organic oils. Although pristine GS agglomerate when dispersed in oils such as octane, hexane and dodecane, our equilibrium molecular dynamics simulations demonstrate that when the GS are functionalized with short branched hydrocarbons, they remain well dispersed within the oils. We are now conducting equilibrium and non-equilibrium molecular dynamics simulations to assess the effective interactions between GS dispersed in oils, the self-assembly of GS within oils, the structure of the fluid surrounding the GS, and the heat transfer from a GS to the surrounding matrix. Our tools are designed to understand the effect of GS size, oil molecular weight, and molecular architecture on GS dispersibility and the GS-oil heat transfer rate. For example, we detail the formation of nematic phases for graphene sheets in oils at room conditions as a function of the graphene volume fraction. As expected, the transition from isotropic to nematic phase occurs at lower graphene sheet concentrations as the graphene sheet size increases.

As a consequence of the anisotropic molecular-level structure, a number of macroscopic properties show anisotropic behavior. We will discuss here our results, obtained conducting non-equilibrium molecular simulations and macroscopic modeling, for heat conductivity predicted for graphene-based nanocomposites.

Biography

Deepthi Konatham is a Graduate Research Assistant in the School of Chemical, Biological & Materials Engineering at the University of Oklahoma. She received her BS in Chemical Engineering from Jawaharlal Nehru Technological University (JNTU), Hyderabad, India in 2007. She came to OU in 2007 to pursue her PhD in Chemical Engineering under Dr. Alberto Striolo, as a member of the Molecular Science and Engineering group. Her research focuses on Molecular Simulation of Graphene Sheets for their application in Nanocomposites to form stable dispersions in organic oils, to produce materials with anisotropic properties and to use them as membranes for water desalination.

Allen LaBryer
Allen LaBryer

Graduate Research & Teaching Assistant
School of Aerospace & Mechanical Engineering
University of Oklahoma
Topic: "A Harmonic Balance Approach for Large-Scale Problems in Nonlinear Structural Dynamics"
Slides:   PDF

Talk Abstract

Harmonic balance (HB) methods allow for rapid computation of time-periodic solutions for nonlinear dynamical systems. We present a filtered high dimensional harmonic balance (HDHB) approach, which operates in the time domain, and provide a framework for implementation into an existing finite element solver. To demonstrate its capabilities, the method is used to solve a set of nonlinear structural dynamics problems related to the field of flapping flight. For each example, the HDHB approach produces accurate steady-state solutions orders of magnitude faster than a traditional time-marching scheme.

Biography

Allen LaBryer is a Graduate Research and Teaching Assistant in the School of Aerospace & Mechanical Engineering at the University of Oklahoma. He received a BS in Aerospace Engineering from the University of Michigan and an MS in Aerospace Engineering from the University of Oklahoma (OU) in 2009, and is currently pursuing his PhD at OU. Before coming to OU, he was an Aerospace Engineer at The Boeing Company and an Aerospace Engineering intern at Smiths Aerospace, now part of General Electric.

Evan Lemley
Evan Lemley

Professor
Department of Engineering & Physics
University of Central Oklahoma

Topic: "Building a System to Perform Fluid Dynamics Simulations and Experiments"

Slides:   PDF

Talk Abstract

Coming soon

Biography

Evan Lemley received his BA in Physics from Hendrix College and his MS and Ph.D. in Engineering (Mechanical) from the University of Arkansas. His thesis work was focused on modeling and simulation of various neutron detectors. After graduation, Evan worked for the engineering consulting firm Black & Veatch in a group responsible for modeling coal power plants with custom-written software.

In August 1998, Evan became an Assistant Professor in the Department of Engineering and Physics (formerly Physics) at the University of Central Oklahoma, and has been there since, teaching mechanical engineering, physics, and engineering computation courses. Early research at UCO was focused on neutron transport in materials. More recently, Evan has been involved in simulation of flow in microtubes and microjunctions and simulation of flow in porous networks.

Greg Monaco
Greg Monaco

Director for Research and Cyberinfrastructure Initiatives
Great Plains Network

Topic: "Cyberinfrastructure Planning Across the Great Plains"

Slides:   PowerPoint2003   PowerPoint2007   PDF

Abstract

The Great Plains Network is a regional organization that crosses state boundaries and has historically offered one major service, advanced networking. GPN faces challenges in attempting to identify services and meet needs across the broader spectrum of cyberinfrastructure. Recognizing these challenges, the GPN Executive Council charged a GPN CI Advisory Committee (CIAC) with the tasks of (a) assessing CI needs in the GPN community beyond network connectivity, and (b) formulating a "CI plan to make it possible to achieve [GPN Strategic] objectives and distinguish the role of GPN from that of campus and state members and national organizations and to partner with these organizations in ways that complement and advance the mission of GPN members." CIAC members were selected from among the GPN CI community to be representative of the GPN membership, and included one outside member.

The committee developed two survey instruments. The first survey was designed to identify areas of CI need, and the second was designed to identify high priority CI services that GPN might offer to fill those needs. This presentation will discuss the process, results, and recommendations of the GPN CI Advisory Committee. These recommendations were unanimously accepted by the GPN Executive Council.

Biography

Dr. Greg Monaco has held several positions with the Great Plains Network since August 2000, when he joined GPN. He began as Research Collaboration Coordinator, and then was promoted to Director for Research and Education, followed by Executive Director for several years. He is currently the Director for Research and Cyberinfrastructure Initiatives.

Jeff Pummill
Jeff Pummill

Manager for Cyberinfrastructure Enablement
Arkansas High Performance Computing Center
University of Arkansas

Topic: "Introduction to FREE National Resources for Scientific Computing"
(with Dana Brunson)

Slides:   PDF

Abstract

As the need for computational resources in scientific research continues to outgrow local campus infrastructure, it becomes imperative to seek out alternatives that can meet that critical need. Unknown to many, a significant number of such resources are available at the national level to academic researchers at no charge! This session will cover these resources, their requirements for access, and what they offer researchers in terms of both raw hardware and advanced user support. While TeraGrid will be the primary focus, a number of other offerings will be mentioned during the session.

Biography

Jeff Pummill is the Manager for Cyberinfrastructure Enablement at the University of Arkansas. He has supported high performance computing activities at the University of Arkansas since 2005, serving first as Senior Linux Cluster Administrator, and has more than a decade of experience managing high performance computing resources. Jeff is also the TeraGrid Campus Champion for the University of Arkansas, and is a very active contributor at the national level on the Campus Champion Leadership Team.

Steve Rovarino

Director of StorNext Software
Quantum Software Products
Quantum Corp.

Topic: "Points of Consideration in the Need for Long Term Retention of Scientific Data"

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

The growing volume of higher-resolution results generated by HPC analysis is changing the way data needs to be managed and retained. Federally sponsored research and mandated retention further challenge institutions faced with preserving massive amounts of data for longer periods of time. The volume of information created will only increase as HPC systems become faster, yet finite restrictions on budgets, space, and manpower are hard to overcome. This session will explore some of the many factors to consider when faced with such retention requirements. The technology one might use depends on many factors; knowing the options and the right questions to ask will facilitate a choice that meets your long-term data management requirements.

Biography

Steve Rovarino is one of the pioneers of high performance file systems, with 16 years of experience in high performance file systems and archival solutions. Prior to Quantum, Steve worked for MountainGate and ADIC, where the StorNext File System and Storage Manager originated. His responsibilities with MountainGate and ADIC included product development, vertical solution development, and strategic alliance development. His in-depth knowledge of file system and archival challenges qualifies Steve to lead software sales for Quantum in the Asia Pacific and the Americas. His vision of how to solve data storage and file system problems has driven many key IT vendors and Fortune 500 accounts to become technology partners or customers of StorNext. As a result, Quantum has forged strategic relationships around StorNext software with companies including Apple Computer, Dell Computer, EMC, Hewlett Packard, and Sony.

Larry F. Sells
Larry F. Sells

Adjunct Professor
Computer Science
Oklahoma City University

Topic: "Using Remote HPC Resources to Teach Local Courses"
(with Clay Carley and Charlie Zhao)

Slides: PDF

Talk Abstract

At our institutions, parallel computing is a topic of growing interest, but providing local High Performance Computing (HPC) resources is impractical. Under the Oklahoma Cyberinfrastructure Initiative, we have been able to teach this topic via access to resources at the OU Supercomputing Center for Education & Research (OSCER), affording both rich parallel computing experiences and an unprecedented opportunity to use a production system of substantial scale, so that we can teach not only the foundations of parallel computing but also the practical realities of HPC. In this presentation, we discuss both our individual institutions' unique experiences and the commonalities that our courses have shared.

Biography

Larry F. Sells obtained a B.A. in English from Franklin College (Indiana) in 1963; an M.A. (1966) and a Ph.D. (1970) in English from Pennsylvania State University; and an M.S. in Computer Science Education from the University of Evansville in 1985.

Dr. Sells taught in the English Department of Westminster College (Pennsylvania) from 1968 to 1985. He was a full-time faculty member in the Department of Computer Science at Oklahoma City University (OCU) from 1985 to 2008. He is currently an adjunct professor at OCU.

Horst Severini
Horst Severini

Research Scientist
Department of Physics & Astronomy
University of Oklahoma

Topic: "High Throughput Parallel Computing (HTPC)"
(with Dan Fraser)

Slides:   PowerPoint2003   PowerPoint2007   PDF

Talk Abstract

High Throughput Parallel Computing (HTPC) is a computational paradigm for an emerging class of applications in which large ensembles (hundreds to thousands) of modestly parallel (4- to ~64-way) jobs are used to solve scientific problems ranging from chemistry, physics, and weather and flood modeling to general relativity. Parallel jobs, in general, are not very portable, and as a result are difficult to run across multiple heterogeneous sites. In the Open Science Grid (OSG) framework, we are currently working on minimizing these barriers for an important class of these jobs, in which the parallelism can be executed on a single multi-core machine. Some of the first HTPC jobs began running at the University of Oklahoma (OU) in 2009.

Biography

Horst Severini earned his Vordiplom (BS equivalent) in Physics at the University of Wuerzburg in Germany in 1988, then went on to earn a Master of Science in Physics in 1990 and a Ph.D. in Particle Physics in 1997, both at the State University of New York at Albany.

He is currently a Research Scientist in the High Energy Physics group at the University of Oklahoma, where he is in charge of computing at the US ATLAS SouthWest Tier2 facility at OU. He is also the Grid Computing Coordinator at the Oklahoma Center for High Energy Physics (OCHEP), and the Associate Director for Remote and Heterogeneous Computing at the OU Supercomputing Center for Education & Research (OSCER).

Wade Vinson
Wade Vinson

Distinguished Technologist
POD Chief Architect
Power and Cooling Strategist
Service Provider and High Performance Computing Business
Hewlett Packard

Topic: "Breakthrough Cost Model for New Data Centers from 150 Kilo Watts to 20 Mega Watts"

Talk Abstract

See how customers are deploying data centers in weeks rather than years. See how the new Modular POD allows you to scale out at the pace that is right for your business. See cost comparisons of brick-and-mortar data centers versus the POD, which can reduce cost by up to 45% and provide the flexibility to "pay as you grow."

Biography:

Wade Vinson is the mechanical, power and thermal architect for the POD, HP's Performance Optimized Data Center. In his strategist role, he works with customers, marketing and technical staff to evaluate, communicate and plan technologies for the thermal design of servers and deployments. This includes fans, heatsinks, liquid cooling and data center innovations to reduce customer power, capital expense and time-to-deployment, and to improve total cost of ownership. Vinson was the Thermal Architect for the BladeSystem c-class, leading a team that designed the Active Cool Fan and PARSEC thermal architecture. Since joining Compaq/HP in 1995 as a Mechanical Engineer, Vinson has designed mechanical components and thermal solutions and led the mechanical design team on many of HP's industry-leading Proliant servers. This work includes 35 issued US patents in thermal and mechanical design.

Vinson earned a Bachelor of Science degree in mechanical engineering from the University of Houston, and dual Bachelor's degrees in business administration (marketing and management) from the University of Texas at Austin.

Kent Winchell
Kent Winchell

Distinguished Engineer and Deep Computing Chief Technology Officer
World Wide Deep Computing
IBM

Topic: "Petascale Computing for Science"

Slides:   PDF

Talk Abstract

Kent will discuss IBM's High Performance Computing strategy in the Petascale era, with a focus on its benefits for and impact on scientific research. With the impending arrival of the Blue Waters supercomputer for the TeraGrid, Kent will also provide an overview of the system and its application to science. Lastly, Kent will discuss IBM's hybrid computing effort involving standards such as OpenCL.

Biography

Kent Winchell has worked in the high performance computing and IT architecture field for over 20 years. Joining IBM in 1981, Kent now manages IBM's worldwide team as Chief Technology Officer (CTO) of Deep Computing. Some of Kent's projects include:

  • Architecting a complete solution for the National Polar-orbiting Operational Environmental Satellite System (NPOESS)
  • Designing and implementing bioinformatics clusters for mass spectrometry at several universities
  • Serving as Lead Engineer for DINPACS medical imaging systems

Kent holds a B.Sc. in Computer Science from the University of Wyoming and an MS in Software Engineering from the University of Houston. Kent lives in Colorado Springs, CO, and works with a wide range of customers for IBM.

Chao (Charlie) Zhao
Chao (Charlie) Zhao

Associate Professor
Department of Computing & Technology
Cameron University

Topic: "Using Remote HPC Resources to Teach Local Courses"
(with Clay Carley and Larry Sells)

Slides: PDF

Talk Abstract

At our institutions, parallel computing is a topic of growing interest, but providing local High Performance Computing (HPC) resources is impractical. Under the Oklahoma Cyberinfrastructure Initiative, we have been able to teach this topic via access to resources at the OU Supercomputing Center for Education & Research (OSCER), affording both rich parallel computing experiences and an unprecedented opportunity to use a production system of substantial scale, so that we can teach not only the foundations of parallel computing but also the practical realities of HPC. In this presentation, we discuss both our individual institutions' unique experiences and the commonalities that our courses have shared.

Biography

Dr. Chao (Charlie) Zhao is an associate professor in the Department of Computing & Technology at Cameron University. He received a B.S. in Biology from Liaoning Normal University in 1982. After graduating from college, he became an instructor at Shenyang University; three years later, he was appointed chairman of the biology department at the same university. In 1992, he was invited to the United States as a visiting scholar at Texas A&M University-Commerce, where he earned an M.S. in Biology and Higher Education in 1994, an M.S. in Computer Science in 1998, and an Ed.D. in Higher Education in 1999. He was hired as a tenure-track assistant professor in the mathematical sciences department at Cameron University in 1999 and, after five years, was tenured and promoted to associate professor in 2004. He has continued teaching at Cameron ever since, and has taught a number of CS courses, such as Operating Systems, Network Programming, Parallel Computing, Databases, Software Engineering, Data Structures, Computer Science I, Computer Science II, and CS seminars on current topics.

