CONTACT: Mike Williams
PHONE: 713-348-6728
E-MAIL: [email protected]

Standards seen as key to parallel progress
High Performance Computing Workshop at Rice University advances case for education

Tim Mattson, an applications programmer at Intel, delivered an impassioned plea at Rice University Thursday to audience members who may be in the market for high-performance computing resources: Make manufacturers play nice.

Mattson told attendees at the third Oil and Gas High Performance Computing Workshop, hosted by Rice’s Ken Kennedy Institute for Information Technology, that without pressure from customers, chip manufacturers would have no incentive to standardize existing or future programming tools in ways that would streamline the adoption of a new generation of multicore processors.

A sellout crowd at Rice’s Duncan Hall listened as Mattson, in ponytail and jeans, gave a rapid-fire overview of the cutting-edge parallel processors coming out of Intel Labs. He said it was critical to have software engineers in on the design of a recent Intel research project, a 48-core model, from the start.

But he warned that if customers don’t lay down the law, chaos will ensue. “Hardware is marching along, but if you want really great software (portability and tools), I’m telling you folks, it’s a train wreck. We are setting up a great train wreck here, and I’m hoping we can avoid it,” he said.

“We have to remember we’re at many-core solutions because we’re limited in our ability to crank the frequency on a chip,” Mattson said. “We have a very deep, fundamental – and from the point of view of a computer company – a very dangerous mismatch. Parallel hardware is ubiquitous, and it’s getting more and more parallel and more and more ubiquitous. Parallel software is rare, and I don’t see anything changing that.”

Mattson said the next generation of computer software for everything from supercomputers to netbooks has to make use of tools supported by standards, not proprietary solutions. In particular, he held out OpenMP as an example of success, and he likes how OpenCL is trying to do the same for general-purpose computing on graphics processing units. (A short sketch below illustrates the kind of standards-based parallel code he was describing.)

“This is a sincere plea: Those of you who are from companies in the oil industry and who actually use systems, please save us from ourselves. I’m talking about the academic/industry parallel computing world. Please, save us, because only you can do it.”

Mattson gave a sense of his glee at having worked on the first teraflop computer, Intel’s ASCI Red, in 1998 and the first 80-core teraflop chip a mere decade later. “This is my personal Moore’s Law: to go in one 10-year period within my career from a megawatt of electricity in 1,600 square feet to 97 watts in 297 square millimeters,” he said, showing a slide of a house-sized supercomputer next to another of a chip. “That’s cool.”

He said Intel plans to offer about 100 of the 48-core prototype processors announced in late 2009 to university and industry researchers for testing. They will not be for sale; they were created to help stimulate new thinking about programmability, tools and the implementation of parallel algorithms. “What we learn from this will be incorporated into future Intel technology,” Mattson said.

Training up-and-coming computational scientists to think in parallel was a theme many speakers addressed at the workshop.
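For readers unfamiliar with OpenMP, the standard Mattson held up as a success, the short sketch below (a minimal illustration, not code shown at the workshop) suggests its appeal: a single portable compiler directive, rather than vendor-specific calls, spreads a loop across however many cores a machine has. The dot-product computation and array sizes here are arbitrary choices made for the example.

/* A minimal OpenMP sketch in C: the same directive works with any
   OpenMP-aware compiler, on two cores or 48. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];   /* static keeps the large arrays off the stack */
    double dot = 0.0;

    /* Fill the vectors with simple values. */
    for (int i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* One standard, portable directive splits the loop across all available
       cores and combines each thread's partial sum into dot. */
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %.1f (up to %d threads)\n", dot, omp_get_max_threads());
    return 0;
}

Built with any OpenMP-capable compiler (for example, gcc -fopenmp), the same source runs unchanged on a dual-core netbook or a many-core server, which is the kind of portability the standards Mattson described are meant to deliver.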
With computer technology expected not only to keep up with, but by some projections even to surpass, the limits implied by Moore’s Law over the next decade, training enough computational scientists to keep pace remains a problem.

Certainly that’s the case in the energy industry, a point made in no uncertain terms by Peter Carragher, vice president of geoscience and exploration in BP’s access and exploration unit, who delivered the morning keynote.

“Trying to explore oil fields with a drill bit when it’s costing you $500,000 a day is quite an expensive hobby,” Carragher said before taking listeners through the life cycle of an undersea oil field. He demonstrated the importance of recent seismic technologies that let geophysicists see miles beneath the seabed to identify deposits and weed out false leads. High-performance computing (HPC), he said, is the key driver in imaging quality. “Seismic imaging is letting subsalt reservoir images emerge that would not have been possible to see without HPC,” he said.

Jay Boisseau, director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, addressed the higher-education initiatives needed to support HPC. “The tech bubble of the ’90s sort of subverted our scientific-computing education,” he said. He noted that a number of computer science departments have eagerly migrated to the Java programming language at the expense of others more appropriate to scientific computing.

“There was an explosion of value in science and business computing in recent years, as well as entertainment-oriented computing, and that all led to the tech bubble. The downside is that we’ve had 10 to 20 years of reduced focus in scientific computing curricula overlapping with the onset of massive system and on-chip parallelism,” Boisseau said. When even $300 netbook computers boast dual-core processors, the trend becomes clear, he added.

“Over the past 60 years, computing has become the most important general-purpose instrument of science,” Boisseau said. “The onset of multicore is really going to help us train a future generation of people who think in parallel, not just in Java.”

Dave Hale, the C.H. Green Professor of Exploration Geophysics at the Colorado School of Mines, addressed the question more directly: “Who’s going to write the code for my 1,000 cores?”

Hale said he often encounters companies that look for a human-resources solution to a difficult problem. “They say, ‘We need 1.5 geophysicists and 1.5 computer scientists,’” he said. “But this partial solution doesn’t work as well in practice as we might hope.”

The solution tends to be unstable, he said. While teams occasionally gel, more often the computer scientists become frustrated. “You have the ‘vision bunny’ (the geophysicist) and the ‘code monkey’ (the computer scientist). The geophysicist can think of ideas 10 times faster than the computer scientist can implement one. And maybe the computer scientist isn’t so excited about implementing the ideas, especially when he has ideas of his own,” Hale said.

Institutions need to encourage scientists who have an interest in computational science to pursue programming on the side, he said.
Andy Bechtolsheim, chief development officer at Arista Networks and a founder and former chief system architect at Sun Microsystems, noted that processing power has increased a millionfold over the past 40 years, and said he wouldn’t be surprised to see exaflop clusters, with as many as 100 million cores, debut within 10 years, a prospect that would only add to the pressure on programmers. “What do we do with 100 billion transistors per chip?” he asked.

Bechtolsheim provided insight into what CPUs will look like in the future. Multichip stacking, a technique already used in cell phones, is becoming common; it helps to solve the architectural, power and cooling problems that grow with every generation. He also discussed fiber-based networking techniques that will need to scale up to handle the processing power to come.

André B. Erlich, vice president of technology and innovation at Schlumberger, took a macro view of oil supply and demand. He noted that while investment in resources has tripled, too little capacity has been added to industry or national resources. “It really shows the easy oil is over,” he said. “The reservoirs are more complex, and they require more work” to find new oil and extract more of it.

He said Schlumberger is moving into places such as offshore Greenland, sub-Saharan Africa and Siberia, “where the logistics are difficult, the geology is nerve-racking and temperatures and conditions are even worse.” An increased ability to run such wellheads from remote locations through real-time reservoir management may ease the difficulties, but someone has to understand the computational challenge.

“What scares us a little bit is the demographics,” Erlich said. “The big crew change is coming: professionals in the industry will retire in the next five to 10 years. We are facing a problem of the transfer of knowledge and how often we see people making the same errors.”

He said estimates put the “talent gap” for oil and gas professionals at between 9,000 and 50,000 workers in as little as two years. “We need to go beyond the classic recruiting methods and provide access to more recent or modern commercial technologies,” he said. “We need (students) to know what they’re getting into, so they’ll be better prepared.”

Along with Rice, workshop organizers included BP, Chevron, Fusion Petroleum Technologies, Hess, Total and WesternGeco. Sponsors included AMD, APPRO, Arista, Brocade, Convey Computer, CyrusOne, Intel, Panasas, RNA Networks, SGI, Texas Memory Systems and Unique Digital Inc.

A webcast of the conference and copies of the slides will be available at http://og-hpc.org/Rice2010.