
Science / Medicine: The Numbers-Crunching Frontier: Scientists are near success in a race to build a supercomputer that works 100 times faster than today’s most powerful machines. Their efforts are expected to bring major advances in global climate research, chemistry and aircraft design.

Zorpette is a technology writer based in New York and an editor at Spectrum Magazine.

What the South Pole was to explorers in the early 20th Century, what Mt. Everest was to mountain climbers in the early 1950s, what the moon was to the aerospace community in the 1960s--the “one teraflops” machine has become to computer scientists in the 1990s.

“One teraflops” is shorthand for 1 trillion floating-point operations per second, a level of performance that is at least 100 times greater than the execution rates sustained by today’s most powerful computers while solving complex problems in science, technology and finance. Much more than an abstract exercise in computer machismo, the race for a teraflops is expected to result in major advances for researchers working in fields as diverse as global climate change and the design of pharmaceuticals.
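The unit itself is easy to illustrate. The short Python sketch below, added here as an illustration rather than anything drawn from the race itself, times a simple loop and divides the number of arithmetic operations performed by the elapsed time; the result is the same kind of operations-per-second figure, writ very small, that the supercomputer makers quote.

    # Illustrative sketch (not from the article): estimate a floating-point
    # rate by counting operations and dividing by elapsed time.
    import time

    N = 1_000_000        # elements in the working list
    PASSES = 20          # repeat the pass to get a measurable elapsed time

    x = [1.000001] * N
    start = time.perf_counter()
    for _ in range(PASSES):
        x = [v * 1.000001 + 0.5 for v in x]   # 2 floating-point ops per element
    elapsed = time.perf_counter() - start

    flops = 2 * N * PASSES / elapsed
    print(f"about {flops / 1e6:.0f} million floating-point operations per second")
    print(f"one teraflops is roughly {1e12 / flops:,.0f} times faster")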

Those in the race--a dozen or so of the most advanced computer companies in the United States and Japan, with support from both governments--concede that one teraflops is merely a milestone on the way to far greater processing rates, much as the moon was supposed to be a steppingstone to Mars. But experts expect that within three to five years teraflops machines will begin helping them revolutionize the study of subatomic physics, meteorology, and the molecular basis of diseases such as AIDS. They could also be the elixir for a host of new materials, from semiconductors to “superalloys”--high-strength, lightweight materials for engines, buildings, vehicles, and the like. These are among the most pressing of the so-called “grand challenges” identified by government researchers as justifying the need for vastly more powerful computers.


Experts also expect that the route to one teraflops machines will be paved with technological advances that could lead, for example, to suitcase-sized or possibly even desktop computers that are as powerful as today’s fastest supercomputers. The implications, both scientific and commercial, for the country where the advances are first made could be profound.

Like the explorers and adventurers who came before them, those involved in the one-teraflops race speak of their quest in fervent, almost reverent terms. “It’s going to affect all of our lives,” said Martin Walker, director of parallel applications at Cray Research Inc., the Eagan, Minn.-based company that is widely regarded as the leading maker of supercomputers. “We’re all going to experience the benefits when these challenges are met. So the motivation is very strong.”

One of the most active--and urgent--areas of research in supercomputing involves using the machines to forecast long-term shifts in the Earth’s weather and temperature patterns, an application known as climate modeling. “Human activity is changing the biosphere--the atmosphere and the oceans--of Earth,” Walker said. “We don’t understand precisely how, but some of these changes are not pleasant. Ultraviolet radiation causes skin cancer--we know that’s true. Ozone in the upper atmosphere (which blocks ultraviolet rays) is getting thin, and not just in the polar regions.


“I would like my kids to go out and play in the summer without having to wear protective equipment.” In addition to the ozone layer, climate modeling takes aim at subjects such as the reliability of crops, the availability of water and the welfare of various plant and animal species.

The problem with climate-modeling programs is that on today’s supercomputers they are both woefully slow and incapable of providing the kind of detail needed for meaningful analysis and forecasts. Robert Chervin, a researcher at the National Center for Atmospheric Research in Boulder, Colo., recalled a program he wrote for a Cray-1, a 1970s-vintage supercomputer. To simulate global climate change over a period of 20 years, the program ran for eight months.

“The good news was that it ran faster than real time (20 years),” he said. “The bad news was that I had aged considerably before it was done.” More modern supercomputers can perform such simulations in a few hundred hours, but the level of detail is still inadequate. In many current global climate models, for example, the entire six-county area from Santa Barbara to San Diego is represented by a single data point, as though the weather over the whole area were uniform.
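A rough calculation, added here with assumed numbers rather than NCAR’s own, shows why finer detail is so costly: halving the spacing between grid points quadruples the number of points covering the Earth’s surface, and because the simulation’s time step must usually shrink along with the spacing, the total work grows roughly eightfold with each halving.

    # Back-of-the-envelope sketch (assumed figures, not NCAR's): the cost of a
    # global model as the horizontal grid spacing shrinks.
    EARTH_SURFACE_KM2 = 510e6    # approximate surface area of the Earth

    def relative_cost(spacing_km, base_km=500.0):
        cells = EARTH_SURFACE_KM2 / spacing_km ** 2
        base_cells = EARTH_SURFACE_KM2 / base_km ** 2
        # work ~ number of cells x number of time steps; steps scale like 1/spacing
        return (cells / base_cells) * (base_km / spacing_km)

    for spacing in (500, 250, 125, 60):
        cells = EARTH_SURFACE_KM2 / spacing ** 2
        print(f"{spacing:>4}-km grid: {cells:>10,.0f} surface cells, "
              f"about {relative_cost(spacing):,.0f} times the work of a 500-km grid")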


The same problems--long “run times” and insufficient detail--plague designers of all kinds of products, from airplanes to high-tech materials. “We can do very detailed two-dimensional simulations”--for example, of the flow of air over a cross section of an airfoil--noted William J. Camp, manager of mathematics and computational science at Sandia National Laboratories in Albuquerque, N.M.

But a three-dimensional simulation of an airplane wing, with an engine, takes “hundreds of hours, even (with) the most powerful supercomputers. And you still can’t get enough detail to dispense with a great deal of the experimental work”--such as wind-tunnel tests. “So supercomputers are largely used to interpolate between tests, rather than in the design itself.”

To be useful in design, the time needed for such a simulation would have to be reduced to an hour or so, according to Camp, who is also technical director of the Department of Energy’s Massively Parallel Computing Research Laboratory. He estimated that a teraflops machine could also perform a detailed simulation of the aerodynamics of an entire airplane--a task beyond the capabilities of today’s machines--in about a hundred hours. “One teraflops will give you that,” he said.
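Camp’s arithmetic follows from a simple scaling assumption: run time falls roughly in proportion to a machine’s sustained speed. The sketch below, added here with assumed figures loosely based on those quoted in the article, works that out.

    # Rough scaling sketch (assumed figures): run time shrinks in proportion
    # to sustained speed if the amount of work stays the same.
    def scaled_hours(hours_today, sustained_today_gflops, sustained_future_gflops):
        return hours_today * sustained_today_gflops / sustained_future_gflops

    wing_hours_today = 300     # "hundreds of hours" on today's machines (assumed)
    today_gflops = 4           # sustained rate of a current supercomputer (assumed)
    future_gflops = 330        # roughly a third of a teraflops sustained (assumed)

    print(f"wing plus engine: about "
          f"{scaled_hours(wing_hours_today, today_gflops, future_gflops):.0f} hours "
          f"on a teraflops machine")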

The design of materials, including plastics, ceramics, metal alloys, and ceramic-metal mixtures called cermets, is another field that would benefit greatly from a teraflops computer, Camp said. “Right now, we’re at the state in materials science where if you tell me the molecules and their arrangement (in the material), I can do a pretty good job of telling you how the material will behave. What we really want is the reverse: Tell me what you want, and I’ll tell you the alloy needed.”

If the date when researchers will “get there” is still uncertain, the inevitability of it, and even the path to it, are now clear. Computer scientists agree that the first teraflops machines will be “massively parallel,” meaning that they will be based on interconnections of hundreds or thousands of relatively ordinary processing units, as opposed to the dozen or so extremely powerful and technologically exotic processors found in a conventional supercomputer.
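A standard rule of thumb known as Amdahl’s law, which the article does not name but which parallel-computer designers lean on, shows both the promise and the catch in this approach: whatever part of a program cannot be split among the processors quickly becomes the bottleneck. A short sketch of the relation, added here as an illustration:

    # Illustrative sketch (added here): Amdahl's-law speedup for a program in
    # which a given fraction of the work can be divided evenly among p processors
    # while the remainder runs on one processor at a time.
    def speedup(parallel_fraction, processors):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

    for p in (12, 528, 4096):
        print(f"{p:>5} processors: {speedup(0.999, p):7.1f}x speedup if 99.9% of the "
              f"work parallelizes, {speedup(0.90, p):5.1f}x if only 90% does")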

In fact, last November, two makers of massively parallel supercomputers--Thinking Machines Corp. in Cambridge, Mass., and Intel Supercomputer Systems Division in Beaverton, Ore.--introduced architectures (schemes for interconnecting processors) that they said could theoretically be scaled all the way up to a teraflops. However, spokesmen for the companies conceded that at current prices of components, these teraflops-level machines would cost well over $100 million. No company or research organization has ever paid more than about $25 million for a supercomputer, and it will be several years before the current architectures can accommodate teraflops performance in this price range.


Still, the machines are already offering glimpses of things to come. The Touchstone Delta System, a sort of prototype for the recently introduced Intel line, was installed at Caltech last May. Based on 528 processors arranged in a flat grid, the machine recently achieved a processing rate of 13.9 billion floating-point operations per second. (In floating-point operations, the decimal point is not constrained to the same place in every number.)

However, this rate, the highest ever achieved by a computer, was recorded as the Delta churned through a mathematical benchmark program designed specifically to test the pure speed of the computer. That program lacks the complexities and quirks of a real-world simulation. Such benchmark rates are about three times what the Delta and other supercomputers normally sustain on actual scientific simulations and other programs, according to Terry Cole, chief technologist at Caltech’s Jet Propulsion Laboratory.
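The article’s own figures invite a little arithmetic, sketched below: spread across the Delta’s 528 processors, the record benchmark rate works out to roughly 26 million operations per second apiece, and a sustained rate of about a third of the peak still leaves a factor of more than 200 between today’s real-world performance and a full teraflops.

    # Small arithmetic check using the figures quoted in the article.
    peak_flops = 13.9e9          # record benchmark rate on the Touchstone Delta
    processors = 528
    sustained = peak_flops / 3   # benchmark rates run about three times sustained rates

    print(f"per processor (peak): about {peak_flops / processors / 1e6:.0f} million ops/sec")
    print(f"sustained on real programs: about {sustained / 1e9:.1f} billion ops/sec")
    print(f"factor short of a full teraflops: about {1e12 / sustained:.0f}")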

The Delta is being used to model everything from the Earth’s climate to individual brain cells. In explaining the Delta’s superiority over older supercomputers for modeling the Earth’s climate, Cole likened the performance of the older machines to “looking at a very high-resolution picture through ground glass: You lose all the details. With the speed of the Delta, you can sharpen the focus, so to speak.” As an example he cited the first program run on the Delta, which simulated extremely long pressure waves, known as Rossby-Haurwitz waves, that travel in the Earth’s atmosphere and are partly responsible for the movement of weather.

On the Delta, the simulations were detailed enough “to see some aspects of (the waves’) behavior that no one had seen, or even expected to see, before,” said Ian Foster, a scientist at Argonne National Laboratory in Illinois who helped write the simulation. The group gained insights into how the waves go from unstable to stable conditions, and how their symmetry about the Equator disintegrates over a six-day period. The findings could help explain various meteorological mysteries, such as why weather is not very symmetrical.

Though enthusiastic about such discoveries, scientists and designers are quick to point out that it will take more than tremendous processing rates and memories to tackle the grand challenges. Many solutions will hinge on the ability to communicate the results of awesomely complex calculations so as to enlighten and inform as instantaneously as possible. With the wing of an airplane displayed on the screen of a computer workstation, for example, a designer would be able to make minute changes to its shape and immediately see how they would alter the aerodynamic forces acting on the wing. This capability, which computer scientists call visualization, has made significant strides in recent years but has further to go before graphics systems can keep up with supercomputers.

“We have a tremendous ability to detect patterns and develop hypotheses about things seen in graphic form, whereas we may miss these conclusions entirely if we were to see the same information in a great stack of computer printouts or lists of numbers,” D. Allan Bromley, President Bush’s science adviser, told a gathering of supercomputer vendors and users this fall. “If we were able to fully exploit these mental capabilities through the use of supercomputers, we would open up whole new areas of scientific, and human, endeavor.”
