Multi-core is bad for Supercomputers

With no other way to improve processor performance further, chip makers have staked their future on putting more and more processor cores on the same chip. Engineers at Sandia National Laboratories, in New Mexico, have simulated future high-performance computers built from the 8-core, 16-core, and 32-core microprocessors that chip makers say are the future of the industry. The results are distressing. Because of limited memory bandwidth and memory-management schemes poorly suited to supercomputers, the performance of these machines would level off or even decline as more cores are added. Performance is especially poor for informatics applications: data-intensive programs that are increasingly crucial to the labs' national security function.
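To see why more cores can stop helping, consider a minimal back-of-the-envelope sketch of a chip whose cores all share one memory bus. The numbers below (shared bandwidth, per-core compute rate, arithmetic intensity) are illustrative assumptions, not figures from the Sandia study; the point is only that once the shared bandwidth ceiling is reached, total throughput flattens and per-core performance falls.

```c
/* Illustrative model: cores share a fixed memory bandwidth.
 * All constants are made-up example values, not Sandia data. */
#include <stdio.h>

int main(void) {
    const double mem_bw_gbs     = 25.0;  /* assumed shared memory bandwidth, GB/s  */
    const double core_gflops    = 10.0;  /* assumed peak compute per core, GFLOP/s */
    const double flops_per_byte = 1.0;   /* assumed arithmetic intensity of the code */

    /* Bandwidth-limited ceiling for the whole chip, independent of core count. */
    const double bw_ceiling = mem_bw_gbs * flops_per_byte;

    for (int cores = 1; cores <= 32; cores *= 2) {
        double compute_ceiling = cores * core_gflops;
        double attained = compute_ceiling < bw_ceiling ? compute_ceiling : bw_ceiling;
        printf("%2d cores: %.2f GFLOP/s total, %.2f per core\n",
               cores, attained, attained / cores);
    }
    return 0;
}
```

Under these assumed numbers the chip is bandwidth-bound beyond a handful of cores: total throughput stops growing at 25 GFLOP/s while per-core throughput keeps shrinking, which is the leveling-off behavior the simulations describe.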

High-performance computing has historically focused on solving differential equations describing physical systems, such as Earth’s atmosphere or a hydrogen bomb’s fission trigger. These systems lend themselves to being divided up into grids, so the physical system can, to a degree, be mapped to the physical location of processors or processor cores, thus minimizing delays in moving data.
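The reason grid-based physics codes scale comparatively well can be seen in a small surface-to-volume sketch. This is a hypothetical illustration, not code from any lab: an N x N grid is split into p x p blocks, one per core, and each core only exchanges a one-cell-deep "halo" with its neighbors, so communication stays small relative to local work.

```c
/* Illustrative surface-to-volume estimate for a 2-D grid split across cores.
 * N and the partitionings are assumed example values. */
#include <stdio.h>

int main(void) {
    const int N = 1024;                 /* assumed global grid dimension          */
    const int parts[] = {1, 2, 4, 8};   /* split the grid into p x p blocks       */

    for (int i = 0; i < 4; i++) {
        int p      = parts[i];
        int block  = N / p;                      /* cells per side of one block   */
        long local = (long)block * block;        /* cells this core updates       */
        long halo  = 4L * block;                 /* boundary cells it exchanges   */
        printf("%2d x %2d cores: %ld local updates, %ld halo cells (ratio %.4f)\n",
               p, p, local, halo, (double)halo / local);
    }
    return 0;
}
```

Because the halo grows only with the block's perimeter while the local work grows with its area, data movement per core stays modest, which is exactly the property the informatics workloads described next do not have.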

But an increasing number of important science and engineering problems—not to mention national security problems—are of a different sort. These fall under the general category of informatics and include calculating what happens to a transportation network during a natural disaster and searching for patterns that predict terrorist attacks or nuclear proliferation failures. These operations often require sifting through enormous databases of information.
