Innovatives Supercomputing in Deutschland
inSiDE • Vol. 7 No. 2 • Spring 2009

ScaFaCoS – when Long Range Goes Parallel

What makes the difference between an ideal gas and real matter is the interactions between particles. Due to the variety of types of interactions, substances can have completely different structural or dynamical properties. It is the interplay between attraction and repulsion between particles which makes nature so diverse. From the viewpoint of classical mechanics, matter is composed of particles, described as force centers, which form a potential energy landscape. The work needed to bring a particle to a given location on this map characterizes its energy. The force on the particle is then simply given by the negative gradient at the particle's position; it drives the particle away from its current position, thereby changing the energy landscape and inducing a collective response of the particle system. This simplified description is the basis for various classical simulation techniques built on so-called force fields, which are nothing other than analytical approximations to the various types of interactions between particles. These interactions are usually classified on the one hand into bonded and non-bonded and on the other hand into short- and long-range interactions. While bonded interactions, like stretching, bending or torsional deformations of bonds between sites, define what is classically called a molecule, the non-bonded interactions are responsible for the dynamical interplay between atoms and molecules, inducing collective behavior. In the extreme case of astrophysical objects, like planets, stars or galaxies, it is the long-range nature of gravitation which is responsible for the dynamics and large-scale structure formation.
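In formulas, this picture reads as follows (a generic statement about pairwise force fields, assumed here purely for illustration; bonded three- and four-body terms of a force field are omitted): the energy landscape is the sum of the pair interactions, and the force on a particle is the negative gradient of this landscape at the particle's position,

U(\mathbf{r}_1,\dots,\mathbf{r}_N) = \sum_{i<j} u\bigl(|\mathbf{r}_i - \mathbf{r}_j|\bigr), \qquad \mathbf{F}_i = -\nabla_{\mathbf{r}_i} U(\mathbf{r}_1,\dots,\mathbf{r}_N).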

Figure 1: Scaling behavior of the FMM method with the number of particles for different geometries. As can be seen, the method has complexity O(N) for 1D-, 2D- and 3D-periodic systems as well as for systems with open boundaries.

Relying on the framework of classical mechanics and knowing the interactions between objects, it is straightforward to set up Molecular Dynamics (MD) or Monte Carlo (MC) simulations to study these systems. With the advent of large-scale parallel computers, like the IBM BlueGene/P in Jülich, it is nowadays in principle possible to simulate systems of billions of particles. Although this is still far below Avogadro's number, it brings simulations closer to the length scales of laboratory experiments. However, if long-range interactions play an important role for the systems of interest, like gravitational systems or non-equilibrium plasmas driven by Coulomb interactions, they have to be taken properly into account. Both gravitational and electrostatic potentials are solutions of the linear Poisson equation, which means that the total solution of a many-body system is built up as a linear superposition of individual contributions. This implies, however, that all pair contributions between particles have to be taken into account. Since a system composed of N objects contains N(N-1)/2 particle pairs, this leads to a computational complexity of O(N²).
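To make the scaling argument concrete, a direct pairwise evaluation of the Coulomb energy and forces may be sketched as follows (in C, with the Coulomb prefactor set to one; the function name and the simple array layout are illustrative choices, not taken from any of the codes mentioned here). The double loop over all N(N-1)/2 pairs is exactly what produces the O(N²) cost.

#include <math.h>
#include <stddef.h>

/* Direct O(N^2) evaluation of the Coulomb energy and forces.
 * x, y, z, q: particle positions and charges (length n)
 * fx, fy, fz: force accumulators (length n), assumed zero-initialized
 * Returns the total electrostatic energy (prefactor 1/(4*pi*eps0) omitted). */
double coulomb_direct(size_t n, const double *x, const double *y,
                      const double *z, const double *q,
                      double *fx, double *fy, double *fz)
{
    double energy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = i + 1; j < n; ++j) {      /* all N(N-1)/2 pairs */
            double dx = x[i] - x[j];
            double dy = y[i] - y[j];
            double dz = z[i] - z[j];
            double r2 = dx*dx + dy*dy + dz*dz;
            double r  = sqrt(r2);
            double e  = q[i] * q[j] / r;          /* pair energy q_i q_j / r   */
            double f  = e / r2;                   /* force magnitude over r     */
            energy += e;
            fx[i] += f * dx;  fx[j] -= f * dx;    /* Newton's third law         */
            fy[i] += f * dy;  fy[j] -= f * dy;
            fz[i] += f * dz;  fz[j] -= f * dz;
        }
    }
    return energy;
}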

Figure 2: Parallel scaling on the Jülich IBM BlueGene/P for the long range part of the P3Mg method (Particle-Particle Particle-Multigrid), which was developed at JSC.

Various methods were developed to overcome this computational bottleneck, all of which have in common that they split the interactions into short-range and long-range contributions. The short-range part, for which each particle has on average a constant number of neighbors (and which is therefore O(N)), is calculated in a pair-wise fashion, whereas the long-range part is treated by different techniques, depending on the algorithm. In periodic systems, where so-called Ewald summation techniques like the Particle-Particle Particle-Mesh method (P3M) are applied, fast Fourier transforms (FFT) may be used to reduce the computational work to O(N log N). Other techniques, like the Fast Multipole Method (FMM, O(N)) or Barnes-Hut tree methods (O(N log N)), group far-away particles together into pseudo-particles or multipole expansions in order to reduce the complexity induced by the number of pair contributions. Finally, multigrid techniques, which have complexity O(N), use different grid hierarchies on which the Poisson equation is solved for the charges, which are transferred from the particle positions to equidistant grid nodes. In solving the Poisson equation, this approach has the additional advantage that a prescribed dielectric background can be taken into account.
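The common starting point of these methods, the splitting of the Coulomb kernel into a short-range and a long-range contribution, can be illustrated with the Ewald decomposition 1/r = erfc(αr)/r + erf(αr)/r: the first term decays rapidly and is summed directly within a cutoff, while the smooth second term is treated in Fourier (mesh) space. The C sketch below shows only the real-space pair term; the choice of the splitting parameter alpha and the cutoff rc, as well as the reciprocal-space and self-energy contributions of a complete Ewald or P3M implementation, are left out.

#include <math.h>

/* Real-space (short-range) part of the Ewald-split Coulomb interaction
 * for a single particle pair: u_sr(r) = q_i q_j erfc(alpha*r)/r for r < rc.
 * The smooth erf(alpha*r)/r remainder is handled on a mesh / in Fourier
 * space by methods like P3M and is not shown here. */
double ewald_short_range_pair(double qi, double qj, double r,
                              double alpha, double rc)
{
    if (r >= rc)            /* beyond the cutoff: contribution is neglected */
        return 0.0;
    return qi * qj * erfc(alpha * r) / r;
}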

All these methods have in common that they solve the Coulomb problem very efficiently for a prescribed error tolerance, but that their implementation in a computer program may be very demanding. This is even more true if the target platform is a massively parallel computer and the program has to run and scale on up to thousands of processors. Within the call "HPC-Software für skalierbare Parallelrechner" (HPC software for scalable parallel computers) of the German Federal Ministry of Education and Research (BMBF), the grant proposal ScaFaCoS (Scalable Fast Coulomb Solvers) of the Jülich Supercomputing Centre (JSC) was successful. The project aims to bring together the most promising methods for solving the Coulomb problem in classical atomistic systems in a parallel library which can be linked to existing programs. On the one hand, this will strongly facilitate code development, since programs can easily be extended with efficient but sophisticated algorithms. On the other hand, it allows coupling these methods to programs which already have well-developed parallel methods but lack certain functionalities, e.g. Coulomb solvers for 1D- or 2D-periodic systems, highly accurate solvers, or solvers for systems embedded in a dielectric continuum.
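The intended usage model of such a library can be pictured roughly as follows. The interface sketched here is purely hypothetical and serves only to illustrate the idea of linking a parallel Coulomb solver to an existing code; it is not the ScaFaCoS interface, which is still being designed within the project. An MD code would hand over its locally owned positions and charges together with the MPI communicator, select a solver method and an error tolerance, and receive potentials and fields in return.

#include <mpi.h>

/* Purely illustrative declarations of a linkable Coulomb-solver library;
 * all names are invented and do NOT correspond to the actual ScaFaCoS API. */
typedef struct coulomb_solver coulomb_solver;                 /* opaque handle */
enum coulomb_method { METHOD_P3M, METHOD_FMM, METHOD_MULTIGRID };

coulomb_solver *coulomb_create(MPI_Comm comm, enum coulomb_method m);
void coulomb_set_periodicity(coulomb_solver *s, int px, int py, int pz);
void coulomb_set_tolerance(coulomb_solver *s, double eps);
void coulomb_run(coulomb_solver *s, int n_local, const double *pos,
                 const double *q, double *potential, double *field);
void coulomb_destroy(coulomb_solver *s);

/* How an existing MD code might call such a library each time step: the
 * caller keeps its own domain decomposition and only hands over the locally
 * owned particles together with the MPI communicator. */
void compute_electrostatics(MPI_Comm comm, int n_local, const double *pos,
                            const double *q, double *potential, double *field)
{
    coulomb_solver *s = coulomb_create(comm, METHOD_FMM);
    coulomb_set_periodicity(s, 1, 1, 0);     /* e.g. a 2D-periodic slab system */
    coulomb_set_tolerance(s, 1.0e-5);        /* prescribed error tolerance     */
    coulomb_run(s, n_local, pos, q, potential, field);
    coulomb_destroy(s);
}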

Figure 3: Schematic of the hierarchical calculation of the long range interactions in the fast multipole method (FMM), which works on different levels of coarse-graining to sum up energy contributions with O(N) complexity (cf. Fig. 1).

Within the German network project ScaFaCoS, formed by university groups from Bonn, Chemnitz, Stuttgart and Wuppertal, by research centre groups from JSC at Forschungszentrum Jülich, the Fraunhofer Institute SCAI in St. Augustin and the Max Planck Institute for Polymer Research in Mainz, as well as by industrial partners from BASF, Cognis and IBM, physicists, chemists, mathematicians and computer scientists will bring together their expertise to realize a parallel open-source library which will run on target platforms like the Jülich IBM BlueGene/P and help the scientific research community to satisfy its needs.


• Godehard Sutmann

Forschungszentrum Jülich, Jülich Supercomputing Centre

