Authors: Sahni, Onkar | Carothers, Christopher D. | Shephard, Mark S. | Jansen, Kenneth E.
Article Type: Research Article
Abstract:
PHASTA falls under the category of high-performance scientific computation codes designed for solving partial differential equations (PDEs). It is a massively parallel, unstructured, implicit solver with particular emphasis on computational fluid dynamics (CFD) applications. More specifically, PHASTA is a parallel, hierarchic, adaptive, stabilized, transient analysis code that effectively employs advanced anisotropic adaptive algorithms and numerical models of flow physics. In this paper, we first describe the parallelization of PHASTA's core algorithms for an implicit solve, where one of our key assumptions is that, on a properly balanced supercomputer with appropriate attributes, PHASTA should continue to strongly scale at high core counts until the computational workload per core becomes insufficient and inter-processor communication starts to dominate. We then present and analyze PHASTA's parallel performance across a variety of current near-petascale systems, including the IBM BG/L, IBM BG/P, Cray XT3, and a custom Opteron-based supercluster; this selection of systems with inherently different attributes covers a majority of potential candidates for upcoming petascale systems. On one hand, we achieve near-perfect (linear) strong scaling out to 32,768 cores of IBM BG/L, showing that a system with desirable attributes will allow implicit solvers to scale strongly at high core counts (including on petascale systems). On the other hand, we find that the tipping point for strong scaling differs fundamentally among current supercomputer systems. To understand the loss of scaling observed on one particular system (the Opteron-based supercluster), we analyze its performance and demonstrate that such a loss can be attributed to an imbalance in a system attribute, specifically the compute-node operating system (OS). In particular, PHASTA scales well to high core counts (up to 32,768 cores) during an implicit solve on systems whose compute nodes use lightweight kernels (for example, IBM BG/L); however, we show that on a system where the compute-node OS is more heavyweight (e.g., one with background processes), the loss of strong scaling is observed at a much smaller core count (4,096 cores).
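As a point of reference for the scaling terminology used in the abstract, the following is a minimal sketch (not from the paper) of how strong-scaling speedup and parallel efficiency are conventionally computed from wall-clock timings at a fixed total problem size; the core counts and timing values here are hypothetical placeholders, not PHASTA measurements.

# Minimal sketch: strong-scaling speedup and parallel efficiency for a
# fixed total problem size. Timings are hypothetical, not PHASTA data.

# Map of core count -> wall-clock time (seconds) for the same problem.
timings = {
    512: 1000.0,   # hypothetical baseline run
    1024: 505.0,
    2048: 258.0,
    4096: 140.0,   # efficiency drops as communication (or OS jitter) grows
}

base_cores = min(timings)
base_time = timings[base_cores]

for cores, time in sorted(timings.items()):
    # Speedup relative to the smallest run; ideal speedup is cores/base_cores.
    speedup = base_time / time
    ideal = cores / base_cores
    efficiency = speedup / ideal
    print(f"{cores:6d} cores: speedup {speedup:6.2f} "
          f"(ideal {ideal:5.1f}), efficiency {efficiency:5.1%}")

In this convention, "near perfect (linear) strong scaling" means efficiency stays close to 100% as cores increase, and the "tipping point" is the core count at which efficiency begins to fall off.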
Keywords: Strong scaling, massively parallel processing, unstructured and implicit methods, OS jitter
DOI: 10.3233/SPR-2009-0287
Citation: Scientific Programming, vol. 17, no. 3, pp. 261-274, 2009