MPI/Open-MP hybridization of higher order WENO scheme for the incompressible Navier-Stokes equations

Authors
Selvam, Mukundhan
Hoffmann, Klaus A.
Issue Date
2015-01
Type
Conference paper
Keywords
Navier-Stokes equations , hybridization of Message Passing Interface (MPI) , Open Multi-Processing (Open-MP) , Weighted Essentially Non-Oscillatory (WENO) scheme , Research Subject Categories::TECHNOLOGY::Electrical engineering, electronics and photonics::Electrical engineering
Citation
Mukundhan Selvam and Klaus A. Hoffmann. "MPI/Open-MP Hybridization of Higher Order WENO Scheme for the Incompressible Navier-Stokes Equations", AIAA Infotech @ Aerospace, AIAA SciTech, (AIAA 2015-1951). http://dx.doi.org/10.2514/6.2015-1951
Abstract

Over the past two decades, high performance computing (HPC) has transitioned toward clusters of multi-core nodes, making it possible to obtain an efficiently scalable CFD code through the hybridization of the Message Passing Interface (MPI) and Open Multi-Processing (Open-MP). This paper presents the parallelization of a higher order incompressible direct numerical simulation (DNS) solver for the Navier-Stokes equations that utilizes a Weighted Essentially Non-Oscillatory (WENO) scheme. Initially, a time-dependent two-dimensional diffusion equation was parallelized with MPI in order to explore the concepts. The procedure was subsequently extended with several advanced MPI routines to parallelize the higher order finite-difference schemes. The parallelized code was then examined on a benchmark problem for incompressible flows, the lid-driven cavity, and its results were compared against and validated with the serial model. The overall objective is to achieve the best performance of the incompressible Navier-Stokes solver by carefully implementing, profiling, and optimizing both the MPI communication and the Open-MP parallelization. The results demonstrate that incorporating Open-MP parallelization provides a slight performance advantage over a pure MPI implementation. The number of MPI processes is limited by the number of communicators and the grid size of the simulation; however, fine-grain parallelization is possible by incorporating Open-MP threads in certain segments of the code.
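
The hybrid strategy described above, coarse-grain MPI domain decomposition with halo exchange between neighboring ranks and fine-grain Open-MP threading of the stencil loops, can be illustrated on the paper's introductory test case, a time-dependent two-dimensional diffusion equation. The sketch below is a minimal C illustration of that pattern, not the authors' solver; the grid dimensions, diffusivity, time step, and one-dimensional row decomposition are assumptions chosen for brevity.

    /* Minimal sketch of the hybrid MPI/Open-MP pattern: explicit 2-D
     * diffusion with a second-order stencil, 1-D row decomposition
     * across MPI ranks, ghost-row halo exchange, and Open-MP threading
     * of the interior update.  All parameters are illustrative. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NX 256      /* global rows (assumed divisible by ranks) */
    #define NY 256      /* columns                                  */
    #define STEPS 1000
    #define ALPHA 0.1   /* diffusivity (assumed value)              */
    #define DT 0.2      /* satisfies ALPHA*DT <= 0.25 for dx=dy=1   */

    static inline double *at(double *u, int i, int j) { return &u[i * NY + j]; }

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* FUNNELED: only the master thread makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = NX / size;                      /* rows owned here */
        int up   = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int down = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* two ghost rows: index 0 and index local+1 */
        double *u    = calloc((size_t)(local + 2) * NY, sizeof *u);
        double *unew = calloc((size_t)(local + 2) * NY, sizeof *unew);

        /* hot spot near the middle of the global domain */
        if (rank == size / 2) *at(u, local / 2 + 1, NY / 2) = 100.0;

        for (int n = 0; n < STEPS; ++n) {
            /* coarse-grain level: halo exchange with neighbor ranks */
            MPI_Sendrecv(at(u, 1, 0),         NY, MPI_DOUBLE, up,   0,
                         at(u, local + 1, 0), NY, MPI_DOUBLE, down, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(at(u, local, 0),     NY, MPI_DOUBLE, down, 1,
                         at(u, 0, 0),         NY, MPI_DOUBLE, up,   1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* fine-grain level: Open-MP threads the stencil update */
            #pragma omp parallel for
            for (int i = 1; i <= local; ++i)
                for (int j = 1; j < NY - 1; ++j)
                    *at(unew, i, j) = *at(u, i, j) + ALPHA * DT *
                        (*at(u, i - 1, j) + *at(u, i + 1, j) +
                         *at(u, i, j - 1) + *at(u, i, j + 1) -
                         4.0 * *at(u, i, j));

            double *tmp = u; u = unew; unew = tmp;  /* swap time levels */
        }

        if (rank == 0)
            printf("u(1,%d) after %d steps: %g\n", NY / 2, STEPS, *at(u, 1, NY / 2));

        free(u); free(unew);
        MPI_Finalize();
        return 0;
    }

The MPI_Sendrecv pair handles the ghost-row exchange between neighboring subdomains, with MPI_PROC_NULL neutralizing the exchange at the physical (Dirichlet) boundaries, while the #pragma omp parallel for distributes the interior update across threads within a node; this is the coarse-grain/fine-grain split the abstract refers to.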

Description
Copyright by authors. Posted on SOAR with author permission.
Publisher
American Institute of Aeronautics and Astronautics
Series
AIAA Infotech @ Aerospace
DOI
10.2514/6.2015-1951