Research Papers: Techniques and Procedures

Computational Fluid Dynamics Computations Using a Preconditioned Krylov Solver on Graphical Processing Units

Author and Article Information
Amit Amritkar

Department of Mechanical Engineering;
Department of Mathematics,
Virginia Tech,
226 SEB,
635 Prices Fork Road,
Blacksburg, VA 24061
e-mail: amritkar@vt.edu

Danesh Tafti

Mem. ASME
Department of Mechanical Engineering,
Virginia Tech,
213E SEB,
635 Prices Fork Road,
Blacksburg, VA 24061
e-mail: dtafti@exchange.vt.edu

Corresponding author.

Contributed by the Fluids Engineering Division of ASME for publication in the JOURNAL OF FLUIDS ENGINEERING. Manuscript received August 25, 2014; final manuscript received July 22, 2015; published online August 21, 2015. Assoc. Editor: Zhongquan Charlie Zheng.

J. Fluids Eng 138(1), 011402 (Aug 21, 2015) (6 pages) Paper No: FE-14-1469; doi: 10.1115/1.4031159 History: Received August 25, 2014; Revised July 22, 2015

Graphical processing unit (GPU) computation has seen extensive growth in recent years due to advances in both hardware and the software stack. This has led to an increase in the use of GPUs as accelerators across a broad spectrum of applications. This work deals with the use of general-purpose GPUs for performing computational fluid dynamics (CFD) computations. The paper discusses strategies and findings on porting a large multifunctional CFD code to the GPU architecture. Within this framework, the most compute-intensive segment of the software, the BiCGStab linear solver using additive Schwarz block preconditioners with point Jacobi iterative smoothing, is optimized for the GPU platform using various techniques in CUDA Fortran. Representative turbulent channel and pipe flows are investigated for validation and benchmarking purposes. Both single and double precision calculations are highlighted. For a modest single-block grid of 64 × 64 × 64, the turbulent channel flow computations showed a speedup of about eightfold in double precision and more than 13-fold in single precision on the NVIDIA Tesla GPU over a serial run on an Intel central processing unit (CPU). For the pipe flow consisting of 1.78 × 10⁶ grid cells distributed over 36 mesh blocks, the gains were more modest at 4.5 and 6.5 for double and single precision, respectively.
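
The point Jacobi smoother at the heart of the block preconditioner maps naturally onto the GPU because each cell update depends only on values from the previous sweep. Below is a minimal sketch of one such sweep written as a CUDA Fortran kernel, assuming a seven-point finite-volume stencil and hypothetical array names (diag for the central coefficient; ce, cw, cn, cs, ct, cb for the neighbor coefficients); it illustrates the technique only and is not the GenIDLEST implementation.

    module jacobi_smoother
      use cudafor
      implicit none
    contains
      ! One point Jacobi sweep over the interior of a single mesh block.
      ! Each thread updates one cell using only old-iterate neighbor values,
      ! so all cells can be updated concurrently on the GPU.
      attributes(global) subroutine jacobi_sweep(nx, ny, nz, diag, ce, cw, cn, cs, ct, cb, rhs, xold, xnew)
        integer, value :: nx, ny, nz
        real(8) :: diag(nx,ny,nz), ce(nx,ny,nz), cw(nx,ny,nz)
        real(8) :: cn(nx,ny,nz), cs(nx,ny,nz), ct(nx,ny,nz), cb(nx,ny,nz)
        real(8) :: rhs(nx,ny,nz), xold(nx,ny,nz), xnew(nx,ny,nz)
        integer :: i, j, k

        ! Map the 3D thread index to a cell index (Fortran indices start at 1).
        i = (blockIdx%x - 1)*blockDim%x + threadIdx%x
        j = (blockIdx%y - 1)*blockDim%y + threadIdx%y
        k = (blockIdx%z - 1)*blockDim%z + threadIdx%z

        ! Interior cells only; boundary values stay fixed within a smoothing sweep.
        if (i > 1 .and. i < nx .and. j > 1 .and. j < ny .and. k > 1 .and. k < nz) then
          xnew(i,j,k) = (rhs(i,j,k)                                          &
                        - ce(i,j,k)*xold(i+1,j,k) - cw(i,j,k)*xold(i-1,j,k)  &
                        - cn(i,j,k)*xold(i,j+1,k) - cs(i,j,k)*xold(i,j-1,k)  &
                        - ct(i,j,k)*xold(i,j,k+1) - cb(i,j,k)*xold(i,j,k-1)) / diag(i,j,k)
        end if
      end subroutine jacobi_sweep
    end module jacobi_smoother

On the host, such a kernel would be launched once per smoothing iteration with a three-dimensional launch configuration, e.g. call jacobi_sweep<<<grid, tBlock>>>(...) with grid and tBlock of type(dim3). In an additive Schwarz block preconditioner, sweeps of this kind are applied independently within each mesh block, so no inter-block communication is required during smoothing.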

Copyright © 2016 by ASME


Figures

Fig. 1

Data distribution for multilevel parallelism used in GenIDLEST

Fig. 2

RMS velocities along and perpendicular to the flow direction plotted against the nondimensional channel half-height, starting from the center of the channel

Fig. 3

Radial variation (y+ measured from the pipe wall) of the time-averaged velocity along the flow direction

Fig. 4

RMS flow velocity variation along the radial direction (y+) starting from the pipe wall

