Research Papers: Techniques and Procedures

2012 Freeman Scholar Lecture: Computational Fluid Dynamics on Graphics Processing Units

Author and Article Information

Fellow ASME
Department of Mechanical Science and Engineering,
University of Illinois at Urbana-Champaign,
1206 W. Green Street,
Urbana, IL 61801
e-mail: spvanka@illinois.edu

Contributed by the Fluids Engineering Division of ASME for publication in the JOURNAL OF FLUIDS ENGINEERING. Manuscript received September 6, 2012; final manuscript received February 21, 2013; published online April 23, 2013. Assoc. Editor: David E. Stock.

J. Fluids Eng 135(6), 061401 (Apr 23, 2013) (23 pages) Paper No: FE-12-1431; doi: 10.1115/1.4023858

This paper discusses the various issues of using graphics processing units (GPUs) for computing fluid flows. GPUs, used primarily for processing graphics functions in a computer, are massively parallel multicore processors that can also perform scientific computations in a data-parallel mode. In the past ten years, GPUs have become quite powerful and have challenged central processing units (CPUs) in their price and performance characteristics. However, to benefit fully from GPU performance, the numerical algorithms must be made data parallel and must converge rapidly. In addition, the hardware features of GPUs require that memory access be managed carefully so as not to suffer from high latency. Fully explicit algorithms for the Euler and Navier–Stokes equations and the lattice Boltzmann method for mesoscopic flows have been widely implemented on GPUs, with significant speed-up over a scalar algorithm. However, more complex algorithms with implicit formulations and unstructured grids require innovative thinking in data access and management. This article reviews the literature on linear solvers and computational fluid dynamics (CFD) algorithms on GPUs, including the author's own research on simulations of fluid flows using GPUs.
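The abstract's central point, that fully explicit schemes map naturally onto GPUs because every grid point can be updated independently, can be illustrated with a minimal sketch. The example below is a hypothetical pure-Python Jacobi sweep for the 2D Laplace equation, not code from the paper: each interior point reads only the previous iterate and writes a fresh array, which is exactly the data-parallel structure that lets a GPU assign one thread per grid point (the grid-to-mesh correspondence of Fig. 2).

```python
def jacobi_sweep(u):
    """One explicit Jacobi update of u (list of lists); boundary values stay fixed.

    Every interior point depends only on the OLD iterate, so all updates are
    independent -- on a GPU each (i, j) would become one thread's work item.
    """
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]          # fresh output array: no read/write conflict
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1])
    return new

# Usage: Laplace equation on a 6x6 grid, top boundary held at 1, others at 0.
n = 6
u = [[0.0] * n for _ in range(n)]
u[0] = [1.0] * n
for _ in range(500):                     # explicit iteration; converges slowly,
    u = jacobi_sweep(u)                  # which is why the paper stresses rapid
                                         # convergence as well as parallelism
```

On a GPU the two inner loops disappear: a kernel launch covers all interior points at once, and performance is then governed by how coalesced the neighbor reads are, which is the memory-latency concern the abstract raises.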

Copyright © 2013 by ASME



Fig. 1. Evolution of computing power of CPU and GPU [6]

Fig. 2. Correspondence between GPU grid and computational mesh

Fig. 3. Instantaneous contours of streamwise velocity in turbulent flow through ducts of different shapes. Reynolds numbers ∼4000.

Fig. 4. (a) Instantaneous nondimensional temperature (T − T)/(T − Ts) at midspan for a film cooling jet. (b) Instantaneous nondimensional temperature (T − T)/(T − Ts) at midspan for a microramp interacting with a film cooling jet.

Fig. 5. Instantaneous nondimensional axial velocity contours with secondary velocity vectors for a square duct with a magnetic field

Fig. 6. Mean nondimensional axial velocity contours with secondary velocity vectors for a square duct with a magnetic field

Fig. 7. Instantaneous contours of total velocity (m/s) with different magnetic field positions

Fig. 8. Evolution of the isosurface of ϕ at the interface at different times (t = 12, 18, 30, and 50) for parameters Re = 100, viscosity ratio = 10, At = 0.2, Fr = 5, κ = 0.005, and angle = 45 deg

Fig. 9. Contours of the index function at t = 20 for a midspan plane for Re = 100, At = 0.2, viscosity ratio of 10, Fr = 5, and κ = 0.005

Fig. 10. Initial position of a deformable drop in a square duct

Fig. 11. Transient deformation of the droplet surface for different capillary numbers



