Accelerating a finite-difference time-domain code for the acoustic wave propagation.

A. Guyau, Paul Cupillard and Anton Kutsenko (2015)
in: 35th Gocad Meeting - 2015 RING Meeting, ASGA

Abstract

There is a wide range of methods to numerically solve the seismic wave equation, varying in their approximation of the equation and/or their discretization scheme. Each method has its own benefits and drawbacks. For instance, ray tracers return results quickly but cannot model diffraction phenomena. Other methods, such as Spectral Element or Discontinuous Galerkin methods, can handle the full physics of wave propagation with high accuracy, but they require considerable computation time. In some cases, this computation time can be prohibitive, particularly for inverse problems in which a large number of simulations is needed. In such cases, the numerical method must be time-efficient while ensuring reasonable accuracy. Finite Difference (FD) methods are often good candidates here. To solve the wave equation, FD methods use simple operations on a regular structured mesh, making them easy to handle. The main drawback of these methods appears when dealing with complex geometries or rapidly varying models: the resolution then requires an extremely refined mesh and time discretization, which causes the computation time to rise dramatically. When dealing with large 3D models, such a discretization becomes a computational problem, and it is necessary to parallelize the resolution, either on CPU or GPU. Several recent works propose algorithms for parallelizing FD methods on GPU, most of them using the CUDA language. In this work, we parallelize a finite-difference time-domain (FDTD) code on CPU using OpenMP. After a brief presentation of the FDTD method for solving the acoustic wave equation in 3D, we give some details on our implementation. The performance of the resulting program is then evaluated on various cases, up to 19 200 000 grid points. Absorbing boundary conditions are also discussed.

BibTeX Reference

@INPROCEEDINGS{GuyauGM2015,
    author = { Guyau, A. and Cupillard, Paul and Kutsenko, Anton },
     title = { Accelerating a finite-difference time-domain code for the acoustic wave propagation. },
 booktitle = { 35th Gocad Meeting - 2015 RING Meeting },
      year = { 2015 },
 publisher = { ASGA },
  abstract = { There is a wide range of methods to numerically solve the seismic wave equation, varying in their approximation of the equation and/or their discretization scheme. Each method has its own benefits and drawbacks. For instance, ray tracers return results quickly but cannot model diffraction phenomena. Other methods, such as Spectral Element or Discontinuous Galerkin methods, can handle the full physics of wave propagation with high accuracy, but they require considerable computation time. In some cases, this computation time can be prohibitive, particularly for inverse problems in which a large number of simulations is needed. In such cases, the numerical method must be time-efficient while ensuring reasonable accuracy. Finite Difference (FD) methods are often good candidates here. To solve the wave equation, FD methods use simple operations on a regular structured mesh, making them easy to handle. The main drawback of these methods appears when dealing with complex geometries or rapidly varying models: the resolution then requires an extremely refined mesh and time discretization, which causes the computation time to rise dramatically. When dealing with large 3D models, such a discretization becomes a computational problem, and it is necessary to parallelize the resolution, either on CPU or GPU. Several recent works propose algorithms for parallelizing FD methods on GPU, most of them using the CUDA language. In this work, we parallelize a finite-difference time-domain (FDTD) code on CPU using OpenMP. After a brief presentation of the FDTD method for solving the acoustic wave equation in 3D, we give some details on our implementation. The performance of the resulting program is then evaluated on various cases, up to 19 200 000 grid points. Absorbing boundary conditions are also discussed. }
}