Concurrent Number Cruncher: An Efficient Sparse Linear Solver on the GPU

in: High Performance Computation Conference (HPCC-07), http://www.tlc2.uh.edu/hpcc07/, pages 358--371, Springer

Abstract

A wide class of geometry processing and PDE resolution methods needs to solve a linear system, where the non-zero pattern of the matrix is dictated by the connectivity matrix of the mesh. The advent of GPUs with their ever-growing amount of parallel horsepower makes them a tempting resource for such numerical computations. This is helped by new APIs (CTM from ATI and CUDA from NVIDIA) which give direct access to the multithreaded computational resources and associated memory bandwidth of GPUs; CUDA even provides a BLAS implementation, but only for dense matrices (CuBLAS). However, existing GPU linear solvers are restricted to specific types of matrices, or use non-optimal compressed row storage strategies. By combining recent GPU programming techniques with supercomputing strategies (namely block compressed row storage and register blocking), we implement a sparse general-purpose linear solver which outperforms leading-edge CPU counterparts (MKL / ACML).
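
Below is a minimal sketch of the storage strategy the abstract names: a sparse matrix-vector product (the dominant kernel of an iterative solver) over 2x2 block compressed row storage (BCRS) with register blocking, written in CUDA. It is an illustration under assumptions, not the paper's actual code: the kernel name bcrs22_spmv, the array layout, and the tiny host-side test are all hypothetical. One thread handles one block row, so the two partial sums of that row stay in registers (the register blocking), and each fetched pair of x entries is reused by both rows of the 2x2 block.

    // Hypothetical BCRS(2x2) sparse matrix-vector product sketch; not the
    // paper's actual implementation.
    #include <cstdio>
    #include <cuda_runtime.h>

    // y = A*x for a matrix stored as dense 2x2 blocks.
    // blocks:  nnzb blocks, each 4 floats, row-major within the block
    // col_idx: block-column index of each block
    // row_ptr: start of each block row in blocks/col_idx (nrowb+1 entries)
    __global__ void bcrs22_spmv(int nrowb,
                                const float* __restrict__ blocks,
                                const int*   __restrict__ col_idx,
                                const int*   __restrict__ row_ptr,
                                const float* __restrict__ x,
                                float*       __restrict__ y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per block row
        if (i >= nrowb) return;

        float y0 = 0.0f, y1 = 0.0f;           // register blocking: accumulators
        for (int b = row_ptr[i]; b < row_ptr[i + 1]; ++b) {
            const float* a = blocks + 4 * b;  // the 2x2 block
            int j = 2 * col_idx[b];           // first of the two x entries it touches
            float x0 = x[j], x1 = x[j + 1];   // fetched once, reused twice
            y0 += a[0] * x0 + a[1] * x1;
            y1 += a[2] * x0 + a[3] * x1;
        }
        y[2 * i]     = y0;
        y[2 * i + 1] = y1;
    }

    int main()
    {
        // 4x4 identity stored as two diagonal 2x2 blocks; expect y == x.
        float h_blocks[] = { 1, 0, 0, 1,   1, 0, 0, 1 };
        int   h_col[]    = { 0, 1 };
        int   h_row[]    = { 0, 1, 2 };
        float h_x[]      = { 1, 2, 3, 4 }, h_y[4];

        float *d_blocks, *d_x, *d_y; int *d_col, *d_row;
        cudaMalloc(&d_blocks, sizeof h_blocks);
        cudaMalloc(&d_col,    sizeof h_col);
        cudaMalloc(&d_row,    sizeof h_row);
        cudaMalloc(&d_x,      sizeof h_x);
        cudaMalloc(&d_y,      sizeof h_y);
        cudaMemcpy(d_blocks, h_blocks, sizeof h_blocks, cudaMemcpyHostToDevice);
        cudaMemcpy(d_col,    h_col,    sizeof h_col,    cudaMemcpyHostToDevice);
        cudaMemcpy(d_row,    h_row,    sizeof h_row,    cudaMemcpyHostToDevice);
        cudaMemcpy(d_x,      h_x,      sizeof h_x,      cudaMemcpyHostToDevice);

        bcrs22_spmv<<<1, 32>>>(2, d_blocks, d_col, d_row, d_x, d_y);
        cudaMemcpy(h_y, d_y, sizeof h_y, cudaMemcpyDeviceToHost);
        printf("y = %g %g %g %g\n", h_y[0], h_y[1], h_y[2], h_y[3]);
        return 0;
    }

Compared with plain compressed row storage, blocking cuts the number of column indices fetched (one per 2x2 block instead of one per non-zero) and lets each loaded x value feed two multiply-adds, which is why BCRS tends to be friendlier to the GPU's memory bandwidth.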

Download / Links

    BibTeX Reference

    @INPROCEEDINGS{Buatois07,
        author = { Buatois, Luc and Caumon, Guillaume and Levy, Bruno },
        editor = { Perrott, R. and others },
         title = { Concurrent Number Cruncher: An Efficient Sparse Linear Solver on the GPU },
     booktitle = { High Performance Computation Conference (HPCC-07), http://www.tlc2.uh.edu/hpcc07/ },
        series = { Lecture Notes in Computer Science },
        volume = { 4782 },
          year = { 2007 },
         pages = { 358--371 },
     publisher = { Springer },
      abstract = { A wide class of geometry processing and PDE resolution methods needs to solve a linear system, where the non-zero pattern of the matrix is dictated by the connectivity matrix of the mesh. The advent of GPUs with their ever-growing amount of parallel horsepower makes them a tempting resource for such numerical computations. This is helped by new APIs (CTM from ATI and CUDA from NVIDIA) which give direct access to the multithreaded computational resources and associated memory bandwidth of GPUs; CUDA even provides a BLAS implementation, but only for dense matrices (CuBLAS). However, existing GPU linear solvers are restricted to specific types of matrices, or use non-optimal compressed row storage strategies. By combining recent GPU programming techniques with supercomputing strategies (namely block compressed row storage and register blocking), we implement a sparse general-purpose linear solver which outperforms leading-edge CPU counterparts (MKL / ACML). }
    }