Y. Yamamoto, T. Fukaya, T. Uneyama, and Y. Nakamura (Japan)
Singular values, bidiagonalization, floating-point accelerator, CSX600, performance evaluation
In computing the singular values of a square matrix, transformation of the input matrix to bidiagonal form accounts for most of the computation time. In this paper, we consider speeding up this process using a combination of a CPU and a floating-point accelerator. As an algorithm for bidiagonalization, we can use either the conventional Householder method or Bischof’s two-phase algorithm, which can use the level 3 BLAS efficiently. We can also choose to store the whole matrix either in the CPU memory or in the on-board memory of the accelerator, so there are four possible strategies. We investigate the advantages and disadvantages of each strategy and construct an analytical performance model for each of them. Using these models, we predict the performance of bidiagonalization on the CSX600 accelerator and show that the highest performance is achieved by using Bischof’s algorithm with the matrix stored in the on-board memory. This conclusion should hold for many other accelerators with similar performance characteristics.
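As a point of reference for the conventional Householder bidiagonalization discussed in the abstract (the Golub–Kahan reduction), the following is a minimal NumPy sketch. It is not the paper's accelerated implementation; the function names and the unblocked formulation are illustrative only, chosen for clarity rather than BLAS-3 efficiency.

```python
import numpy as np

def house(x):
    """Return a Householder vector v and scalar beta such that
    (I - beta * v v^T) x is a multiple of the first unit vector."""
    v = np.array(x, dtype=float)
    sigma = np.linalg.norm(v)
    if sigma == 0.0:
        return v, 0.0
    # Shift v[0] away from zero to avoid cancellation.
    v[0] += sigma if v[0] >= 0 else -sigma
    beta = 2.0 / np.dot(v, v)
    return v, beta

def bidiagonalize(A):
    """Reduce a square matrix to upper bidiagonal form by alternating
    left and right Householder reflectors (singular values preserved)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n):
        # Left reflector: zero out column k below the diagonal.
        v, beta = house(A[k:, k])
        A[k:, k:] -= beta * np.outer(v, v @ A[k:, k:])
        if k < n - 2:
            # Right reflector: zero out row k beyond the superdiagonal.
            v, beta = house(A[k, k + 1:])
            A[k:, k + 1:] -= beta * np.outer(A[k:, k + 1:] @ v, v)
    return A
```

Because only orthogonal transformations are applied from both sides, the resulting bidiagonal matrix has the same singular values as the input; in the unblocked form above, almost all the work is in level 2 BLAS-like updates, which is precisely the bottleneck that Bischof's two-phase algorithm restructures into level 3 BLAS operations.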