Channel: Active questions tagged python - Stack Overflow

Diagonalising matrices that are too large for GPU memory

I want to diagonalise matrices that are too large for the memory available on the GPU. I am interested in any approach that would give a speed-up over diagonalising the same matrix on a CPU. I need to compute all of the eigenvalues and eigenvectors.

For example, I have a 55296 x 55296 matrix and a GPU with 80 GB of memory. I am currently using CuPy in Python, and when I try to diagonalise the matrix I get the error:

cupy.cuda.memory.OutOfMemoryError: Out of memory allocating 73,387,745,792 bytes (allocated so far: 48,922,804,736 bytes)
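To put the numbers in perspective (my own back-of-envelope arithmetic, not taken from the traceback):

```python
# One dense float64 matrix of this size is about 24.5 GB; eigh needs the
# input, the eigenvector output, and solver workspace resident at once,
# so several matrix-sized buffers must fit on the card simultaneously.
n = 55296
gb = n * n * 8 / 1e9   # bytes per dense float64 matrix, in GB
print(f"{gb:.1f} GB per dense matrix")   # -> 24.5 GB per dense matrix
```

The failed 73 GB allocation is roughly three such matrix-sized buffers, which on top of the ~49 GB already allocated is well beyond the 80 GB card.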

A minimal example of the python code I am running is:

from cupy import linalg
import cupy as cp
import numpy as np

n = 55296
A = np.random.rand(n, n)
A = 0.5 * (A + A.T)                    # symmetrise so eigh applies
lambdas, Ut = linalg.eigh(cp.asarray(A))
lambdas = cp.asnumpy(lambdas)
Ut = cp.asnumpy(Ut)
print("Diagonalised successfully")

This example uses a dense matrix, but the matrix I actually have contains some sparsity (it is banded) and can be diagonalised using less memory on the CPU via scipy.linalg.eig_banded. If there were a way to exploit this structure on the GPU, that would also be of interest to me.
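For reference, the CPU path looks roughly like this (a toy-sized sketch; the size n, bandwidth u, and random band contents are placeholders for my actual matrix):

```python
import numpy as np
from scipy.linalg import eig_banded

# Symmetric banded matrix in LAPACK upper form: a_band has shape
# (u + 1, n), with a_band[u + i - j, j] == A[i, j] for the band entries.
n, u = 6, 2                       # matrix size and number of superdiagonals
rng = np.random.default_rng(0)
a_band = rng.random((u + 1, n))   # placeholder band contents

# All eigenvalues (ascending) and eigenvectors, from only O(u * n) storage.
w, v = eig_banded(a_band, lower=False)
print(w.shape, v.shape)           # -> (6,) (6, 6)
```

Because only the band is stored, the memory footprint is O(u * n) rather than O(n^2), which is why this fits on the CPU where the dense GPU routine does not.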


