I've followed this tutorial to use Numba CUDA JIT decorator: https://www.youtube.com/watch?v=-lcWV4wkHsk&t=510s.
Here is my Python code:
```python
import numpy as np
from timeit import default_timer as timer
from numba import cuda, jit

# This function will run on a CPU
def fill_array_with_cpu(a):
    for k in range(100000000):
        a[k] += 1

# This function will run on a CPU with @jit
@jit
def fill_array_with_cpu_jit(a):
    for k in range(100000000):
        a[k] += 1

# This function will run on a GPU
@jit(target_backend='cuda')
def fill_array_with_gpu(a):
    for k in range(100000000):
        a[k] += 1

# Main
a = np.ones(100000000, dtype=np.float64)

for i in range(3):
    start = timer()
    fill_array_with_cpu(a)
    print("On a CPU:", timer() - start)

for i in range(3):
    start = timer()
    fill_array_with_cpu_jit(a)
    print("On a CPU with @jit:", timer() - start)

for i in range(3):
    start = timer()
    fill_array_with_gpu(a)
    print("On a GPU:", timer() - start)
```

And here is the prompt output:
```
On a CPU: 24.228116830999852
On a CPU: 24.90354355699992
On a CPU: 24.277727688999903
On a CPU with @jit: 0.2590671719999591
On a CPU with @jit: 0.09131158500008496
On a CPU with @jit: 0.09054700799993043
On a GPU: 0.13547917200003212
On a GPU: 0.0922475330000907
On a GPU: 0.08995077999998102
```

Using the @jit decorator greatly increases the processing speed. However, it is unclear to me whether the @jit(target_backend='cuda') decorator actually makes the function run on the GPU: its processing times are almost identical to those of the plain @jit function, so I suspect target_backend='cuda' does not use the GPU at all. In fact, I ran the same code on a machine with no NVIDIA GPU and got the same results, with no warning or error.
How can I make it run on my GPU? I have a GeForce GT 730M.