Creating an array on the GPU with Numba in Python using CUDA


I want to evaluate a function at every point in a mesh. The problem is that if I create the mesh on the CPU side, the act of transferring it to the GPU takes longer than the actual computation. Can I generate the mesh on the GPU side instead?

The code below shows the mesh being created on the CPU side and most of the expression being evaluated on the GPU side (I couldn't figure out how to get atan2 to work on the GPU, so I left that part on the CPU side). I should apologize in advance and say that I'm still learning this stuff, so I'm sure there is plenty of room for improvement in the code below!

Thanks!

import math
from numba import vectorize, float64
import numpy as np
from time import time

@vectorize([float64(float64,float64,float64,float64)],target='cuda')
def a_cuda(lat1, lon1, lat2, lon2):
    return  (math.sin(0.008726645 * (lat2 - lat1))**2) + \
             math.cos(0.01745329*(lat1)) * math.cos(0.01745329*(lat2)) * (math.sin(0.008726645 * (lon2 - lon1))**2)

def LLA_distance_numba_cuda(lat1, lon1, lat2, lon2):
    a = a_cuda(np.ascontiguousarray(lat1), np.ascontiguousarray(lon1), 
               np.ascontiguousarray(lat2), np.ascontiguousarray(lon2))
    return earthdiam_nm * np.arctan2(a,1-a)

# generate a mesh of one million evaluation points
nx, ny = 1000,1000
xv, yv = np.meshgrid(np.linspace(29, 31, nx), np.linspace(99, 101, ny))
X, Y = np.float64(xv.reshape(1,nx*ny).flatten()), np.float64(yv.reshape(1,nx*ny).flatten())
X2,Y2 = np.float64(np.array([30]*nx*ny)),np.float64(np.array([101]*nx*ny))

start = time()
LLA_distance_numba_cuda(X,Y,X2,Y2)
print('{:d} total evaluations in {:.3f} seconds'.format(nx*ny,time()-start))
Tags: python, cuda, gpu, numba
1 Answer

2 votes

Let's establish a performance baseline first. Adding a definition for earthdiam_nm (1.0) and running your code under nvprof, we have:

$ nvprof python t38.py
1000000 total evaluations in 0.581 seconds
(...)
==1973== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   55.58%  11.418ms         4  2.8544ms  2.6974ms  3.3044ms  [CUDA memcpy HtoD]
                   28.59%  5.8727ms         1  5.8727ms  5.8727ms  5.8727ms  cudapy::__main__::__vectorized_a_cuda$242(Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>)
                   15.83%  3.2521ms         1  3.2521ms  3.2521ms  3.2521ms  [CUDA memcpy DtoH]
(...)

So in my particular setup, the "kernel" itself runs in ~5.8ms on my (small, slow) Quadro K2000 GPU, the data-copy time for the 4 copies from host to device totals 11.4ms, and copying the result back to the host takes another ~3.3ms. The thing to focus on is the 4 copies from host to device.
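As an aside (this is not part of the original answer), numba CUDA ufuncs can also accept device arrays. If the same input arrays were going to be reused across many calls, the host-to-device copies could be paid once, up front, with cuda.to_device. The following is only a hedged sketch built on the X, Y, X2, Y2 arrays and the a_cuda ufunc from the question:

from numba import cuda

# Hypothetical variant: stage the inputs on the device once, so repeated calls
# to a_cuda pay no per-call HtoD copy cost.
d_lat1 = cuda.to_device(np.ascontiguousarray(X))
d_lon1 = cuda.to_device(np.ascontiguousarray(Y))
d_lat2 = cuda.to_device(np.ascontiguousarray(X2))
d_lon2 = cuda.to_device(np.ascontiguousarray(Y2))
d_a = a_cuda(d_lat1, d_lon1, d_lat2, d_lon2)  # ufunc runs on the GPU; no implicit HtoD copies
a = d_a.copy_to_host()                        # one explicit DtoH copy for the result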

Let's go after the low-hanging fruit first. This line of code:

X2,Y2 = np.float64(np.array([30]*nx*ny)),np.float64(np.array([101]*nx*ny))

does nothing except pass the values 30 and 101 to every "worker". I'm using "worker" here to refer to the idea of a particular scalar computation within numba's process of "broadcasting" the vectorize function across a large data set. The numba vectorize/broadcast process does not require that every input be a data set of the same size; it only requires that the supplied data be "broadcastable". It is therefore possible to create a vectorize ufunc that works on arrays as well as scalars, which means each worker will use its array element plus the scalar to perform its computation.
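For illustration (this snippet is not part of the original question or answer), here is a minimal sketch of that broadcasting behavior using a plain CPU-target vectorize ufunc: one argument is an array, the other a scalar, and numba broadcasts the scalar across every element:

import numpy as np
from numba import vectorize, float64

@vectorize([float64(float64, float64)])
def scale(a, s):
    # each "worker" sees one element of the array plus the same scalar
    return a * s

print(scale(np.arange(4.0), 10.0))  # [ 0. 10. 20. 30.] -- the scalar 10.0 is broadcast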

So the low-hanging fruit here is simply to remove those two arrays and pass the values (30, 101) to the ufunc a_cuda as scalars. While we're after the "low-hanging fruit", let's also fold your arctan2 computation (replacing it with math.atan2) and the final scaling by earthdiam_nm into the vectorize code, so that we don't have to do those on the host in python/numpy:

$ cat t39.py
import math
from numba import vectorize, float64
import numpy as np
from time import time
earthdiam_nm = 1.0
@vectorize([float64(float64,float64,float64,float64,float64)],target='cuda')
def a_cuda(lat1, lon1, lat2, lon2, s):
    a = (math.sin(0.008726645 * (lat2 - lat1))**2) + \
             math.cos(0.01745329*(lat1)) * math.cos(0.01745329*(lat2)) * (math.sin(0.008726645 * (lon2 - lon1))**2)
    return math.atan2(a, 1-a)*s

def LLA_distance_numba_cuda(lat1, lon1, lat2, lon2):
    return a_cuda(np.ascontiguousarray(lat1), np.ascontiguousarray(lon1),
               np.ascontiguousarray(lat2), np.ascontiguousarray(lon2), earthdiam_nm)

# generate a mesh of one million evaluation points
nx, ny = 1000,1000
xv, yv = np.meshgrid(np.linspace(29, 31, nx), np.linspace(99, 101, ny))
X, Y = np.float64(xv.reshape(1,nx*ny).flatten()), np.float64(yv.reshape(1,nx*ny).flatten())
# X2,Y2 = np.float64(np.array([30]*nx*ny)),np.float64(np.array([101]*nx*ny))
start = time()
Z=LLA_distance_numba_cuda(X,Y,30.0,101.0)
print('{:d} total evaluations in {:.3f} seconds'.format(nx*ny,time()-start))
#print(Z)
$ nvprof python t39.py
==2387== NVPROF is profiling process 2387, command: python t39.py
1000000 total evaluations in 0.401 seconds
==2387== Profiling application: python t39.py
==2387== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   48.12%  8.4679ms         1  8.4679ms  8.4679ms  8.4679ms  cudapy::__main__::__vectorized_a_cuda$242(Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>, Array<double, int=1, A, mutable, aligned>)
                   33.97%  5.9774ms         5  1.1955ms     864ns  3.2535ms  [CUDA memcpy HtoD]
                   17.91%  3.1511ms         4  787.77us  1.1840us  3.1459ms  [CUDA memcpy DtoH]
(snip)

Now we see that the HtoD copy operations have been reduced from a total of 11.4ms to a total of about 5.6ms. The kernel has grown from ~5.8ms to ~8.5ms because we are doing more work in it, but the python-reported execution time for the function has dropped from ~0.58s to ~0.4s.

Can we do better?

We can, but in order to do so (I believe) we'll need to switch to a different numba CUDA method. The vectorize method is convenient for scalar elementwise operations, but it has no way of knowing where in the overall data set the operation is being performed. We need that information, and we can get it in CUDA code, but we'll need to switch to the @cuda.jit decorator to do so.
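As a minimal illustration of that difference (again, not part of the original answer), a @cuda.jit kernel can ask cuda.grid(2) for its own (x, y) position in the launch grid, which is exactly the "where am I in the data set" information that a vectorize worker does not have:

from numba import cuda
import numpy as np

@cuda.jit
def write_indices(out):
    x, y = cuda.grid(2)                       # this thread's position in the launch grid
    if x < out.shape[1] and y < out.shape[0]:
        out[y, x] = y * out.shape[1] + x      # record which element this thread covers

out = np.zeros((4, 4), dtype=np.float64)
write_indices[(1, 1), (16, 16)](out)          # one 16x16 block covers the whole 4x4 array
print(out)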

The following code converts the previous vectorize a_cuda function into a @cuda.jit device function (with essentially no other changes), and then creates a CUDA kernel that performs the mesh generation from the supplied scalar parameters and computes the result:

$ cat t40.py
import math
from numba import vectorize, float64, cuda
import numpy as np
from time import time

earthdiam_nm = 1.0

@cuda.jit(device=True)
def a_cuda(lat1, lon1, lat2, lon2, s):
    a = (math.sin(0.008726645 * (lat2 - lat1))**2) + \
             math.cos(0.01745329*(lat1)) * math.cos(0.01745329*(lat2)) * (math.sin(0.008726645 * (lon2 - lon1))**2)
    return math.atan2(a, 1-a)*s

@cuda.jit
def LLA_distance_numba_cuda(lat2, lon2, xb, xe, yb, ye, s, nx, ny, out):
    x,y = cuda.grid(2)
    if x < nx and y < ny:
        lat1 = (((xe-xb) * x)/(nx-1)) + xb # mesh generation
        lon1 = (((ye-yb) * y)/(ny-1)) + yb # mesh generation
        out[y][x] = a_cuda(lat1, lon1, lat2, lon2, s)

nx, ny = 1000,1000
Z = cuda.device_array((nx,ny), dtype=np.float64)
threads = (32,32)
blocks = (32,32)
start = time()
LLA_distance_numba_cuda[blocks,threads](30.0,101.0, 29.0, 31.0, 99.0, 101.0, earthdiam_nm, nx, ny, Z)
Zh = Z.copy_to_host()
print('{:d} total evaluations in {:.3f} seconds'.format(nx*ny,time()-start))
#print(Zh)
$ nvprof python t40.py
==2855== NVPROF is profiling process 2855, command: python t40.py
1000000 total evaluations in 0.294 seconds
==2855== Profiling application: python t40.py
==2855== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   75.60%  10.364ms         1  10.364ms  10.364ms  10.364ms  cudapy::__main__::LLA_distance_numba_cuda$241(double, double, double, double, double, double, double, __int64, __int64, Array<double, int=2, A, mutable, aligned>)
                   24.40%  3.3446ms         1  3.3446ms  3.3446ms  3.3446ms  [CUDA memcpy DtoH]
(...)

Now we see that:

  1. The kernel run time is even longer, around 10ms (because we are doing the mesh generation in addition to the computation)
  2. There is no explicit host-to-device data copying
  3. The overall function run time has dropped from ~0.4s to ~0.3s
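As a sanity check (not something the original answer does), one could recompute the same expression with NumPy on a host-side mesh and compare it against the kernel output; this sketch assumes it runs right after the t40.py code above, reusing nx, ny, earthdiam_nm and Zh:

# Recreate the mesh on the host and evaluate the same formula with NumPy.
xv, yv = np.meshgrid(np.linspace(29, 31, nx), np.linspace(99, 101, ny))
a = (np.sin(0.008726645 * (30.0 - xv))**2 +
     np.cos(0.01745329 * xv) * np.cos(0.01745329 * 30.0) *
     np.sin(0.008726645 * (101.0 - yv))**2)
ref = earthdiam_nm * np.arctan2(a, 1 - a)
print(np.allclose(Zh, ref))  # expected True: the GPU-generated mesh matches the host mesh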