CUBLAS matrix multiplication with row-major data, without transposing


I am currently trying to implement matrix multiplication on my GPU with CUBLAS.

It works fine for square matrices and for inputs of certain sizes, but for other matrices the last row is not returned (and contains 0s, since that is the way I implemented it).

I assume this is a problem with the allocation or with the syntax of cublasSgemm, but I could not find where it is.

NB: in case you are not familiar with CUBLAS: it is column-major, which is why it looks like the operation is performed the other way around.
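
For example (a small host-only illustration of that note, my addition rather than part of the problem): the flat buffer {1, 2, 3, 4, 5, 6} holding a 2 x 3 row-major matrix is read by a column-major library as the 3 x 2 matrix that is its transpose:

#include <cstdio>

int main()
{
    const float buf[6] = {1, 2, 3, 4, 5, 6};

    std::printf("Row-major 2x3 (host view):\n");
    for(int r = 0; r < 2; ++r)
    {
        for(int c = 0; c < 3; ++c){std::printf("%g ", buf[r*3 + c]);} //Element (r,c) lives at r*ld + c, ld = 3 columns
        std::printf("\n");
    }

    std::printf("Column-major 3x2 (CUBLAS view):\n");
    for(int r = 0; r < 3; ++r)
    {
        for(int c = 0; c < 2; ++c){std::printf("%g ", buf[c*3 + r]);} //Element (r,c) lives at c*ld + r, ld = 3 rows
        std::printf("\n");
    }

    return 0;
}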

Any help would be appreciated.


Source code of the multiplication

Note that gpuErrchk and cublasErrchk are of course irrelevant here.

#include <cuda.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#include <cstdint> //For uint64_t
#include <vector>


extern void gpuAssert(cudaError_t code, const char *file, int line);
void cublasAssert(cublasStatus_t code, const char *file, int line);

// See below
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
#define cublasErrchk(ans) { cublasAssert((ans), __FILE__, __LINE__); }


std::vector<float> CUDA_mult_MAT(const std::vector<float> &data_1 , const uint64_t data_1_rows, const uint64_t data_1_columns,
                                 const std::vector<float> &data_2 , const uint64_t data_2_rows, const uint64_t data_2_columns)
{
    cublasHandle_t handle;

    cublasErrchk(cublasCreate(&handle));

    std::vector<float> result(data_1_rows * data_2_columns); //Vector holding the result of the multiplication

    /*----------------------------------------------------------------------------------------------*/

    float* GPU_data_1 = nullptr;
    gpuErrchk(cudaMalloc((void**)&GPU_data_1 , data_1.size()*sizeof(float))); //Allocate memory on the GPU
    gpuErrchk(cudaMemcpy(GPU_data_1, data_1.data(), data_1.size()*sizeof(float), cudaMemcpyHostToDevice)); //Copy data from data_1 to GPU_data_1

    float* GPU_data_2 = nullptr;
    gpuErrchk(cudaMalloc((void**)&GPU_data_2 ,data_2.size()*sizeof(float))); //Allocate memory on the GPU
    gpuErrchk(cudaMemcpy(GPU_data_2, data_2.data(), data_2.size()*sizeof(float), cudaMemcpyHostToDevice));//Copy data from data_2 to GPU_data_2

    float* GPU_result = nullptr;
    gpuErrchk(cudaMalloc((void**)&GPU_result , result.size()*sizeof(float))); //Allocate memory on the GPU

    /*----------------------------------------------------------------------------------------------*/


    const float alpha = 1.f; 
    const float beta  = 0.f;

    cublasErrchk(
               cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                           data_2_columns , data_2_rows ,data_1_columns,
                           &alpha , GPU_data_2 , data_2_columns,
                           GPU_data_1 , data_1_columns,
                           &beta , GPU_result , data_1_rows)
             ); //Perform multiplication 



    gpuErrchk(cudaMemcpy(result.data() , GPU_result , result.size() * sizeof(float) , cudaMemcpyDeviceToHost)); //Copy back to the vector 'result'

    gpuErrchk(cudaFree(GPU_data_1)); //Free GPU memory
    gpuErrchk(cudaFree(GPU_data_2)); //Free GPU memory
    gpuErrchk(cudaFree(GPU_result)); //Free GPU memory

    cublasErrchk(cublasDestroy_v2(handle)); 

    return result;
}


Test inputs

#include <cstdio> //For fprintf in gpuAssert
#include <iostream>

#include <vector>

#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const auto r1 = CUDA_mult_MAT({1 , 2 , 3 , 4 , 5 , 6} , 2 , 3 ,
                                  {7 , 8 , 9 , 10 , 11 , 12} , 3 , 2);
    /*
    Product:
              7  8
    1 2 3  x  9  10
    4 5 6     11 12
    
    */

    for(const auto& value: r1){std::cout << value << " " ;}
    std::cout << std::endl;

    const auto r2 = CUDA_mult_MAT({7 , 8 , 9 , 10 , 11 , 12} , 3 , 2 ,
                                  {1 , 2 , 3 , 4 , 5 , 6} , 2 , 3);
    /*
    Product:
    7  8   
    9  10  x  1  2  3
    11 12     4  5  6
    */


    for(const auto& value: r2){std::cout << value << " " ;}
    std::cout << std::endl;

    return 0;
}

// Shamelessly stolen from https://stackoverflow.com/a/14038590
void gpuAssert(cudaError_t code, const char *file, int line)
{

    if (code != cudaSuccess)
    {
        fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        exit(code);
    }
}

void cublasAssert(cublasStatus_t code, const char *file, int line)
{
    if(code != CUBLAS_STATUS_SUCCESS)
    {
        std::cerr <<  "CUBLAS error.\nError code: ";

        switch(code)
        {
            case CUBLAS_STATUS_SUCCESS:{std::cerr << "CUBLAS_STATUS_SUCCESS."; break;}

            case CUBLAS_STATUS_NOT_INITIALIZED:{std::cerr << "CUBLAS_STATUS_NOT_INITIALIZED."; break;}

            case CUBLAS_STATUS_ALLOC_FAILED:{std::cerr << "CUBLAS_STATUS_ALLOC_FAILED."; break;}

            case CUBLAS_STATUS_INVALID_VALUE:{std::cerr << "CUBLAS_STATUS_INVALID_VALUE."; break;}

            case CUBLAS_STATUS_ARCH_MISMATCH:{std::cerr << "CUBLAS_STATUS_ARCH_MISMATCH."; break;}

            case CUBLAS_STATUS_MAPPING_ERROR:{std::cerr << "CUBLAS_STATUS_MAPPING_ERROR."; break;}

            case CUBLAS_STATUS_EXECUTION_FAILED:{std::cerr << "CUBLAS_STATUS_EXECUTION_FAILED."; break;}

            case CUBLAS_STATUS_INTERNAL_ERROR:{std::cerr << "CUBLAS_STATUS_INTERNAL_ERROR."; break;}

            case CUBLAS_STATUS_NOT_SUPPORTED:{std::cerr << "CUBLAS_STATUS_NOT_SUPPORTED."; break;}

            case CUBLAS_STATUS_LICENSE_ERROR:{std::cerr << "CUBLAS_STATUS_LICENSE_ERROR."; break;}

            default:{std::cerr << "<unknown>."; break;}
        }

        std::cerr << "\nFile: "<< file << "\n";
        std::cerr << "Line: "<< line <<std::endl;

        exit(EXIT_FAILURE);
    }
}

Output

The program prints:

58 64 139 154 
39 54 69 49 68 87 0 0 0
                  ^~~~~~~

Expected:

58 64 139 154 
39 54 69 49 68 87 59 82 105
                  ^~~~~~~
1 Answer

We can observe the problem with your CUBLAS usage in a couple of different ways.

First, studying the CUBLAS Sgemm documentation, we see that the 3 parameters m, n, and k appear, in that order, immediately after the transpose specifiers:

cublasStatus_t cublasSgemm(cublasHandle_t handle,
                       cublasOperation_t transa, cublasOperation_t transb,
                       int m, int n, int k, 
                           ^      ^      ^

We also observe that the matrix dimensions are given by:

A, B and C are matrices stored in column-major format with dimensions op(A) m × k, op(B) k × n and C m × n,

So the first input matrix is of dimensions m x k, the second input matrix is of dimensions k x n, and the output matrix is of dimensions m x n.

Let's focus on the output matrix for a moment. Given that its dimensions are specified using the m and n parameters, it cannot possibly be correct (in the non-square case) to pass only data_2 dimensions for those parameters:

           cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                       data_2_columns , data_2_rows ,data_1_columns,
                       ^^^^^^^^^^^^^^   ^^^^^^^^^^^
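
To make that concrete with the second test case from the question (my own arithmetic): there A is 3 x 2 and B is 2 x 3, so the row-major product C is 3 x 3, i.e. 9 elements. The call above passes m = data_2_columns = 3 and n = data_2_rows = 2, so CUBLAS writes only a 3 x 2 result, 6 elements, and the last 3 elements of the 9-element result vector keep the value 0 they were initialized with. That is exactly the output shown in the question.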

Second, from the standpoint of error checking, you can quickly get an indication that something is wrong with the CUBLAS call by running your code with cuda-memcheck. The first error reported is as follows:

$ cuda-memcheck ./t23
========= CUDA-MEMCHECK
========= Invalid __global__ read of size 4
=========     at 0x000006f0 in void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>)
=========     by thread (64,0,0) in block (0,0,0)
=========     Address 0x7f9c30a2061c is out of bounds
=========     Device Frame:void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>) (void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>) : 0x6f0)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2b8) [0x1e5cc8]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x1063c8b]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x10a9965]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x6bfacc]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x5fc7af]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x436c35]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 (cublasLtMatmul + 0x60f) [0x43484f]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 [0x9ef6db]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 [0x50e4f0]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 (cublasSgemm_v2 + 0x1ee) [0x50f29e]
=========     Host Frame:./t23 [0x7986]
=========     Host Frame:./t23 [0x7b4c]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./t23 [0x744a]
=========

Of course, one possible solution is to transpose the input matrices so that they are in column-major order; CUBLAS provides options with Sgemm to do that (see above). However, it appears to me that what you want to do here is perform C-style row-major multiplication without transposing the input arrays. There is a write-up here describing how to do that.
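
The identity behind that heuristic, sketched briefly: the product we want is

C = A x B          (row-major)

and transposing both sides gives

C^T = B^T x A^T

Since a row-major array handed to CUBLAS is read as its transpose (see the NB in the question), passing GPU_data_2 as the first operand and GPU_data_1 as the second, with m = columns of B, n = rows of A and k = columns of A, makes CUBLAS compute C^T in its column-major terms. The bytes it writes out are exactly C in row-major order, with the leading dimension of the result equal to the number of columns of B.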

When I apply that heuristic to your cublasSgemm() call, I get the following:

           cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                       data_2_columns , data_1_rows ,data_1_columns,
                       &alpha , GPU_data_2 , data_2_columns,
                       GPU_data_1 , data_1_columns,
                       &beta , GPU_result , data_2_columns)

When I compile and run the code with those changes in place, I get the following:

$ cuda-memcheck ./t23
========= CUDA-MEMCHECK
58 64 139 154
39 54 69 49 68 87 59 82 105
========= ERROR SUMMARY: 0 errors
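
As an extra sanity check (my own addition; the helper name cpu_mult_MAT is hypothetical), the GPU result can be compared element by element against a naive CPU row-major multiply:

#include <cstdint>
#include <vector>

//Naive row-major reference: result(i,j) = sum over k of a(i,k) * b(k,j)
//Assumes a_columns == b_rows, just like CUDA_mult_MAT
std::vector<float> cpu_mult_MAT(const std::vector<float> &a , const uint64_t a_rows, const uint64_t a_columns,
                                const std::vector<float> &b , const uint64_t /*b_rows*/, const uint64_t b_columns)
{
    std::vector<float> result(a_rows * b_columns, 0.f);

    for(uint64_t i = 0; i < a_rows; ++i)
        for(uint64_t j = 0; j < b_columns; ++j)
            for(uint64_t k = 0; k < a_columns; ++k)
                result[i*b_columns + j] += a[i*a_columns + k] * b[k*b_columns + j];

    return result;
}

Running the two test inputs through this reference reproduces 58 64 139 154 and 39 54 69 49 68 87 59 82 105, matching the fixed GPU output above.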