Verify that the GPU is actually being used in Keras/TensorFlow, not just that it is detected

Question · Votes: 1 · Answers: 2

I just built a deep learning rig (AMD 12-core Threadripper; GeForce RTX 2080 Ti; 64 GB RAM). I originally wanted to install CUDA and cuDNN on Ubuntu 19.0, but the installation got too messy, and after some reading I decided to switch to Windows 10...

After installing tensorflow-gpu both inside and outside conda, I ran into further problems that I think come down to cuDNN / CUDA / TensorFlow compatibility, so I uninstalled and reinstalled various versions of CUDA and tf. My output from nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:04_Central_Daylight_Time_2018
Cuda compilation tools, release 10.0, V10.0.130

The output of nvidia-smi is also attached (it shows CUDA == 11.0?!):

[screenshot: nvidia-smi output]

I also ran the following:

    import tensorflow as tf
    import keras
    from keras import backend
    from tensorflow.python.client import device_lib
    from tensorflow.python.platform import build_info as tf_build_info

    # List every device TensorFlow can see (CPU and, hopefully, the GPU).
    print("My device: {}".format(device_lib.list_local_devices()))
    if tf.test.gpu_device_name():
        print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
    else:
        print("Please install GPU version of TF")
    print("keras version: {0} | Backend used: {1}".format(keras.__version__, backend.backend()))
    print("tensorflow version: {0} | Backend used: {1}".format(tf.__version__, backend.backend()))
    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
    print("CUDA: {0} | CUDnn: {1}".format(tf_build_info.cuda_version_number, tf_build_info.cudnn_version_number))

with the output:

    My device: [name: "/device:CPU:0"
    device_type: "CPU"
    memory_limit: 268435456
    locality {
    }
    incarnation: 12853915229880452239
    , name: "/device:GPU:0"
    device_type: "GPU"
    memory_limit: 9104897474
    locality {
      bus_id: 1
      links {
      }
    }
    incarnation: 7328135816345461398
    physical_device_desc: "device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:42:00.0, compute capability: 7.5"
    ]
    Default GPU Device: /device:GPU:0
    keras version: 2.3.1 | Backend used: tensorflow
    tensorflow version: 2.1.0 | Backend used: tensorflow
    Num GPUs Available:  1
    CUDA: 10.1 | CUDnn: 7

So (I hope) my installation at least partially works, but I still don't know whether the GPU is actually being used for training, or whether it is merely recognized while the CPU still does all the work. How can I tell the difference? I also use PyCharm. There was a suggestion about installing Visual Studio, with an additional step attached here:

5. Include cudnn.lib in your Visual Studio project. Open the Visual Studio project and right-click on the project name. Click Linker > Input > Additional Dependencies. Add cudnn.lib and click OK.

I did not perform this step. I also read that I need to set the following in my environment variables, but my directory is empty:

SET PATH=C:\tools\cuda\bin;%PATH%

Can anyone verify this?
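As a side check (my own sketch, not from the original post): you can ask Windows to load the cuDNN DLL directly to see whether PATH is set up correctly. The file name cudnn64_7.dll is an assumption based on cuDNN 7.x; adjust it to your version.

    import ctypes
    import os

    # Hypothetical quick check: can Windows locate the cuDNN 7.x DLL via PATH?
    # (cudnn64_7.dll is the cuDNN 7 name; other versions ship a different DLL.)
    try:
        ctypes.WinDLL("cudnn64_7.dll")
        print("cudnn64_7.dll found and loadable")
    except OSError:
        print("cudnn64_7.dll not found - check that the cuDNN bin directory is on PATH")

    # Show the first few PATH entries to see what is actually configured.
    print(os.environ.get("PATH", "").split(";")[:5])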

Also, my Keras model still needs a hyperparameter search:

    grid = GridSearchCV(estimator=model,
                        param_grid=param_grids,
                        n_jobs=-1,  # -1 for all cores
                        cv=KFold(),
                        verbose=10)
    grid_result = grid.fit(X_standardized, Y)

This works fine on my MBP (assuming, of course, that n_jobs=-1 really uses all CPU cores). On my DL rig I get the following warnings and errors:

    ERROR: The process with PID 5156 (child process of PID 1184) could not be terminated. Reason: Access is denied.
    ERROR: The process with PID 1184 (child process of PID 6920) could not be terminated. Reason: There is no running instance of the task.
    2020-03-28 20:29:48.598918: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.599348: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.599655: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.603023: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.603649: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.604236: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.604773: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.605524: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.608151: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
    2020-03-28 20:29:48.608369: W tensorflow/stream_executor/stream.cc:2041] attempting to perform BLAS operation using StreamExecutor without BLAS support
    2020-03-28 20:29:48.608559: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Internal: Blas GEMM launch failed : a.shape=(10, 8), b.shape=(8, 4), m=10, n=4, k=8
        [[{{node dense_1/MatMul}}]]
    C:\Users\me\PycharmProjects\untitled\venv\lib\site-packages\sklearn\model_selection\_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details:
    tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(10, 8), b.shape=(8, 4), m=10, n=4, k=8
        [[node dense_1/MatMul (defined at C:\Users\me\PycharmProjects\untitled\venv\lib\site-packages\keras\backend\tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_982]

Can I assume that when using GridSearchCV only the CPU is used, not the GPU? Still, when running and timing another method in my code, I compared the MBP's time (about 40 s on a 2.8 GHz Intel Core i7) with the desktop's time (about 43 s on the 12-core Threadripper). Even comparing CPU to CPU, I would have expected a faster time than on the MBP. Is my assumption therefore wrong?
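A hedged note (not part of the original question): CUBLAS_STATUS_ALLOC_FAILED usually means cuBLAS could not allocate GPU memory. With n_jobs=-1, GridSearchCV spawns several worker processes, and each TensorFlow instance tries to claim nearly the whole card by default. Enabling memory growth before any model is built, and/or dropping to n_jobs=1 when a single GPU does the training, is a common workaround:

    import tensorflow as tf

    # Allocate GPU memory on demand instead of pre-allocating almost all of it;
    # this must run before any model or session touches the GPU.
    gpus = tf.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)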


python-3.x tensorflow keras windows-10 cudnn
2 Answers
1 vote
You can find the details in the documentation here.
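In case it helps, here is a minimal sketch (my own illustration, not text from the linked documentation) that makes TensorFlow 2.x log which device every op is actually placed on, which answers the "is the GPU really being used" question directly:

    import tensorflow as tf

    # Log the device each op is placed on as it executes.
    tf.debugging.set_log_device_placement(True)

    # A small matmul; the log and c.device should name GPU:0 if the GPU is used.
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c = tf.matmul(a, b)
    print(c.device)  # e.g. /job:localhost/replica:0/task:0/device:GPU:0

If that prints a CPU device even though a GPU is listed, the model is not actually training on the card.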

0 votes
Another way I eventually found (for Windows users) to check GPU activity is to open Task Manager, switch one of the monitors on the Performance tab to CUDA, and then simply run the script and watch the graph spike.
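As a command-line alternative to Task Manager, you can also poll nvidia-smi while the training script runs; the query flags below are standard nvidia-smi options, and this sketch simply wraps the call from Python:

    import subprocess

    # Print GPU utilization and memory use once per second; stop with Ctrl+C.
    # Assumes nvidia-smi is on PATH (it ships with the NVIDIA driver).
    subprocess.run([
        "nvidia-smi",
        "--query-gpu=utilization.gpu,memory.used,memory.total",
        "--format=csv",
        "-l", "1",
    ])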