TensorFlow Model Server GPU build - cuDNN path

Problem description

I am trying to build the TensorFlow model server from source on a RedHat machine. I do not have sudo privileges, and cuDNN is not installed in a default directory. Is there a way to specify the cuDNN path at build time?

Cuda Configuration Error: Failed to run find_cuda_config.py: Could not find any cudnn.h matching version '10.0' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
        '/lib'
        '/lib64'
        '/opt/beegfs/lib'
        '/usr'
        '/usr/lib64/atlas'
        '/usr/lib64/dyninst'
        '/usr/lib64/mysql'
        '/usr/lib64/tcl8.5'
        '/usr/local/cuda'
        '/usr/local/cuda-10.0/targets/x86_64-linux/lib'
        '/usr/local/cuda-8.0/targets/x86_64-linux/lib'
        '/usr/local/cuda-9.0/targets/x86_64-linux/lib'
        '/usr/local/cuda-9.1/targets/x86_64-linux/lib'

1 Answer

Even though this solution was already provided in the comments section, it is posted here as an answer for the benefit of the community.

We need to specify the cuDNN path before running Bazel with the following command:

export CUDNN_INSTALL_PATH=<...>
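
For reference, a minimal sketch of what this can look like when cuDNN has been unpacked into a user-writable directory (no sudo needed). The path ~/cudnn-10.0 and the version numbers below are assumptions; adjust them to match your installation and the CUDA toolkit listed in the error message above:

        # Assumed layout: ~/cudnn-10.0/include/cudnn.h and ~/cudnn-10.0/lib64/libcudnn.so
        export CUDNN_INSTALL_PATH="$HOME/cudnn-10.0"

        # Optionally pin the CUDA toolkit location and versions so that
        # find_cuda_config.py picks up the intended installation.
        export TF_NEED_CUDA=1
        export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.0
        export TF_CUDA_VERSION=10.0
        export TF_CUDNN_VERSION=7

        # Then build the model server with CUDA support.
        bazel build -c opt --config=cuda \
            tensorflow_serving/model_servers:tensorflow_model_server

These environment variables are read by TensorFlow's GPU configuration logic when Bazel runs, so they must be exported in the same shell session before the build is started.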
