How can I test whether stanfordnlp is running on the GPU?


How can I test whether stanfordnlp is running on the GPU?

Here is some example code:

import stanfordnlp
stanfordnlp.download('en')   # This downloads the English models for the neural pipeline
nlp = stanfordnlp.Pipeline() # This sets up a default neural pipeline in English
doc = nlp("Barack Obama was born in Hawaii.  He was elected president in 2008.")
doc.sentences[0].print_dependencies()

I made a small change to point it at my models directory:

config = {'models_dir': '/scratch/lklein/models'}
nlp = stanfordnlp.Pipeline(**config)
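
In case it matters, the Pipeline constructor also appears to accept a use_gpu flag in the same config (treat the exact keyword as an assumption and check the version you have installed):

# Sketch, assuming Pipeline accepts a use_gpu keyword
# (verify against your installed stanfordnlp version).
config = {'models_dir': '/scratch/lklein/models', 'use_gpu': False}
nlp_cpu = stanfordnlp.Pipeline(**config)  # should print "Use device: cpu"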

My machine has CUDA available, and when I run the code I get the following output:

Use device: gpu
---
Loading: tokenize
With settings: 
{'model_path': '/scratch/lklein/models/en_ewt_models/en_ewt_tokenizer.pt', 'lang': 'en', 'shorthand': 'en_ewt', 'mode': 'predict'}

...

So the setup is correct and it does detect the GPU. But how can I confirm that the query is actually computed on the GPU? Ideally I'm looking for something like spaCy's require_gpu.

Tags: python, nlp, stanford-nlp
1 Answer

You can try running nvidia-smi from bash while your code is executing and watch whether the GPU gets used.

That said, the default behavior is to use the GPU when one is available.
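
Since stanfordnlp is built on PyTorch, you can also check from inside the process. A minimal sketch, assuming only that torch is importable (stanfordnlp already depends on it):

import torch
import stanfordnlp

# PyTorch's own CUDA check: True means stanfordnlp can use the GPU.
print('CUDA available:', torch.cuda.is_available())

nlp = stanfordnlp.Pipeline()  # prints "Use device: gpu" when it picks up CUDA
doc = nlp("Barack Obama was born in Hawaii.")

# If the model tensors were actually moved onto the GPU, allocated
# device memory is non-zero once the pipeline is built.
if torch.cuda.is_available():
    print('GPU memory allocated:', torch.cuda.memory_allocated(), 'bytes')

Watching nvidia-smi at the same time gives the same signal from outside the process: your python process should show up in its process list with a non-trivial memory footprint.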
