How do I load a pretrained model into a Transformers pipeline and specify multiple GPUs?

Problem description

I have a local server with multiple GPUs, and I am trying to load a local model while specifying which GPUs it should use, since we want to split the GPUs among team members.

For smaller models I can successfully pin the model to a single GPU with device_map='cuda:3'. How can I do the same across multiple GPUs, e.g. CUDA devices 4, 5 and 6, for larger models?

(I have tried device_map='auto', 'balanced' and 'sequential', which shard the model across devices automatically, but that is not what we want...)

import torch
from transformers import LlamaForCausalLM

model_dir = '/models/Llama-2-13b-chat-hf'

# Tried: 'auto', 'balanced', 'sequential', 'balanced_low_0'
# and 'cuda:3' works for a single GPU.

model = LlamaForCausalLM.from_pretrained(
    model_dir,
    device_map='cuda:[3,4,5]',  # how to make things work here?
    torch_dtype=torch.float32,
)
1 Answer

I think the easiest way to achieve what you want is to export CUDA_VISIBLE_DEVICES before torch initializes CUDA:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"    # expose only physical GPU 1
# or
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose physical GPUs 0 and 1

import torch
from transformers import LlamaForCausalLM

model_dir = '/models/Llama-2-13b-chat-hf'
model = LlamaForCausalLM.from_pretrained(model_dir,
                                         device_map='auto')
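
Note that CUDA_VISIBLE_DEVICES must be set before torch initializes CUDA, and that it renumbers the visible GPUs starting at cuda:0. A minimal sketch to verify this (the GPU IDs 4-6 are placeholders for whichever cards your team reserves):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6"  # expose only physical GPUs 4-6

import torch

# Only the three listed GPUs are visible, renumbered as cuda:0..cuda:2
print(torch.cuda.device_count())      # -> 3
print(torch.cuda.get_device_name(0))  # physical GPU 4, now addressed as cuda:0

Combined with device_map='auto', the model is then sharded across only those GPUs, which is exactly the "spread a large model over GPUs 4, 5 and 6" scenario from the question.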

If you want to use a device_map instead, you have to map each layer yourself:

# distilroberta because it is smaller

from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base")
# parameter names
print([x[0] for x in model.named_parameters()])

Output:

['roberta.embeddings.word_embeddings.weight',
 'roberta.embeddings.position_embeddings.weight',
 'roberta.embeddings.token_type_embeddings.weight',
 'roberta.embeddings.LayerNorm.weight',
 'roberta.embeddings.LayerNorm.bias',
 'roberta.encoder.layer.0.attention.self.query.weight',
 'roberta.encoder.layer.0.attention.self.query.bias',
...
 'roberta.encoder.layer.5.output.LayerNorm.weight',
 'roberta.encoder.layer.5.output.LayerNorm.bias',
 'lm_head.bias',
 'lm_head.dense.weight',
 'lm_head.dense.bias',
 'lm_head.layer_norm.weight',
 'lm_head.layer_norm.bias']

You do not need to map every single weight; mapping whole layers (modules) is enough:

# device map example for distilroberta:
from transformers import AutoTokenizer, AutoModelForMaskedLM

device_map = {'roberta.embeddings': 'cpu', 'roberta.encoder': 0, 'lm_head': 'cpu'}

model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base", device_map = device_map)
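
Coming back to the original Llama setup: instead of hand-writing a device_map entry for every module, you can keep device_map='auto' and restrict the automatic sharding to specific GPUs with the max_memory argument, since only devices listed there are considered for placement. A sketch under the assumption that GPUs 3-5 are the ones reserved for you (the memory budgets are placeholders to adjust to your hardware):

import torch
from transformers import LlamaForCausalLM

model_dir = '/models/Llama-2-13b-chat-hf'

# 'auto' only shards across the devices listed in max_memory,
# i.e. physical GPUs 3, 4 and 5 here.
model = LlamaForCausalLM.from_pretrained(
    model_dir,
    device_map='auto',
    max_memory={3: "24GiB", 4: "24GiB", 5: "24GiB"},  # placeholder budgets
    torch_dtype=torch.float32,
)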