Error training transformer with QLoRA and Peft


I am trying to fine-tune the google Gemma model using Peft and QLoRA. Yesterday I successfully fine-tuned it for one epoch as a test. However, when I opened the notebook today and ran the cell that loads the model, I got a huge error:

Code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b"

# 4-bit NF4 quantization config for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},  # place the whole model on GPU 0
)

# model.gradient_checkpointing_enable()

# load_dataset here is the notebook's own helper, not datasets.load_dataset
train_dataset, val_dataset, data_collator = load_dataset(train_data_path, val_data_path, tokenizer)

Error (shortened):

RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=

.....

DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....

RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):
CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=

.....

I shortened the error to make it more readable. Has anyone run into something like this? I can't seem to resolve it. Any help is much appreciated.

python deep-learning nlp huggingface-transformers
1 Answer

It looks like the notebook's runtime memory was reset when you reopened it, so the CUDA initialization state from your previous session is gone, and that is what triggers the error above. The only way to fix it is to re-run the entire notebook from the top, not just that particular cell.
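If re-running the whole notebook still fails, the `device=1, num_gpus=` part of the assertion suggests PyTorch is being asked for a GPU index the fresh session can no longer see (for example, because `CUDA_VISIBLE_DEVICES` was changed after torch had already initialized). A minimal diagnostic sketch, assuming GPU 0 is the card you intend to use, is to pin the visible device before importing torch and then check what the new kernel actually sees:

import os

# Pin this process to one GPU *before* torch initializes CUDA, so a lazy
# CUDA call can never reference a device index the session doesn't have.
# (Assumption: GPU 0 is the device you intend to use.)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# Sanity-check the CUDA state the fresh kernel actually sees.
print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())

Setting the environment variable before the import matters because PyTorch initializes CUDA lazily and resolves the device list on first use, which is exactly the "CUDA call failed lazily at initialization" path in your traceback.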
