Problem setting cuda/GPU as the device for the LLM generator: it always falls back to the CPU

Question

Background: I am trying to fine-tune Microsoft's Phi-2 model, a 2.7-billion-parameter LLM published on HuggingFace, by instruction-tuning it on a little over 2,000 quotes. Through the instruction tuning I want to create a recognizable change in the model's output, and later I want to extract word embeddings from both the base model and my fine-tuned model so I can compare them. I am working in a Jupyter notebook in VS Code inside a virtual environment, and I have access to a server with enough capacity to handle the LLM.

So far I have successfully tokenized the data to feed into the model, loaded and tested the base model, and moved everything to the cuda/GPU defined as "device" (a rough sketch of that setup is shown below).

My problem: when I try to feed the tokenized training and evaluation datasets into the model for training, I get the following error message, which as far as I can tell says that the generator is on the CPU rather than on cuda.
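For reference, here is a minimal sketch of the kind of setup described above. It is only an illustration; the exact from_pretrained arguments (torch_dtype, trust_remote_code) are assumptions, not necessarily the ones I used.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick the GPU if one is visible, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,   # assumption: half precision so the model fits on the GPU
    trust_remote_code=True,      # assumption: older transformers versions needed this for Phi-2
)
model = model.to(device)         # move the base model onto the cuda/GPU "device"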

Error message:

RuntimeError                              Traceback (most recent call last)
Cell In[51], line 37
     11 trainer = transformers.Trainer(
     12     model=model,
     13     train_dataset=tokenized_train_dataset,
   (...)
     33     data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
     34 )
     36 model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
---> 37 trainer.train()

File ~/.venv/lib/python3.10/site-packages/transformers/trainer.py:1780, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1778     hf_hub_utils.enable_progress_bars()
   1779 else:
-> 1780     return inner_training_loop(
   1781         args=args,
   1782         resume_from_checkpoint=resume_from_checkpoint,
   1783         trial=trial,
   1784         ignore_keys_for_eval=ignore_keys_for_eval,
   1785     )

File ~/.venv/lib/python3.10/site-packages/transformers/trainer.py:2085, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   2082     rng_to_sync = True
   2084 step = -1
-> 2085 for step, inputs in enumerate(epoch_iterator):
   2086     total_batched_samples += 1
   2088     if self.args.include_num_input_tokens_seen:

File ~/.venv/lib/python3.10/site-packages/accelerate/data_loader.py:452, in DataLoaderShard.__iter__(self)
    450 # We iterate one batch ahead to check when we are at the end
    451 try:
--> 452     current_batch = next(dataloader_iter)
    453 except StopIteration:
    454     yield

File ~/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py:631, in _BaseDataLoaderIter.__next__(self)
    628 if self._sampler_iter is None:
    629     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    630     self._reset()  # type: ignore[call-arg]
--> 631 data = self._next_data()
    632 self._num_yielded += 1
    633 if self._dataset_kind == _DatasetKind.Iterable and \
    634         self._IterableDataset_len_called is not None and \
    635         self._num_yielded > self._IterableDataset_len_called:

File ~/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
    673 def _next_data(self):
--> 674     index = self._next_index()  # may raise StopIteration
    675     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    676     if self._pin_memory:

File ~/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py:621, in _BaseDataLoaderIter._next_index(self)
    620 def _next_index(self):
--> 621     return next(self._sampler_iter)

File ~/.venv/lib/python3.10/site-packages/torch/utils/data/sampler.py:287, in BatchSampler.__iter__(self)
    285 batch = [0] * self.batch_size
    286 idx_in_batch = 0
--> 287 for idx in self.sampler:
    288     batch[idx_in_batch] = idx
    289     idx_in_batch += 1

File ~/.venv/lib/python3.10/site-packages/accelerate/data_loader.py:92, in SeedableRandomSampler.__iter__(self)
     90 # print("Setting seed at epoch", self.epoch, seed)
     91 self.generator.manual_seed(seed)
---> 92 yield from super().__iter__()
     93 self.set_epoch(self.epoch + 1)

File ~/.venv/lib/python3.10/site-packages/torch/utils/data/sampler.py:167, in RandomSampler.__iter__(self)
    165 else:
    166     for _ in range(self.num_samples // n):
--> 167         yield from torch.randperm(n, generator=generator).tolist()
    168     yield from torch.randperm(n, generator=generator).tolist()[:self.num_samples % n]

File ~/.venv/lib/python3.10/site-packages/torch/utils/_device.py:77, in DeviceContext.__torch_function__(self, func, types, args, kwargs)
     75 if func in _device_constructors() and kwargs.get('device') is None:
     76     kwargs['device'] = self.device
---> 77 return func(*args, **kwargs)

RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'

In addition, I get the following warnings:

Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).

./.venv/lib/python3.10/site-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). Please pass an `accelerate.DataLoaderConfiguration` instead:
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)
  warnings.warn(
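As far as I can tell, that FutureWarning comes from how the Trainer builds its Accelerator internally rather than from my own code. For context only, the warning seems to be suggesting something along these lines (a sketch based purely on the warning text, not code I actually run):

# Sketch only: what the warning suggests for code that constructs an Accelerator
# directly (requires a recent accelerate version; the Trainer does this for me).
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

dataloader_config = DataLoaderConfiguration(
    dispatch_batches=None,
    split_batches=False,
    even_batches=True,
    use_seedable_sampler=True,
)
accelerator = Accelerator(dataloader_config=dataloader_config)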

The relevant part of the code in my Jupyter notebook is the training:


#import wandb
import transformers
from datetime import datetime
import torch

torch.set_default_device("cuda")

project = "ideollm"
base_model_name = "phi2"
run_name = base_model_name + "-" + project
output_dir = "./" + run_name

trainer = transformers.Trainer(
    model=model,
    train_dataset=tokenized_train_dataset,
    eval_dataset=tokenized_val_dataset,
    args=transformers.TrainingArguments(
        output_dir=output_dir,
        warmup_steps=1,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=1,
        max_steps=500,
        learning_rate=2.5e-5,        # Want a small lr for finetuning
        optim="paged_adamw_8bit",
        logging_steps=25,            # When to start reporting loss
        logging_dir="./logs",        # Directory for storing logs
        save_strategy="steps",       # Save the model checkpoint every logging step
        save_steps=25,               # Save checkpoints every 25 steps
        evaluation_strategy="steps", # Evaluate the model every logging step
        eval_steps=25,               # Evaluate every 25 steps
        do_eval=True,                # Perform evaluation at the end of training
        #report_to="wandb",
        #run_name=f"{run_name}-{datetime.now().strftime('%Y-%m-%d-%H-%M')}"
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
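My best guess about what the traceback is pointing at (an assumption on my part, not something I have confirmed): with torch.set_default_device("cuda") active, torch.randperm() inside the DataLoader's RandomSampler is given a default device of cuda, while the sampler's torch.Generator is still created on the CPU. The mismatch can be reproduced in isolation with a sketch like this:

import torch

torch.set_default_device("cuda")   # same global default as in my training cell

cpu_generator = torch.Generator()  # created on the CPU
# Raises: RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
torch.randperm(10, generator=cpu_generator)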

Before the tokenization:


import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

for i, tokens in tokenized_train_dataset.items():
    # generate from each tokenized example (inputs moved to the GPU) and decode the output
    output_ids = model.generate(tokenized_train_dataset[i].cuda(), do_sample=True, max_new_tokens=270, early_stopping=True)
    output = tokenizer.batch_decode(output_ids)
    print(output)

What I have tried:

  • Commenting out WANDB because it was throwing an error; that error went away, but training still does not run
  • Changing the tokenizer
  • Deliberately setting the model and the datasets to cuda (see the sketch after this list)
  • Checking that cuda/the GPU is available (also in the sketch after this list)
  • Switching from Google Colab to VS Code and the server
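For the third and fourth points, this is roughly the kind of thing I mean. It is an illustrative sketch, not the exact cells from my notebook, and it assumes each tokenized example is a dict of PyTorch tensors:

import torch

# Check that a GPU is actually visible to PyTorch
print(torch.cuda.is_available(), torch.cuda.device_count())

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model explicitly instead of relying on a global default device
model = model.to(device)
print(next(model.parameters()).device)   # should print cuda:0

# Move one tokenized example to the same device before using it
# (assumes the example is a dict of tensors such as input_ids / attention_mask)
sample = {k: v.to(device) for k, v in tokenized_train_dataset[0].items() if torch.is_tensor(v)}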

No matter how I change the device and the data handling, I keep running into the same error over and over. The model does not train at all.

gpu cpu huggingface-transformers large-language-model
1 Answer

Could you share how you are loading the tokenized text, and whether you are fine-tuning your model?
