Slurm stops executing my Python code after a few lines but doesn't stop the job either, while the same code runs fine on my local Linux machine


My code:

```python
from datasets import load_dataset

MAX_LEN = 512
dataset = load_dataset("glue", "mrpc")

from transformers import AutoTokenizer
from transformers import RobertaTokenizerFast

#tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer = RobertaTokenizerFast.from_pretrained("/data/home//raw_roberta/Roberta_Tokenizer",
                                                 max_length=MAX_LEN, padding='max_length', return_tensors='pt')

print("mapping dataset")
mapped_dataset = dataset.map(
    lambda x: tokenizer(x["sentence1"], x["sentence2"],
                        max_length=MAX_LEN, truncation=True,
                        padding='max_length', return_tensors='pt'),
    batched=True,
)
print("finished mapped_dataset")

from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

from transformers import AutoModelForSequenceClassification
from transformers import RobertaForMaskedLM
#model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
#model = AutoModelForSequenceClassification.from_pretrained("data/home//raw_roberta/Roberta_Model/checkpoint-90000", num_labels=2)
#model = AutoModelForSequenceClassification.from_pretrained("data/home//raw_roberta/Roberta_Model/checkpoint-90000")
base_model = RobertaForMaskedLM.from_pretrained('/data/home//raw_roberta/Roberta_Model/checkpoint-90000').roberta

from transformers import TrainingArguments
print(base_model.config)
```

Running the code above on my local Linux machine takes only about 2 minutes. The log:

```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'RobertaTokenizer'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'RobertaTokenizerFast'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
mapping dataset
finished mapped_dataset
RobertaConfig {
  "_name_or_path": "/data/home//raw_roberta/Roberta_Model/checkpoint-90000",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.33.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
```

But when I submit the code to Slurm, it runs for 4 hours and produces only these logs:

```
/data/home//anaconda3/envs/py38v1/lib/python3.8/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.24.4)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'RobertaTokenizer'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'RobertaTokenizerFast'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
slurmstepd: error: *** JOB xxxxx ON compute-9-0 CANCELLED AT 2024-02-05T08:35:30 DUE TO TIME LIMIT ***
```
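Note that even the "mapping dataset" print never shows up in the Slurm log. Since Slurm redirects stdout to a file, Python block-buffers print() output, so the missing line does not by itself prove that dataset.map() never started; the print may simply never have been flushed before the job was killed. A minimal sketch for ruling out buffering, assuming standard CPython behavior (alternatively, launch the script with `python -u`):

```python
import sys

# Under Slurm, stdout goes to a file, so Python block-buffers it and
# prints may never reach the log before the job is killed.
print("mapping dataset", flush=True)         # flush this one line immediately
sys.stdout.reconfigure(line_buffering=True)  # or flush after every print from here on
```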

This problem really confuses me. Does anyone know how to solve it? Thanks!

python linux nlp slurm
1 Answer

0 votes

Check the python, scipy, and numpy versions on both your local workstation and on the cluster's compute nodes. If you can create conda environments on the cluster, create a conda environment containing only python plus the libraries you actually need. Verify first that it works on your local workstation, then replicate it on the HPC; a quick way to compare the two environments is sketched below.
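The UserWarning at the top of the cluster log already shows one mismatch: SciPy compiled against NumPy >=1.16.5,<1.23.0 but running with 1.24.4. A minimal sketch for dumping the relevant versions, to be run once locally and once inside a Slurm job and then diffed (the module list here is an assumption based on the imports in the question):

```python
# Sketch: print the versions that matter for this job; run it on the
# workstation and on a compute node, then compare the two outputs.
import sys
import numpy
import scipy
import datasets
import transformers

print("python      :", sys.version.split()[0])
print("numpy       :", numpy.__version__)   # cluster log shows 1.24.4
print("scipy       :", scipy.__version__)   # wants numpy >=1.16.5,<1.23.0
print("datasets    :", datasets.__version__)
print("transformers:", transformers.__version__)
```

If the numbers differ, rebuilding the cluster environment so it matches the working local one is the first thing to try.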
