Korean WSD error, how can I fix this?


I am trying to build a small NLP program, but I am not familiar with BERT, tokenizers, checkpoints, or the other concepts involved in WSD (Word Sense Disambiguation). This question is about a Korean-language Python package, so I am not expecting too much; I would just like to know how to fix this problem, for example which file I should change, what a checkpoint or DistilBertModel is, which line number needs fixing, and so on. I would appreciate anything you can write for me. The error is below.

Some weights of the model checkpoint at monologg/distilkobert were not used when initializing DistilBertModel: ['vocab_projector.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_transform.bias', 'vocab_transform.weight', 'vocab_projector.weight']
- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'KoBertTokenizer'.
Traceback (most recent call last):
  File "C:\python nature word\WSD 다의어구분 국어사전 용량 너무큼\main.py", line 85, in <module>
    model = model.to('cuda')
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 989, in to
    return self._apply(convert)
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
    param_applied = fn(param)
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\hodol\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
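
For reference, the first block of messages consists of warnings rather than errors: 'vocab_projector', 'vocab_layer_norm' and 'vocab_transform' belong to the masked-language-modeling head of the checkpoint, which a bare DistilBertModel does not use, and the BertTokenizer / KoBertTokenizer notice only flags a class mismatch. The model and tokenizer named in these messages are typically loaded along these lines (a minimal sketch; the tokenization_kobert import path is an assumption based on the monologg/KoBERT-Transformers project and may be laid out differently in the actual code):

    # Sketch: loading DistilKoBERT through the Hugging Face transformers API.
    # tokenization_kobert is assumed to be a local copy of the file from the
    # monologg/KoBERT-Transformers repository that defines KoBertTokenizer.
    from transformers import DistilBertModel
    from tokenization_kobert import KoBertTokenizer

    model = DistilBertModel.from_pretrained('monologg/distilkobert')
    tokenizer = KoBertTokenizer.from_pretrained('monologg/distilkobert')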

I installed the latest versions of CUDA and torch, uninstalled and reinstalled them, and even removed Python and reinstalled it. I do not have a graphics card, but I installed CUDA 11.7. So in the end I think this is not a CUDA problem but something in the code shown in the traceback. I also looked for a "monologg/distilkobert" folder: there is nothing like that in the folder that contains the Python file, and Googling it did not turn up anything either.
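
The actual crash is the AssertionError on the last line: "Torch not compiled with CUDA enabled" means the installed torch build has no CUDA support (installing the CUDA 11.7 toolkit by itself does not change that, and without an NVIDIA GPU there is nothing for it to drive), so model.to('cuda') at line 85 of main.py fails. A minimal sketch of the usual workaround, assuming that line is the model = model.to('cuda') shown in the traceback, is to fall back to the CPU when CUDA is unavailable:

    import torch
    from transformers import DistilBertModel

    model = DistilBertModel.from_pretrained('monologg/distilkobert')

    # Use the GPU only when this torch build actually has CUDA support and a
    # device is present; otherwise run on the CPU (slower, but it runs).
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)

    # Input tensors must be moved to the same device before calling the model,
    # e.g. inputs = {k: v.to(device) for k, v in encoded.items()}

As for the missing "monologg/distilkobert" folder: from_pretrained downloads the checkpoint into the Hugging Face cache directory in the user profile, not into the folder next to main.py, so it is expected that nothing shows up there.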

I have no option left other than asking here. If anyone can help me, I will be very grateful.

nlp cuda torch checkpoint wsd